CN112001244A - Computer-aided iris comparison method and device - Google Patents


Info

Publication number
CN112001244A
CN112001244A
Authority
CN
China
Prior art keywords
iris
image
iris image
normalization
positioning
Prior art date
Legal status
Granted
Application number
CN202010694591.2A
Other languages
Chinese (zh)
Other versions
CN112001244B (en)
Inventor
陈子龙
马力
王子政
苗迪
胡文锋
邱显超
秦旗
刘京
刘寰
王玥
苗振民
Current Assignee
Beijing Irisking Science & Technology Co ltd
Institute of Forensic Science Ministry of Public Security PRC
Original Assignee
Beijing Irisking Science & Technology Co ltd
Institute of Forensic Science Ministry of Public Security PRC
Priority date
Filing date
Publication date
Application filed by Beijing Irisking Science & Technology Co ltd and Institute of Forensic Science Ministry of Public Security PRC
Priority to CN202010694591.2A
Publication of CN112001244A
Application granted
Publication of CN112001244B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Abstract

The invention provides a computer-aided iris comparison method and device. The method comprises: acquiring a first iris image and a second iris image; rotating the iris in the first iris image and the iris in the second iris image to the same angle; performing iris positioning and normalization on the rotated first and second iris images, the normalization comprising one or more of iris radius normalization, pupil radius normalization and rectangular expansion normalization; calculating attribute information of the iris texture features at corresponding positions in the normalized first and second iris images; and outputting the attribute information for display so as to obtain a texture consistency comparison result for the corresponding regions of the two iris images. If the iris texture features identified as consistent suffice to establish identity, the first iris image and the second iris image are confirmed to originate from the same iris. The scheme fills a gap in the field of manual iris comparison.

Description

Computer-aided iris comparison method and device
Technical Field
The invention relates to the technical field of iris recognition, in particular to a computer-aided iris comparison method and device.
Background
Iris recognition is regarded as a biometric recognition method with extremely high security because of the iris's high stability, uniqueness and resistance to counterfeiting, and it is increasingly applied in scenarios that demand high recognition accuracy. Current iris recognition is computer iris recognition: the computer automatically collects an iris image of the user, extracts iris features, and compares them with iris feature templates in a database to complete identity recognition.
However, current computer iris recognition cannot process all iris images completely and accurately. For example, mydriasis or an irregular iris in some images causes recognition to fail; likewise, when iris recognition is used to confirm the identity of a cadaver, distortion of the cadaver's iris causes authentication to fail. In addition, in judicial (forensic) identification, biometric evidence must be examined manually by an expert who issues the identification conclusion, so a computer-only recognition result cannot serve as the basis of a judicial identification; in this respect the iris identification field is blank.
Disclosure of Invention
In view of the above, the present invention provides a computer-aided iris comparison method and device, so as to enable manual iris comparison, fill the gap in that field, improve the success rate of iris recognition, and meet the requirements of iris-based forensic identification.
In order to achieve the purpose, the invention is realized by adopting the following scheme:
according to an aspect of the embodiments of the present invention, there is provided a computer-aided iris comparison method, including:
acquiring a first iris image and a second iris image to be compared;
rotating the iris in the first iris image and the iris in the second iris image to the same angle;
performing iris positioning on the rotated first iris image and the rotated second iris image;
normalizing the first iris image after iris positioning and the second iris image after iris positioning; the normalization operation comprises one or more of iris radius normalization, pupil radius normalization and rectangle expansion normalization;
calculating the attribute information of the iris texture features of the corresponding positions in the first iris image after the normalization operation and the second iris image after the normalization operation;
outputting attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of textures of corresponding areas in the first iris image and the second iris image;
and confirming that the irises in the first iris image and the second iris image come from the same iris when the number of corresponding-position iris texture features judged consistent in the consistency comparison result is greater than or equal to a set number.
In some embodiments, before rotating the iris in the first iris image and the iris in the second iris image to the same angle, the method further comprises: and receiving a judgment result for confirming that the first iris image and the second iris image both accord with the quality requirement of the iris image data.
In some embodiments, rotating the iris in the first iris image and the iris in the second iris image to the same angle comprises: obtaining the straight line through the inner and outer canthi of the eye containing the iris in the first iris image and the straight line through the inner and outer canthi of the eye containing the iris in the second iris image; and rotating the iris images so that the directions of the two straight lines coincide.
In some embodiments, obtaining the first iris image and the second iris image to be compared comprises: acquiring and displaying the first iris image and the second iris image to be compared. Obtaining the two canthus straight lines then comprises: receiving inner-canthus and outer-canthus click commands for the iris in the first iris image, and generating the corresponding canthus straight line from those clicks; and receiving inner-canthus and outer-canthus click commands for the iris in the second iris image, and generating the corresponding canthus straight line from those clicks.
In some embodiments, performing iris positioning on the rotated first iris image and the rotated second iris image comprises: performing iris positioning on the rotated first and second iris images using an iris positioning neural network model to obtain the iris boundary position in each rotated image. The iris positioning neural network model comprises a deep convolutional network sharing layer, a key point regression network layer, an iris segmentation network layer and an output layer; the output of the deep convolutional network sharing layer is connected to the inputs of the key point regression network layer and the iris segmentation network layer, and the outputs of the iris segmentation network layer and the key point regression network layer are both connected to the input of the output layer. The deep convolutional network sharing layer converts the input iris image into a shared feature map; the key point regression network layer extracts from the shared feature map the position information of key points of the iris region in the iris image; the iris segmentation network layer converts the shared feature map into a segmentation image whose pixels correspond one to one with the pixels of the shared feature map, the value of each pixel in the segmentation image identifying whether the corresponding pixel belongs to the iris region in the iris image; and the output layer computes an initial iris boundary in the iris image from the segmentation image and the input iris image, and combines the initial iris boundary with the position information of the key points of the iris region to obtain the iris boundary position in the iris image.
In some embodiments, the computer-aided iris comparison method further comprises: training based on a total loss function to obtain the iris positioning neural network model;
wherein the overall loss function is represented as:
Loss = λ_reg × Loss_reg + λ_cls × Loss_cls

Loss_reg = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} [ (x_ij − x̂_ij)² + (y_ij − ŷ_ij)² ]

Loss_cls = (1/N) Σ_{n=1}^{N} (1/(H×W)) Σ_{i'=1}^{H} Σ_{j'=1}^{W} | G_n(i', j') − M_n(i', j') |

wherein Loss denotes the overall loss function; Loss_reg denotes the key point regression loss function corresponding to the key point regression network layer; Loss_cls denotes the average segmentation error of the iris segmentation corresponding to the iris segmentation network layer; λ_reg and λ_cls denote the weights of the key point regression loss function and of the average segmentation error, respectively; N denotes the number of training samples and M the number of key points in an iris image; i and j denote the serial numbers of the training sample and of the key point, i and j being positive integers with 1 ≤ i ≤ N and 1 ≤ j ≤ M; x_ij and y_ij denote the row and column coordinates of a predicted key point, and x̂_ij and ŷ_ij the row and column coordinates of the corresponding labeled key point; i' and j' denote the row and column serial numbers of a pixel; H and W denote the height and width of the iris image; n denotes the serial number of a sample, n being a positive integer with 1 ≤ n ≤ N; G_n(i', j') denotes the pixel value indicating whether the pixel in row i', column j' of the segmentation image belongs to the iris region; and M_n(i', j') denotes the prediction result of whether the pixel in row i', column j' belongs to the iris region.
In some embodiments, performing iris localization on the rotated first iris image and the rotated second iris image further comprises: outputting and displaying the iris boundary position in the first iris image after rotation and the iris boundary position in the second iris image after rotation to acquire an instruction for manually adjusting key points of an iris region and/or an instruction for adjusting the iris boundary; and adjusting the iris boundary position of the corresponding iris image according to the instruction for manually adjusting key points of the iris region and/or the instruction for adjusting the iris boundary.
In some embodiments, the normalization operation includes iris radius normalization and pupil radius normalization; normalizing the first iris image after iris positioning and the second iris image after iris positioning, comprising: and performing scaling operation and affine transformation on the first iris image after iris positioning and the second iris image after iris positioning to realize iris radius normalization and pupil radius normalization on the first iris image after iris positioning and the second iris image after iris positioning.
In some embodiments, the iris radius normalization and the pupil radius normalization of the first iris image after iris positioning and the second iris image after iris positioning are performed by performing a scaling operation and an affine transformation on the first iris image after iris positioning and the second iris image after iris positioning, including: scaling the first iris image after iris positioning and the second iris image after iris positioning to enable the iris in the first iris image after iris positioning and the iris in the second iris image after iris positioning to have the same iris radius, so as to realize iris radius normalization of the first iris image after iris positioning and the second iris image after iris positioning; by performing affine transformation on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, the iris in the first iris image subjected to iris radius normalization and the iris in the second iris image subjected to iris radius normalization have the same pupil radius, so that the pupil radius normalization is performed on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, and the iris radius normalization and the pupil radius normalization are performed on the first iris image subjected to iris positioning and the second iris image subjected to iris positioning.
In some embodiments, the normalization operations further comprise rectangular expansion normalization; normalizing the first iris image after iris positioning and the second iris image after iris positioning, further comprising: respectively carrying out rectangular expansion normalization on the irises in the first iris image and the second iris image after the irises rotate to the same angle; wherein the first and second iris images after the irises thereof are rotated to the same angle include: the first iris image after iris positioning and the second iris image after iris positioning, the first iris image after iris radius normalization and the second iris image after iris radius normalization, or the first iris image after pupil radius normalization and the second iris image after pupil radius normalization.
In some embodiments, the first iris image after normalization comprises: one or more of the first iris image subjected to iris radius normalization, the first iris image subjected to pupil radius normalization, and the first iris image subjected to rectangle expansion normalization; the second iris image after the normalization operation comprises: one or more of the second iris image after iris radius normalization, the second iris image after pupil radius normalization, and the second iris image after rectangular expansion normalization.
In some embodiments, calculating the attribute information of the iris texture feature at the corresponding position in the first iris image after the normalization and the second iris image after the normalization comprises: receiving a marking instruction for an iris texture feature in an image selected from the first iris image subjected to the normalization operation and the second iris image subjected to the normalization operation; acquiring the iris texture characteristics of corresponding positions in the first iris image after normalization and the rest images in the second iris image after normalization according to the mark position information corresponding to the mark instruction of the iris texture characteristics in the selected image; and calculating the attribute information of the iris texture characteristics corresponding to the mark position information in each image of the first iris image after the normalization operation and the second iris image after the normalization operation.
In some embodiments, the comparison result of the consistency of the textures of the corresponding regions in the first iris image and the second iris image is a comprehensive result of the comparison result of the consistency of the iris texture features of the corresponding positions in each pair of images obtained by one-to-one combination of the first iris image after the normalization operation and the second iris image after the normalization operation; and the consistency comparison result of the iris texture features of the corresponding positions in the pair of combined images is obtained by integrating the classification comparison result of the attribute information of the iris texture features and the attribute information of the iris texture features.
In some embodiments, the attribute information of the iris texture feature includes iris texture feature position information and iris texture feature image information.
In some embodiments, the iris texture feature location information includes: one or more of the center, the gravity center, the average gray scale of the iris texture feature, the average gray scale after image gray scale normalization, the center under polar coordinates with the pupil center as an origin, the gravity center under polar coordinates with the pupil center as the origin, the center under polar coordinates with the iris center as the origin, and the gravity center under polar coordinates with the iris center as the origin; the iris texture feature image information includes: and the average gray scale of the iris texture features and/or the average gray scale after image gray scale normalization.
According to another aspect of the embodiments of the present invention, there is provided an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method according to any of the above embodiments when executing the program.
According to a further aspect of embodiments of the present invention, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, performs the steps of the method of any of the above embodiments.
The computer-aided iris comparison method, electronic device and computer-readable storage medium of the embodiments of the invention enable manual iris comparison, fill the gap in that field, improve the success rate of iris identification, meet the requirements of iris-based forensic identification, lay a technical foundation for applying the iris in the fields of public security and judicial expertise, and provide an effective methodology.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort. In the drawings:
FIG. 1 is a schematic flow chart of a computer-aided iris comparison method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a computer-aided iris comparison method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating iris radius normalization of an iris image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a process of performing rectangular expansion normalization on an iris image according to an embodiment of the present invention;
fig. 5 is a flowchart illustrating an iris positioning method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
At present, iris identification methods are all computer iris recognition and cannot incorporate human experience-based judgment. Therefore, while computer iris recognition cannot process iris images with one hundred percent accuracy, there is no way to use human judgment to reduce the computer's recognition errors. Moreover, the practice of comparing irises manually and issuing an identity identification conclusion is a blank field, so the field of iris-based forensic identification has no established method to rely on.
In order to solve the above problems, embodiments of the present invention provide a computer-aided iris comparison method, so as to improve the success rate of iris identification and/or meet the requirements of iris identification, which is helpful for promoting the development of the field of iris identification (e.g., as a judicial identification basis), and provide a relevant technical basis and method basis for iris identification work.
Fig. 1 is a flowchart illustrating a computer-aided iris comparison method according to an embodiment of the present invention. As shown in fig. 1, the computer-aided iris comparison method of this embodiment includes the following steps S110 to S170.
Specific embodiments of steps S110 to S170 will be described in detail below.
Step S110: and acquiring a first iris image and a second iris image to be compared.
In step S110, the acquired iris image may be a human eye image; that is, the iris image may include not only the iris but also the parts around the iris, such as the eyelids. For simplicity, the comparison method of the present invention is described using two iris images as an example, but this does not exclude acquiring more than two iris images and comparing them pairwise. In practical applications, one of the first and second iris images may be a live captured image and the other an image from a database or other historical acquisition, or both may be database images or live captured images. In addition, the executing device may acquire an iris image by loading an image stored on, or transmitted to, the executing device.
In order to obtain a better comparison result and better satisfy the requirements of iris-based forensic identification, after the iris images to be compared are acquired and before the subsequent processing (steps S120 to S170), a human examiner may judge whether the two images satisfy the requirements of manual comparison. Only iris images that meet the comparison requirements are used for comparison.
For example, after the step S110 (before the step S120), the computer-aided iris comparison method shown in fig. 1 may further include the steps of: and receiving a judgment result for confirming that the first iris image and the second iris image both accord with the quality requirement of the iris image data.
The quality requirements for the iris image data may include the Chinese national standard "Information technology - Biometric sample quality - Part 6: Iris image data" and the Chinese public safety industry standard "Technical requirements for security iris recognition application images". A human examiner can judge against these standards whether the first and second iris images meet the manual comparison requirements and enter the judgment result; if they do, the executing device continues with the subsequent steps, and if not, a prompt can be given.
Step S120: rotating the iris in the first iris image and the iris in the second iris image to the same angle.
When a human eye image is captured, the shooting angle or the face pose may cause the irises in the two images to be compared to be offset from each other in the circumferential direction; for example, the lines connecting the inner and outer canthi of the respective eyes may point in different directions, such as being tilted differently from the horizontal axis. This condition may be called angular inconsistency of the irises. Step S120 eliminates the angular inconsistency between the iris in the first iris image and the iris in the second iris image, thereby improving the accuracy of the comparison result.
Specifically, in step S120, the current angle of the iris in the iris image may be determined by the eye angle, and then the rotation adjustment is performed. The current angle condition can be determined according to the inner and outer canthus of the eye where one iris is located in the same iris image. Certainly, it is not excluded that when the iris image includes two eyes, the current angle condition of the iris of the two eyes is determined by using the canthus of the two eyes, as long as the determination modes of the current angle conditions of the two irises to be compared are the same.
For example, the step S120 of rotating the iris in the first iris image and the iris in the second iris image to the same angle may specifically include the steps of: s121, acquiring a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located and a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located; and S122, rotating the iris image to enable the direction of a straight line where an inner and outer canthus connecting line corresponding to the iris in the first iris image is located to be consistent with the direction of a straight line where an inner and outer canthus connecting line corresponding to the iris in the second iris image is located.
Step S121 may be executed by the executing device, which, for example, automatically identifies the positions of the inner and outer canthi corresponding to an iris and then generates the line connecting them to determine the straight line on which it lies. Of course, in some cases the canthi in an iris image are difficult to recognize automatically, for example when they are incomplete; to let a human participate in locating the canthi and determine the current angle of the iris more accurately, steps S121 and S122 may also be completed manually.
For example, in order to facilitate manual viewing of the iris image, the step S110, namely, acquiring the first iris image and the second iris image to be compared, may specifically include the steps of: and acquiring and displaying a first iris image and a second iris image to be compared. Further, in the step S121, that is, obtaining a straight line where a connection line between the inner canthus and the outer canthus corresponding to the iris in the first iris image is located and a straight line where a connection line between the inner canthus and the outer canthus corresponding to the iris in the second iris image is located, more specifically, the method may include the steps of: s1211, receiving an inner canthus clicking instruction and an outer canthus clicking instruction aiming at the iris in the first iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located according to the inner canthus clicking instruction and the outer canthus clicking instruction aiming at the iris in the first iris image; and S1212, receiving an inner canthus click command and an outer canthus click command for the iris in the second iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located according to the inner canthus click command and the outer canthus click command for the iris in the second iris image.
In step S1211, for example, the inner canthus and the outer canthus may be selected by clicking, and the connecting line generated between them. In step S122, the canthus lines corresponding to the irises in the two images may be manually adjusted to be consistent, for example both aligned with the horizontal direction; other directions, such as the vertical direction, may equally be used. The canthus lines can also be adjusted automatically to a standard direction, as in the sketch below.
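As an illustration, the rotation of step S122 can be realized as a standard image rotation about the canthus line. The following is a minimal sketch using OpenCV; it assumes the canthus coordinates come from the operator's click commands, and the function name and the horizontal target direction are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def rotate_iris_to_angle(image, inner_canthus, outer_canthus, target_angle_deg=0.0):
    """Rotate an eye image so the inner-outer canthus line lies at target_angle_deg.

    inner_canthus / outer_canthus are (x, y) pixel coordinates, e.g. taken
    from the operator's click commands described above (hypothetical input).
    """
    (x1, y1), (x2, y2) = inner_canthus, outer_canthus
    # Current angle of the canthus line relative to the horizontal axis.
    current_angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    # Rotate about the midpoint of the canthus line.
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    rot = cv2.getRotationMatrix2D(center, current_angle - target_angle_deg, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rot, (w, h))

# Align both images to the horizontal axis, as in step S122:
# aligned1 = rotate_iris_to_angle(img1, canthus_in1, canthus_out1)
# aligned2 = rotate_iris_to_angle(img2, canthus_in2, canthus_out2)
```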
Step S130: iris positioning the rotated first iris image and the rotated second iris image.
In specific implementation, the two images can be respectively subjected to iris positioning through the neural network model obtained through training. The neural network model may be an existing neural network model or may be an improved neural network model.
For example, the step S130 of performing iris positioning on the rotated first iris image and the rotated second iris image may specifically include the steps of: S131, performing iris positioning on the rotated first iris image and the rotated second iris image respectively using an iris positioning neural network model, to obtain the iris boundary position in each rotated image. The iris positioning neural network model comprises a deep convolutional network sharing layer, a key point regression network layer, an iris segmentation network layer and an output layer; the output of the deep convolutional network sharing layer is connected to the inputs of the key point regression network layer and the iris segmentation network layer, and the outputs of the iris segmentation network layer and the key point regression network layer are both connected to the input of the output layer. The deep convolutional network sharing layer converts the input iris image into a shared feature map; the key point regression network layer extracts from the shared feature map the position information of key points of the iris region in the iris image; the iris segmentation network layer converts the shared feature map into a segmentation image whose pixels correspond one to one with the pixels of the shared feature map, the value of each pixel in the segmentation image identifying whether the corresponding pixel belongs to the iris region in the iris image; and the output layer computes an initial iris boundary in the iris image from the segmentation image and the input iris image, and combines the initial iris boundary with the position information of the key points of the iris region to obtain the iris boundary position in the iris image.
In step S131, the rotated first iris image may be input to the iris positioning neural network model to obtain an iris boundary position in the first iris image, and the rotated second iris image may be input to the iris positioning neural network model to obtain an iris boundary position in the second iris image. The iris boundary position refers to a position of a boundary of an iris in an iris image, and may include an inner circle position of the iris, an outer circle position of the iris, and a boundary position of a part of an eyelid covering the iris when the iris is covered by the eyelid. Wherein, the position of the eyelid covering the iris region (the boundary position of the part of the eyelid covering the iris) can be obtained by polynomial fitting according to the position of the key point of the iris region.
The feature map (channels) obtained by inputting the iris image into the deep convolutional network sharing layer is called a shared feature map because it serves as the input of both the key point regression network layer and the iris segmentation network layer. The key point regression network layer may be a regression network; the key points are key points of the iris region in the iris image, and their position information may be the row and column coordinates of the key points. The iris segmentation network layer may be an iris segmentation network: the shared feature map is input into it, undergoes multi-scale transformation, and a segmentation image whose pixels correspond one to one with the pixels of the iris image is output. Each pixel value in the segmentation image identifies whether the pixel belongs to the iris region; for example, a pixel value of 1 indicates that the pixel belongs to the iris region in the iris image, and a pixel value of 0 indicates that it does not. The output layer performs further calculation on the input iris image, the output of the iris segmentation network layer and the output of the key point regression network layer to obtain the desired result. For example, it may apply integro-differential operations to the iris boundary based on the segmentation image and the iris image, output a fine positioning result of the iris boundary (the initial iris boundary), and then combine the key points of the iris region with the initial iris boundary under a given policy, such as adjusting the initial iris boundary according to the key point positions, to output the final iris boundary positioning result.
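A minimal PyTorch sketch of the described two-branch architecture follows. The layer sizes, channel counts and the number of key points are assumptions made for illustration; the patent does not specify them.

```python
import torch
import torch.nn as nn

class IrisLocalizationNet(nn.Module):
    """Sketch of the described localization network: a shared deep-convolution
    backbone feeding a key point regression head and a per-pixel iris
    segmentation head."""

    def __init__(self, num_keypoints=32):  # 32 key points: an assumed count
        super().__init__()
        # Deep convolutional network sharing layer: iris image -> shared feature map.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Key point regression network layer: shared features -> (row, col) per key point.
        self.keypoint_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(),
            nn.Linear(256, num_keypoints * 2),
        )
        # Iris segmentation network layer: shared features -> per-pixel iris mask
        # (input height/width assumed divisible by 8 so the upsampled mask matches).
        self.segmentation_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=8, mode='bilinear', align_corners=False),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        shared = self.backbone(x)
        keypoints = self.keypoint_head(shared).view(x.size(0), -1, 2)
        mask = self.segmentation_head(shared)
        return keypoints, mask
```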
Further, in some embodiments, a process of training the model may be included. For example, the step S130 of performing iris localization on the rotated first iris image and the rotated second iris image may further include the steps of: and S132, training based on the total loss function to obtain the iris positioning neural network model.
Wherein the overall loss function is represented as:
Loss = λ_reg × Loss_reg + λ_cls × Loss_cls

Loss_reg = (1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} [ (x_ij − x̂_ij)² + (y_ij − ŷ_ij)² ]

Loss_cls = (1/N) Σ_{n=1}^{N} (1/(H×W)) Σ_{i'=1}^{H} Σ_{j'=1}^{W} | G_n(i', j') − M_n(i', j') |

wherein Loss denotes the overall loss function; Loss_reg denotes the key point regression loss function corresponding to the key point regression network layer; Loss_cls denotes the average segmentation error of the iris segmentation corresponding to the iris segmentation network layer; λ_reg and λ_cls denote the weights of the key point regression loss function and of the average segmentation error, respectively; N denotes the number of training samples and M the number of key points in an iris image; i and j denote the serial numbers of the training sample and of the key point, i and j being positive integers with 1 ≤ i ≤ N and 1 ≤ j ≤ M; x_ij and y_ij denote the row and column coordinates of a predicted key point, and x̂_ij and ŷ_ij the row and column coordinates of the corresponding labeled key point; i' and j' denote the row and column serial numbers of a pixel; H and W denote the height and width of the iris image; n denotes the serial number of a sample, n being a positive integer with 1 ≤ n ≤ N; G_n(i', j') denotes the pixel value indicating whether the pixel in row i', column j' of the segmentation image belongs to the iris region; and M_n(i', j') denotes the prediction result of whether the pixel in row i', column j' belongs to the iris region.
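The two loss terms translate directly into code. The sketch below assumes the squared-error and mean-absolute-error forms reconstructed above; function and parameter names are illustrative.

```python
import torch

def iris_localization_loss(pred_kp, gt_kp, pred_mask, gt_mask,
                           lambda_reg=1.0, lambda_cls=1.0):
    """Loss = lambda_reg * Loss_reg + lambda_cls * Loss_cls (see formulas above).

    pred_kp, gt_kp:     (N, M, 2) predicted / labeled key point coordinates.
    pred_mask, gt_mask: (N, H, W) predicted / ground-truth iris masks in [0, 1].
    """
    # Loss_reg: squared coordinate error summed over key points, averaged over samples.
    loss_reg = ((pred_kp - gt_kp) ** 2).sum(dim=(1, 2)).mean()
    # Loss_cls: segmentation error averaged over all pixels and samples.
    loss_cls = (pred_mask - gt_mask).abs().mean()
    return lambda_reg * loss_reg + lambda_cls * loss_cls
```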
More specifically, the step S132, namely, training to obtain the iris positioning neural network model based on the global loss function (model training method), may specifically include the steps of:
s1321, inputting the iris image in the training sample to a deep convolution network sharing layer to obtain a corresponding sharing feature map;
s1322, inputting the shared characteristic graph to a key point regression network layer to obtain the position information of the key points of the corresponding iris area; calculating values of a regression network loss function according to the position information of all key points of the iris image of each training sample;
s1323, inputting the shared characteristic graph into an iris segmentation network layer to obtain a segmentation image which corresponds to each pixel of the corresponding iris image one by one, wherein the pixel value of each pixel in the segmentation image is used for identifying whether the pixel belongs to the iris region of the corresponding iris image; for example, a pixel value of 1 indicates that the pixel belongs to the iris region, and a pixel value of 0 indicates that the pixel does not belong to the iris region; calculating the value of a segmentation network loss function according to pixel values in all segmentation images of the iris image of each training sample and the iris area correspondingly marked by the iris image of the corresponding training sample;
s1324, setting weights for the value of the regression network loss function and the value of the segmentation network loss function, returning to the network model comprising the deep convolution network sharing layer, the key point regression network layer and the iris segmentation network layer, and performing iterative training until the value of the regression network loss function and the value of the segmentation network loss function reach set requirements, such as the comprehensive loss function reaches a set value or reaches set training times, so as to obtain the trained network model.
S1325, connecting the trained network model to an output layer to obtain the iris positioning neural network model; the output layer performs operations including integro-differential operations on the iris boundary according to the iris image to be compared and its segmentation image to obtain an initial iris boundary, and obtains the iris boundary position from the position information of the key points of the iris region of the iris image to be compared and the corresponding initial iris boundary.
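Combining the two sketches above, one training iteration covering steps S1321 to S1324 might look as follows. Here train_loader and num_epochs are hypothetical stand-ins for the training data pipeline and the stopping criterion, which the patent leaves open.

```python
model = IrisLocalizationNet()                        # sketch from above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
num_epochs = 50                                      # assumed stopping criterion

for epoch in range(num_epochs):
    for images, gt_kp, gt_mask in train_loader:      # hypothetical DataLoader
        pred_kp, pred_mask = model(images)           # S1321-S1323: forward pass
        loss = iris_localization_loss(pred_kp, gt_kp,
                                      pred_mask.squeeze(1), gt_mask.float())
        optimizer.zero_grad()                        # S1324: weighted joint update
        loss.backward()
        optimizer.step()
```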
Further, in other embodiments, the step S130 of performing iris positioning on the rotated first iris image and the rotated second iris image may further include, in addition to the step S131: s133, outputting and displaying the iris boundary position in the first iris image after rotation and the iris boundary position in the second iris image after rotation to acquire an instruction for manually adjusting key points of an iris region and/or an instruction for adjusting the iris boundary; and S134, adjusting the iris boundary position of the corresponding iris image according to the instruction for manually adjusting the key points of the iris region and/or the instruction for adjusting the iris boundary.
In step S133, the iris boundary positions obtained in step S131 for the rotated first and second iris images are displayed for manual inspection. A human examiner can judge visually whether the iris boundary positions found in the iris images deviate and, if so, adjust them. For example, the automatically found key points in an iris image can be added, deleted or moved, a key point being moved by dragging it to a new position. The automatically found iris boundary can also be adjusted, for example by local deformation, overall reduction or overall enlargement of the boundary. In step S134, if an instruction to manually add a key point is received, a key point is added to the iris image at the selected position; if an instruction to delete key points is received, the selected key points are deleted; and if an instruction to select a key point and drag it to a position is received, the position of that automatically found key point is changed. Likewise, on receiving an instruction to reduce, enlarge or deform the iris boundary, the automatically found iris boundary is reduced, enlarged or deformed accordingly. In this way, human participation improves the accuracy of iris boundary positioning.
Step S140: normalizing the first iris image after iris positioning and the second iris image after iris positioning; the normalization operation comprises one or more of iris radius normalization, pupil radius normalization and rectangle expansion normalization.
The normalization operation may process an iris image with any one of the modes alone, with two of them, or with all three. The images obtained through the normalization operation may therefore include images produced by these different processing modes. The first iris image and the second iris image may be processed in exactly the same way.
Iris radius normalization and pupil radius normalization together can produce circular normalized iris images (the radii of the inner circles and of the outer circles of the two irises are correspondingly equal); the two normalizations may be performed in either order, that is, starting from either the inner or the outer circle of the iris. Rectangular expansion normalization unfolds the ring-shaped iris into a rectangle, which facilitates computer processing. An image processed by rectangular expansion normalization may additionally undergo the other operations, such as iris radius normalization and/or pupil radius normalization, as long as the images have been rotated consistently.
In some embodiments, the normalization operation in step S140 includes iris radius normalization and pupil radius normalization, in which case, the step S140 of normalizing the first iris image after iris localization and the second iris image after iris localization may specifically include the steps of: s141, performing scaling operation and affine transformation on the first iris image after iris positioning and the second iris image after iris positioning to realize iris radius normalization and pupil radius normalization on the first iris image after iris positioning and the second iris image after iris positioning.
By the scaling operation on the iris image, one of the inner and outer circles of the iris can be made consistent between the two images, and by the affine transformation the other circle can be made consistent. In addition, the affine transformation can adjust the inner or outer circle of the iris according to the actual dilation or contraction of the iris.
Further, the above step S141, namely, performing the zoom operation and the affine transformation on the first iris image after the iris positioning and the second iris image after the iris positioning to realize the iris radius normalization and the pupil radius normalization on the first iris image after the iris positioning and the second iris image after the iris positioning, may further include the steps of: s1411, performing scaling operation on the first iris image after iris positioning and the second iris image after iris positioning to enable the iris in the first iris image after iris positioning and the iris in the second iris image after iris rotating positioning to have the same iris radius, so as to realize iris radius normalization on the first iris image after iris positioning and the second iris image after iris positioning; and S1412, performing affine transformation on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization to enable the iris in the first iris image subjected to iris radius normalization and the iris in the second iris image subjected to iris radius normalization to have the same pupil radius, so as to realize pupil radius normalization on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, and further realize iris radius normalization and pupil radius normalization on the first iris image subjected to iris positioning and the second iris image subjected to iris positioning.
In this embodiment, the iris images are first scaled in step S1411 so that the iris radii (the outer-circle radii of the irises) in the two images are equal; iris radius normalization can be achieved by scaling one of the iris images or by scaling both. Then, in step S1412, the iris images with normalized iris radii are affine transformed so that the pupil radii (the inner-circle radii of the irises) in the two images are equal; pupil radius normalization can likewise be achieved by transforming one of the iris images or both.
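The sketch below illustrates one way to realize steps S1411 and S1412. Step S1412 is implemented here as an affine map applied to the radial coordinate (keeping the outer circle fixed while moving the pupil circle); this is one interpretation of the affine transformation the patent describes, and the target radii are illustrative values.

```python
import cv2
import numpy as np

def normalize_radii(img, pupil_center, pupil_r, iris_r,
                    target_iris_r=120.0, target_pupil_r=40.0):
    """Iris radius normalization by uniform scaling (step S1411), then pupil
    radius normalization by an affine map on the radial coordinate (one
    possible realization of step S1412)."""
    # Step S1411: scale so the outer (iris) radius equals target_iris_r.
    s = target_iris_r / iris_r
    img = cv2.resize(img, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
    cx, cy = pupil_center[0] * s, pupil_center[1] * s
    pupil_r, iris_r = pupil_r * s, target_iris_r

    # Step S1412: affine map r -> a*r + b on the radius, sending the pupil
    # circle to target_pupil_r while keeping the outer iris circle fixed.
    a = (target_iris_r - target_pupil_r) / (iris_r - pupil_r)
    b = target_pupil_r - a * pupil_r
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    r_out = np.hypot(dx, dy)
    r_src = np.maximum((r_out - b) / a, 0.0)        # inverse map for cv2.remap
    scale = np.divide(r_src, r_out, out=np.ones_like(r_out), where=r_out > 0)
    map_x = (cx + dx * scale).astype(np.float32)
    map_y = (cy + dy * scale).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```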
In other embodiments, the normalization operation in step S140 may further include rectangular expansion normalization in addition to iris radius normalization and pupil radius normalization, in which case, step S140, namely, performing the normalization operation on the first iris image after iris localization and the second iris image after iris localization, may further include, in addition to step S141, the steps of: and S142, respectively carrying out rectangular expansion normalization on the irises in the first iris image and the second iris image after the irises rotate to the same angle. Wherein the first and second iris images after the irises thereof are rotated to the same angle may include: the first iris image after iris positioning and the second iris image after iris positioning, the first iris image after iris radius normalization and the second iris image after iris radius normalization, or the first iris image after pupil radius normalization and the second iris image after pupil radius normalization.
A rectangular normalized image can be generated by rectangular expansion normalization of the iris in an iris image. Specifically, for example, taking the center of the pupil or of the iris as the origin, start from the horizontal axis and rotate one full turn in a fixed direction (clockwise or counterclockwise), recording the image along the radial line of the iris region at each fixed angular step into the corresponding column of the normalized rectangular region; this completes the rectangular normalization of the iris region.
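This column-by-column sampling is straightforward to express with a remap. The sketch below follows the description above; the angular and radial resolutions are illustrative choices.

```python
import cv2
import numpy as np

def unwrap_to_rectangle(img, pupil_center, pupil_r, iris_r,
                        n_angles=360, n_radii=64):
    """Rectangular expansion normalization: sample the iris annulus along a
    radial line at each fixed angular step (one full turn starting from the
    horizontal axis) and write it into the corresponding column."""
    cx, cy = pupil_center
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, n_radii)
    # map_x[row, col] / map_y[row, col]: source pixel for each rectangle cell.
    map_x = (cx + np.outer(radii, np.cos(thetas))).astype(np.float32)
    map_y = (cy + np.outer(radii, np.sin(thetas))).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
```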
Further, the first iris image after the normalization operation may include: one or more of the first iris image subjected to iris radius normalization, the first iris image subjected to pupil radius normalization, and the first iris image subjected to rectangle expansion normalization; the second iris image after the normalization operation may include: one or more of the second iris image after iris radius normalization, the second iris image after pupil radius normalization, and the second iris image after rectangular expansion normalization.
In these embodiments, the iris images used for rectangular expansion normalization may be the first and second iris images that have been rotated so that the shooting angles are consistent; such images may not yet have undergone iris radius or pupil radius normalization, or may have undergone one or both of those operations. For example, the first and second iris images after step S120, after step S1411, or after step S1412 may all be used. The first and second iris images should undergo the same sequence of operations before each is subjected to rectangular expansion normalization.
Step S150: and calculating the attribute information of the iris texture features of the corresponding positions in the first iris image after the normalization operation and the second iris image after the normalization operation.
The first iris image after normalization may have one or more iris images, which form a group of images related to the first iris image, and may include one or more of the first iris image after the operation of step S1411, the first iris image after the operation of step S1412, and the first iris image after the operation of step S142, for example. Similarly, the second iris image after normalization may have one or more iris images, which form another set of images related to the second iris image, and may include, for example, one or more iris images of the second iris image after the operation of step S1411, the second iris image after the operation of step S1412, and the second iris image after the operation of step S142.
Any one of the images of the group of images with respect to the first iris image and any one of the images of the group of images with respect to the second iris image may be formed into a pair of images, and thus one or more pairs of images may be formed, for example, two iris images having undergone the same processing procedure as those of the two groups of images may be formed into a pair of images. Various attribute information of the iris texture features of the corresponding position area in each pair of images can be extracted. It is of course not excluded that the images forming the pair are subjected to different processing procedures, for example, one is a first iris image subjected to iris normalization and the other is a second iris image subjected to pupil normalization, because the execution device can record the correspondence of the positions in the iris images obtained by various operations.
The attribute information of the iris texture features may include iris texture feature position information and iris texture feature image information. Specifically, the iris texture feature location information may include: center, center of gravity, average gray scale of iris texture feature, average gray scale after image gray scale normalization, center under polar coordinates with pupil center as origin, center of gravity under polar coordinates with pupil center as origin, center under polar coordinates with iris center as origin, and center of gravity under polar coordinates with iris center as origin. The iris texture feature image information may include: and the average gray scale of the iris texture features and/or the average gray scale after image gray scale normalization. These attribute information may be obtained by the execution device. In addition, the iris texture feature image information may further include an iris texture feature (patch) image.
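As an illustration, the sketch below computes several of the listed attributes for one marked patch. The min-max gray normalization and the output field names are assumptions, since the patent does not fix the exact computations.

```python
import numpy as np

def patch_attributes(gray_img, patch_mask, pupil_center):
    """Attribute information for one marked texture patch.
    patch_mask is a boolean mask of the patch; pupil_center is (x, y)."""
    gray = gray_img.astype(np.float64)
    ys, xs = np.nonzero(patch_mask)
    # Bounding-box center and pixel centroid (center of gravity) of the patch.
    center = ((xs.min() + xs.max()) / 2.0, (ys.min() + ys.max()) / 2.0)
    centroid = (xs.mean(), ys.mean())
    mean_gray = gray[ys, xs].mean()
    # Mean gray after a simple min-max normalization of the whole image
    # (one plausible reading of "image gray scale normalization").
    norm = (gray - gray.min()) / max(gray.max() - gray.min(), 1.0)
    mean_gray_norm = norm[ys, xs].mean()
    # Centroid in polar coordinates with the pupil center as origin.
    dx, dy = centroid[0] - pupil_center[0], centroid[1] - pupil_center[1]
    polar_centroid = (np.hypot(dx, dy), np.degrees(np.arctan2(dy, dx)))
    return {"center": center, "centroid": centroid, "mean_gray": mean_gray,
            "mean_gray_norm": mean_gray_norm,
            "polar_centroid_pupil_origin": polar_centroid}
```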
In some embodiments, the step S150 of calculating the attribute information of the iris texture feature at the corresponding position in the first iris image after the normalization and the second iris image after the normalization may specifically include the steps of: s151, receiving a marking instruction for an iris texture feature in an image selected from the first iris image after normalization and the second iris image after normalization; s152, obtaining iris texture characteristics of corresponding positions in the first iris image after normalization and other images in the second iris image after normalization according to mark position information corresponding to the mark instruction of the iris texture characteristics in the selected image; and S153, calculating the attribute information of the iris texture characteristics corresponding to the mark position information in each image of the first iris image after the normalization operation and the second iris image after the normalization operation.
In step S151, clear iris texture features (patches) may be found manually in one iris image and marked, so that the executing device receives a marking instruction including the marked position information. In step S152, since more than one other iris image may be involved, the iris texture feature at the position corresponding to the marked one is found in all the remaining iris images. In step S153, the executing device can acquire the various attribute information corresponding to the same position in all the iris images. Repeating this process yields the attribute information of patches at multiple corresponding positions in each iris image.
Step S160: outputting the attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of the textures of the corresponding areas in the first iris image and the second iris image.
The attribute information of the iris texture features in each iris image can be displayed on the execution device, or output by the execution device to another device for display. After checking the various attribute information manually, an examiner can make judgments based on experience, so that the result of manual iris comparison is incorporated.
Based on the attribute information of the iris texture features, the consistency of the iris texture features can be judged manually. Specifically, when comparing each pair of images (for example, the pair consisting of the first and second iris images after iris radius normalization obtained in step S1411, the pair after iris radius normalization and pupil radius normalization obtained in step S1412, and the pair after rectangle expansion normalization obtained in step S142), the methods for judging whether two iris texture patches are the same texture patch may include, but are not limited to, one or more of the following: (1) judging whether the line connecting each patch's center or center of gravity with the pupil or iris center makes the same angle with the coordinate axes in the two images; (2) judging whether the gray levels of the two patches are similar; (3) judging whether the contrast of each patch against its surrounding non-patch area is similar; (4) judging whether the boundary shapes of the two patches are similar; (5) overlaying the two images according to a given rule, calculating the ratio of the number of pixels in the intersection of the two patches to the number of pixels in their union, and comparing this ratio with a preset threshold: if it exceeds the threshold the patches are judged similar, otherwise not similar; (6) judging whether any differences between the two patches have an explainable cause.
In step S160, the consistency comparison result for the textures of the corresponding areas in the first and second iris images may be a composite of the consistency comparison results for the iris texture features at corresponding positions in each image pair formed by pairing the normalized first iris image with the normalized second iris image one by one. The consistency comparison result for each pair of images is in turn obtained by synthesizing the itemized comparison results for the various attribute information of the iris texture features. For example, the group of images derived from the first iris image and the group derived from the second iris image may be paired to form three image pairs, and the comparison results of the three pairs combined to yield the final consistency comparison result for the iris texture features.
Step S170: confirming that the irises in the first iris image and the second iris image come from the same iris when, in the consistency comparison result, the number of corresponding-position iris texture features judged consistent is greater than or equal to a set number.
Iris texture features at multiple corresponding positions in the images can be compared sequentially or in parallel; the more corresponding-position features are found consistent, the higher the probability that the iris in the first iris image and the iris in the second iris image are the same. The inventors have found empirically that with, say, 5 or 6 or more matching iris texture patches the probability that the irises are identical is already very high, so the set number may be, for example, 4, 5, or 6.
Through operations such as rotation, positioning, normalization, and attribute-information extraction, the computer-aided iris comparison method of the above embodiments produces attribute information of iris texture features that can be judged manually, and then receives the judgment made manually from that information. Human judgment of iris texture features is thereby incorporated into the iris comparison result, which avoids comparison failures caused by insufficient computer accuracy and conveniently meets the needs of iris identification.
In addition, an embodiment of the present invention further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method according to any of the above embodiments.
Embodiments of the present invention further provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method according to any of the above embodiments.
In order that those skilled in the art will better understand the present invention, embodiments of the present invention will be described below with reference to specific examples.
At present there is no implementation scheme for manually comparing and identifying two iris images. To address this, in a specific embodiment, the proposed computer-aided manual iris comparison method judges whether two compared iris images belong to the same iris through pairwise manual comparison, based on computer-aided image generation and image attribute calculation for iris images captured by acquisition equipment.
Fig. 2 is a schematic flow chart of a computer-aided iris comparison method according to an embodiment of the present invention, and referring to fig. 2, the computer-aided iris comparison method according to the embodiment may include the following steps:
S1, judging whether the two iris images meet the requirements for manual comparison;
S2, marking the left and right canthi and the inner and outer boundaries of the iris in each image; the marking can be done manually or automatically by a computer;
S3, rotating the two images to the same orientation using the left and right canthi, for example so that the line connecting the canthi is parallel to the horizontal axis;
S4, scaling the two images so that the iris radii in the two images are the same, obtaining images A_1 and B_1;
S5, generating circularly normalized images: applying an affine transformation to the iris and pupil areas of images A_1 and B_1 such that the pupil radii and iris radii in the two transformed images are the same, obtaining images A_2 and B_2;
S6, generating rectangularly normalized images: starting from the positive x-axis direction at the pupil center, unfolding the iris areas of images A_1 and B_1 into rectangles along the tangential direction of the annular iris; the normalized images are denoted A_3 and B_3;
S7, manually marking any clearly visible iris texture feature in any one of the images A_1, A_2, A_3, B_1, B_2, and B_3;
S8, the computer automatically drawing, on the other, unmarked images, the positions corresponding to the iris texture feature marked in step S7;
S9, computing, with computer assistance, the position information and image information of the marked position in all of the images, including but not limited to the center, the center of gravity, the average gray level of the marked patch, the gray level after image gray-scale normalization, and the center in polar coordinates with the pupil or iris center as origin;
S10, synthesizing the position information and image information into comparison results for the 3 image pairs (image A_1 with image B_1, image A_2 with image B_2, and image A_3 with image B_3) and judging whether there is a significant unexplainable difference; if so, abandoning the current patch and repeating steps S7, S8, and S9; otherwise, judging that the currently marked patches belong to the same texture patch;
S11, repeating the marking of steps S7, S8, S9, and S10 until the number of identical texture patches is no less than N (a positive integer), or no further clearly visible iris texture features can be found in the current images;
S12, if the number of identical texture patches after manual comparison is no less than N, determining that the current two iris images belong to the same iris, so that an identity judgment can be given; otherwise, it cannot be concluded that the current two iris images come from the same iris.
Among these steps, S1, S2, S3, S7, S10, S11, and S12 can be performed manually, while steps S4, S5, S6, S8, and S9 can be performed with computer assistance.
In step S1, whether the two images meet the comparison requirements can be judged against the national standard of the People's Republic of China "Information technology - Biometric sample quality - Part 6: Iris image data" and the public safety industry standard of the People's Republic of China "Technical requirements for images in security iris recognition applications". That is, the images may need to satisfy the requirements of Section 4 of the national standard "Information technology - Biometric sample quality - Part 6: Iris image data".
In step S2, the canthi may be marked manually or automatically by a computer. Automatic marking of the canthi can be realized with an existing key-point detection model. Fig. 5 is a schematic flow chart of an iris positioning method according to an embodiment of the present invention; referring to Fig. 5, the method for automatic calibration by computer may include the following steps:
S2.1, inputting the iris image into a deep convolutional network sharing layer and outputting a shared feature map; the shared feature map is used for the subsequent key-point regression prediction and iris region segmentation;
S2.2, inputting the shared feature map into a key-point regression network and outputting the required key-point coordinate pairs (x_k, y_k), where x_k and y_k are the row and column coordinates of the k-th key point;
S2.3, inputting the shared feature map into an iris segmentation network, performing multi-scale transformation, and outputting a segmentation image whose pixels correspond one-to-one to those of the iris image, each pixel value being 1 or 0 to indicate whether that pixel belongs to the iris region of the iris image;
S2.4, performing integro-differential and similar operations on the iris boundary based on the iris segmentation image and the iris image, and outputting a fine positioning result for the iris boundary;
and S2.5, synthesizing the outputs of steps S2.2 and S2.4 according to a set strategy and outputting the final iris boundary positioning result; a sketch of the network of steps S2.1 to S2.3 follows this list.
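The following PyTorch sketch shows one plausible shape for such a network: a shared convolutional backbone (S2.1) feeding a key-point regression head (S2.2) and an upsampled segmentation head (S2.3). The layer widths, the downsampling factor, and the number of key points are illustrative assumptions; the patent does not fix a concrete architecture.

```python
import torch
import torch.nn as nn

class IrisLocalizationNet(nn.Module):
    """Shared backbone with key-point regression and iris segmentation heads."""

    def __init__(self, num_keypoints=8):
        super().__init__()
        # S2.1: deep convolutional sharing layer -> shared feature map (x8 downsampled)
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # S2.2: key-point regression head -> (x_k, y_k) coordinate pairs
        self.keypoint_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_keypoints * 2),
        )
        # S2.3: segmentation head, upsampled back to the input resolution
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Sigmoid(),  # per-pixel probability of belonging to the iris region
        )

    def forward(self, x):                     # x: (B, 1, H, W), H and W divisible by 8
        feat = self.backbone(x)               # shared feature map
        keypoints = self.keypoint_head(feat)  # (B, 2 * num_keypoints)
        seg = self.seg_head(feat)             # (B, 1, H, W)
        return keypoints, seg
```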
During training of the neural network model, the iris images in the training samples are labeled in advance with key-point marks and iris region marks. To handle the multi-task problem, the loss function used in training can include two parts, corresponding to the key-point regression task and the iris segmentation task respectively. The overall loss function Loss may be expressed as:

Loss = λ_reg × Loss_reg + λ_cls × Loss_cls

where λ_reg and λ_cls are the weights of the key-point regression loss and the iris segmentation loss respectively, and Loss_reg and Loss_cls are the key-point regression loss and the average segmentation error of iris segmentation.
The key-point regression loss Loss_reg can be expressed as:

Loss_reg = (1 / (N·M)) × Σ_{i=1..N} Σ_{j=1..M} [ (x_ij − x̂_ij)² + (y_ij − ŷ_ij)² ]

where N is the number of training samples in a batch, M is the number of key points on an image, x_ij and y_ij are the row and column coordinates of the j-th predicted key point in the i-th sample, and x̂_ij and ŷ_ij are the row and column coordinates of the corresponding marked key point.
The average segmentation error Loss_cls of iris segmentation is:

Loss_cls = (1 / (N·H·W)) × Σ_{n=1..N} Σ_{i'=1..H} Σ_{j'=1..W} | G(i', j') − M(i', j') |

where N is the number of samples in a batch, H and W are the height and width of the iris image, G(i', j') is the label of whether the pixel at row i', column j' belongs to the iris region, and M(i', j') is the model's prediction of whether that pixel belongs to the iris region.
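A compact rendering of this two-part training loss, assuming (consistent with the formulas above) a mean-squared key-point error and a mean absolute per-pixel segmentation error; the function name and default weights are illustrative:

```python
import torch.nn.functional as F

def total_loss(pred_kp, gt_kp, pred_seg, gt_seg, lambda_reg=1.0, lambda_cls=1.0):
    """Loss = lambda_reg * Loss_reg + lambda_cls * Loss_cls.

    pred_kp, gt_kp   : (N, M, 2) predicted / labeled key-point coordinates
    pred_seg, gt_seg : (N, 1, H, W) predicted probabilities / 0-1 labels
    """
    loss_reg = F.mse_loss(pred_kp, gt_kp)         # mean squared key-point error
    loss_cls = (pred_seg - gt_seg).abs().mean()   # mean per-pixel segmentation error
    return lambda_reg * loss_reg + lambda_cls * loss_cls
```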
After the model is trained, the eyelid position can be drawn by polynomial fitting over the key-point results and fused with the fine iris positioning result to synthesize the final iris positioning output; a small sketch of the eyelid fitting follows.
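This numpy sketch assumes the regression head supplies eyelid key points as (column, row) pairs; the function name and polynomial degree are illustrative:

```python
import numpy as np

def eyelid_curve(points, degree=3):
    """Fit a polynomial through eyelid key points and sample it densely.

    points : (K, 2) array of (col, row) eyelid key points
    Returns sampled (col, row) pairs tracing the fitted eyelid boundary.
    """
    cols, rows = points[:, 0], points[:, 1]
    deg = min(degree, len(points) - 1)        # guard against too few points
    coeffs = np.polyfit(cols, rows, deg=deg)
    xs = np.linspace(cols.min(), cols.max(), 200)
    return np.column_stack([xs, np.polyval(coeffs, xs)])
```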
Further, the positioning result can be adjusted through human-computer interaction: after automatic calibration by the computer, the positioning result can be modified interactively, for example by re-marking the iris center manually or dragging key points to correct their positions, so that an accurate iris positioning result is obtained through human-computer cooperation.
In step S3, the rotation may be performed manually, or the two images may be rotated automatically after computation by the computer. If automatic rotation is used, the rotation of the two images can then be fine-tuned manually through human-computer interaction so that the textures of the two images overlap as much as possible. One common way to realize the automatic rotation is sketched below.
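The sketch uses OpenCV; the function name and the choice of the canthus midpoint as rotation center are assumptions for the example:

```python
import cv2
import numpy as np

def rotate_canthus_horizontal(image, inner_canthus, outer_canthus):
    """Rotate an eye image so the inner-outer canthus line becomes horizontal (S3).

    inner_canthus, outer_canthus : (x, y) pixel coordinates, clicked manually
    or taken from the key-point network.
    """
    (x1, y1), (x2, y2) = inner_canthus, outer_canthus
    # Tilt of the canthus line; rotating by this angle maps it onto the x-axis
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    rot = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, rot, (w, h))
```

Applying the same function to both images, followed if necessary by the manual fine rotation described above, brings the two irises to the same orientation.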
Fig. 3 illustrates the affine transformation of step S5, in which the iris is normalized to a specified inner-circle radius. The image between the inner and outer edges of the iris is stretched following the iris muscle texture, so the iris texture changes as the inner iris edge is stretched.
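One way to realize this radius normalization is a radial remapping, sketched below under the assumptions that the pupil and iris circles are concentric and that pixels outside the annulus are left in place; the patent describes the operation as an affine transformation, and the concrete mapping here is only an illustrative choice:

```python
import cv2
import numpy as np

def normalize_pupil_radius(image, center, r_pupil, r_iris, r_pupil_target):
    """Remap the iris annulus so the pupil radius becomes r_pupil_target
    while the outer iris radius stays fixed (circular normalization, S5)."""
    h, w = image.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - center[0], ys - center[1]
    r_dst = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)
    # Map output radii in [r_pupil_target, r_iris] linearly back to
    # source radii in [r_pupil, r_iris]; other pixels pass through unchanged.
    t = (r_dst - r_pupil_target) / max(r_iris - r_pupil_target, 1e-6)
    r_src = np.where((r_dst >= r_pupil_target) & (r_dst <= r_iris),
                     r_pupil + t * (r_iris - r_pupil), r_dst)
    map_x = (center[0] + r_src * np.cos(theta)).astype(np.float32)
    map_y = (center[1] + r_src * np.sin(theta)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```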
Fig. 4 shows the transformation that produces the rectangularly normalized image of step S6. Specifically, taking the pupil or iris center as origin and starting from the horizontal axis, the iris region is swept through one full revolution in a fixed direction (clockwise or counterclockwise); at each fixed angular step, the image along the normal line through the iris region is recorded in the corresponding column of the normalization rectangle, which completes the rectangular normalization of the iris region.
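A minimal sketch of the rectangular unfolding, again assuming concentric circles; the output size and function name are illustrative assumptions:

```python
import cv2
import numpy as np

def unwrap_iris(image, center, r_pupil, r_iris, height=64, width=512):
    """Unfold the annular iris into a rectangle (S6): each column holds the
    radial (normal) line sampled at one fixed angular step around the center."""
    thetas = np.linspace(0.0, 2.0 * np.pi, width, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, height)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")   # both (height, width)
    map_x = (center[0] + rr * np.cos(tt)).astype(np.float32)
    map_y = (center[1] + rr * np.sin(tt)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)
```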
In step S10, when comparing the two images of each of the three image pairs (for example, image A_1 with image B_1, image A_2 with image B_2, and image A_3 with image B_3), whether the corresponding iris texture patches are the same texture patch may be judged by means including but not limited to the following:
S10_1, judging whether the length of the line connecting each patch's center (or center of gravity) with the pupil (or iris) center, and the angle between that line and the coordinate axes, are the same;
S10_2, judging whether the gray levels of the two patches are similar;
S10_3, judging whether the contrast of each patch against its surrounding non-patch area is similar;
S10_4, judging whether the boundary shapes of the two patches are similar;
S10_5, overlaying the two images according to a given rule, calculating the ratio of the number of pixels in the intersection of the two patches to the number of pixels in their union, and comparing this ratio with a preset threshold: if it exceeds the threshold, the patches are judged similar, otherwise not similar (a sketch of this check follows the list);
S10_6, judging whether any differences between the two patches have an explainable cause.
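A minimal sketch of the S10_5 overlap check, assuming the two patches come as aligned binary masks and that the preset threshold is a free parameter:

```python
import numpy as np

def patches_overlap_consistent(mask_a, mask_b, threshold=0.5):
    """Compare intersection/union pixel counts of two aligned patch masks
    against a preset threshold (check S10_5)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    iou = inter / union if union else 0.0
    return iou >= threshold, iou
```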
Regarding the number N of identical texture patches in step S11: the inventors have found that the probability of identical iris texture appearing at the same position in images of different irises is no higher than 5%. Therefore, when the number of identical iris texture patch pairs in two iris images is N, the upper bound P(N) on the probability that the two iris images do not come from the same iris satisfies P(N) ≤ 0.05^N. When N = 6, P(N) ≤ 1.5625 × 10^-8; that is, when two iris images share 6 identical iris texture patches, the probability that they come from different irises does not exceed two parts per hundred million, which satisfies the conditions for an iris identity determination.
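The bound and the N-patch decision rule can be checked in a few lines; the function name and defaults are illustrative:

```python
def identity_decision(num_matching_patches, per_patch_prob=0.05, n_required=6):
    """Apply P(N) <= per_patch_prob ** N and the N-patch identity rule."""
    p_upper = per_patch_prob ** num_matching_patches
    return num_matching_patches >= n_required, p_upper

# identity_decision(6) -> (True, 1.5625e-08), under two parts per hundred million
```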
In this embodiment, the computer-aided manual iris comparison method takes the original iris images, the circularly normalized iris images, and the rectangularly normalized iris images as the data basis for manual iris comparison. Based on computer-aided image generation and image attribute calculation, the two iris images are compared manually in pairs to judge whether they belong to the same iris, bringing the manually compared iris texture features into the comparison process and realizing pairwise manual comparison and identification of iris images.
In summary, the computer-aided iris comparison method, electronic device, and computer-readable storage medium of the embodiments of the present invention realize manual iris comparison, improving the success rate of iris identification and meeting the needs of iris identification.
In the description herein, reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," "an example," "a particular example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the various embodiments is provided to schematically illustrate the practice of the invention, and the sequence of steps is not limited and can be suitably adjusted as desired.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (17)

1. A computer-assisted iris comparison method, comprising:
acquiring a first iris image and a second iris image to be compared;
rotating the iris in the first iris image and the iris in the second iris image to the same angle;
performing iris positioning on the rotated first iris image and the rotated second iris image;
normalizing the first iris image after iris positioning and the second iris image after iris positioning; the normalization operation comprises one or more of iris radius normalization, pupil radius normalization and rectangle expansion normalization;
calculating the attribute information of the iris texture features of the corresponding positions in the first iris image after the normalization operation and the second iris image after the normalization operation;
outputting attribute information of the iris texture features of the corresponding positions to display so as to obtain a consistency comparison result of textures of corresponding areas in the first iris image and the second iris image;
and confirming that the irises in the first iris image and the second iris image come from the same iris when, in the consistency comparison result, the number of corresponding-position iris texture features judged consistent is greater than or equal to a set number.
2. The computer-assisted iris comparison method of claim 1 wherein before rotating the iris in the first iris image and the iris in the second iris image to the same angle, further comprising:
and receiving a judgment result for confirming that the first iris image and the second iris image both accord with the quality requirement of the iris image data.
3. The computer-assisted iris comparison method of claim 1 wherein rotating the iris in the first iris image and the iris in the second iris image to the same angle comprises:
acquiring a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located and a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located;
and rotating the iris image to enable the direction of a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located to be consistent with the direction of a straight line where the inner canthus connecting line and the outer canthus connecting line corresponding to the iris in the second iris image are located.
4. The computer-aided iris comparison method of claim 3 wherein,
acquiring a first iris image and a second iris image to be compared, comprising:
acquiring and displaying a first iris image and a second iris image to be compared;
acquiring a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located and a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located, wherein the straight line comprises the following steps:
receiving an inner canthus clicking command and an outer canthus clicking command aiming at the iris in the first iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the first iris image are located according to the inner canthus clicking command and the outer canthus clicking command aiming at the iris in the first iris image;
and receiving an inner canthus click command and an outer canthus click command aiming at the iris in the second iris image, and generating a straight line where an inner canthus connecting line and an outer canthus connecting line corresponding to the iris in the second iris image are located according to the inner canthus click command and the outer canthus click command aiming at the iris in the second iris image.
5. The computer-assisted iris comparison method of claim 1 wherein performing iris localization on the rotated first iris image and the rotated second iris image comprises:
performing iris positioning on the rotated first iris image and the rotated second iris image by using an iris positioning neural network model to obtain an iris boundary position in the rotated first iris image and an iris boundary position in the rotated second iris image;
the iris positioning neural network model comprises a deep convolution network sharing layer, a key point regression network layer, an iris segmentation network layer and an output layer; the output of the deep convolutional network sharing layer is connected with the input of the key point regression network layer and the input of the iris segmentation network layer, and the output of the iris segmentation network layer and the output of the key point regression network layer are both connected with the input of the output layer; the deep convolutional network sharing layer is used for converting the input iris image into a shared characteristic map; the key point regression network layer is used for extracting the position information of key points of an iris region in the iris image from the shared characteristic diagram; the iris segmentation network layer is used for segmenting the shared characteristic graph into segmentation images corresponding to each pixel of the shared characteristic graph one by one, and the pixel value of each pixel in the segmentation images is used for identifying whether the corresponding pixel belongs to an iris area in the iris image or not; the output layer is used for calculating to obtain an initial iris boundary in the iris image according to the segmentation image and the input iris image, and synthesizing the position information of the initial iris boundary and key points of the iris region to obtain the position of the iris boundary in the iris image.
6. The computer-assisted iris comparison method of claim 5 further comprising:
training based on a total loss function to obtain the iris positioning neural network model;
wherein the overall loss function is represented as:
Loss = λ_reg × Loss_reg + λ_cls × Loss_cls

Loss_reg = (1 / (N·M)) × Σ_{i=1..N} Σ_{j=1..M} [ (x_ij − x̂_ij)² + (y_ij − ŷ_ij)² ]

Loss_cls = (1 / (N·H·W)) × Σ_{n=1..N} Σ_{i'=1..H} Σ_{j'=1..W} | G(i', j') − M(i', j') |

wherein Loss represents the overall loss function, Loss_reg represents the key-point regression loss function corresponding to the key-point regression network layer, Loss_cls represents the average segmentation error of iris segmentation corresponding to the iris segmentation network layer, λ_reg represents the weight of the key-point regression loss function, λ_cls represents the weight of the average segmentation error of iris segmentation, N represents the number of training samples, M represents the number of key points in an iris image, i represents the index of a training sample and j the index of a key point (i and j being positive integers, 1 ≤ i ≤ N, 1 ≤ j ≤ M), x_ij and y_ij represent the row and column coordinates of a predicted key point, x̂_ij and ŷ_ij represent the row and column coordinates of the marked key point, i' and j' represent the row and column indices of a pixel, H and W represent the height and width of the iris image, n represents the sample index (n being a positive integer, 1 ≤ n ≤ N), G(i', j') represents the label of whether the pixel at row i', column j' in the segmentation image is in the iris region, and M(i', j') represents the prediction of whether the pixel at row i', column j' is in the iris region.
7. The computer-assisted iris comparison method of claim 5 wherein performing iris localization on the rotated first iris image and the rotated second iris image further comprises:
outputting and displaying the iris boundary position in the first iris image after rotation and the iris boundary position in the second iris image after rotation to acquire an instruction for manually adjusting key points of an iris region and/or an instruction for adjusting the iris boundary;
and adjusting the iris boundary position of the corresponding iris image according to the instruction for manually adjusting key points of the iris region and/or the instruction for adjusting the iris boundary.
8. The computer-assisted iris alignment method of claim 1 wherein the normalization operation includes iris radius normalization and pupil radius normalization; normalizing the first iris image after iris positioning and the second iris image after iris positioning, comprising:
and performing scaling operation and affine transformation on the first iris image after iris positioning and the second iris image after iris positioning to realize iris radius normalization and pupil radius normalization on the first iris image after iris positioning and the second iris image after iris positioning.
9. The computer-aided iris comparison method of claim 8, wherein the iris radius normalization and the pupil radius normalization of the first iris image after iris localization and the second iris image after iris localization are performed by performing a scaling operation and an affine transformation on the first iris image after iris localization and the second iris image after iris localization, comprising:
scaling the first iris image after iris positioning and the second iris image after iris positioning to enable the iris in the first iris image after iris positioning and the iris in the second iris image after iris positioning to have the same iris radius, so as to realize iris radius normalization of the first iris image after iris positioning and the second iris image after iris positioning;
by performing affine transformation on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, the iris in the first iris image subjected to iris radius normalization and the iris in the second iris image subjected to iris radius normalization have the same pupil radius, so that the pupil radius normalization is performed on the first iris image subjected to iris radius normalization and the second iris image subjected to iris radius normalization, and the iris radius normalization and the pupil radius normalization are performed on the first iris image subjected to iris positioning and the second iris image subjected to iris positioning.
10. The computer-assisted iris alignment method of claim 8 or 9 in which the normalization operation further comprises rectangular unfolding normalization; normalizing the first iris image after iris positioning and the second iris image after iris positioning, further comprising:
respectively carrying out rectangular expansion normalization on the irises in the first iris image and the second iris image after the irises rotate to the same angle; wherein the first and second iris images after the irises thereof are rotated to the same angle include: the first iris image after iris positioning and the second iris image after iris positioning, the first iris image after iris radius normalization and the second iris image after iris radius normalization, or the first iris image after pupil radius normalization and the second iris image after pupil radius normalization.
11. The computer-assisted iris comparison method of claim 10 wherein said first iris image after normalization comprises: one or more of the first iris image subjected to iris radius normalization, the first iris image subjected to pupil radius normalization, and the first iris image subjected to rectangle expansion normalization; the second iris image after the normalization operation comprises: one or more of the second iris image after iris radius normalization, the second iris image after pupil radius normalization, and the second iris image after rectangular expansion normalization.
12. The computer-aided iris comparison method of claim 1 or 11, wherein calculating the attribute information of the iris texture feature at the corresponding position in the first iris image after the normalization and the second iris image after the normalization comprises:
receiving a marking instruction for an iris texture feature in an image selected from the first iris image subjected to the normalization operation and the second iris image subjected to the normalization operation;
acquiring the iris texture characteristics of corresponding positions in the first iris image after normalization and the rest images in the second iris image after normalization according to the mark position information corresponding to the mark instruction of the iris texture characteristics in the selected image;
and calculating the attribute information of the iris texture characteristics corresponding to the mark position information in each image of the first iris image after the normalization operation and the second iris image after the normalization operation.
13. The computer-aided iris comparison method according to claim 1 or 11, wherein the consistency comparison result of the textures of the corresponding regions in the first iris image and the second iris image is a composite result of consistency comparison results of the iris texture features of the corresponding positions in each pair of images obtained by one-to-one combination of the first iris image after the normalization operation and the second iris image after the normalization operation; and the consistency comparison result of the iris texture features of the corresponding positions in the pair of combined images is obtained by integrating the classification comparison result of the attribute information of the iris texture features and the attribute information of the iris texture features.
14. The computer-aided iris comparison method of claim 1 wherein the attribute information of the iris texture feature includes iris texture feature position information and iris texture feature image information.
15. The computer-aided iris comparison method of claim 14 wherein the iris texture feature location information comprises: one or more of the center, the gravity center, the average gray scale of the iris texture feature, the average gray scale after image gray scale normalization, the center under polar coordinates with the pupil center as an origin, the gravity center under polar coordinates with the pupil center as the origin, the center under polar coordinates with the iris center as the origin, and the gravity center under polar coordinates with the iris center as the origin; the iris texture feature image information includes: and the average gray scale of the iris texture features and/or the average gray scale after image gray scale normalization.
16. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method according to any of claims 1 to 15 are implemented when the processor executes the program.
17. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 15.
CN202010694591.2A 2020-07-17 2020-07-17 Computer-aided iris comparison method and device Active CN112001244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010694591.2A CN112001244B (en) 2020-07-17 2020-07-17 Computer-aided iris comparison method and device


Publications (2)

Publication Number Publication Date
CN112001244A true CN112001244A (en) 2020-11-27
CN112001244B CN112001244B (en) 2023-11-03

Family

ID=73466480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010694591.2A Active CN112001244B (en) 2020-07-17 2020-07-17 Computer-aided iris comparison method and device

Country Status (1)

Country Link
CN (1) CN112001244B (en)


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060165266A1 (en) * 2005-01-26 2006-07-27 Honeywell International Inc. Iris recognition system and method
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
苑玮琦; 白晓光; 冯琪: "A novel iris image preprocessing method" (一种新颖的虹膜图像预处理方法), Opto-Electronic Engineering (光电工程), no. 04 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112949518A (en) * 2021-03-09 2021-06-11 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN112949518B (en) * 2021-03-09 2024-04-05 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN113688874A (en) * 2021-07-29 2021-11-23 天津中科智能识别产业技术研究院有限公司 Method and system for automatically segmenting iris region in human eye iris image
CN113780239A (en) * 2021-09-27 2021-12-10 上海聚虹光电科技有限公司 Iris recognition method, iris recognition device, electronic equipment and computer readable medium
CN113780239B (en) * 2021-09-27 2024-03-12 上海聚虹光电科技有限公司 Iris recognition method, iris recognition device, electronic device and computer readable medium
CN113837117A (en) * 2021-09-28 2021-12-24 上海电力大学 Novel normalization and deep neural network-based iris coding method
CN113837117B (en) * 2021-09-28 2024-05-07 上海电力大学 Iris coding method based on novel normalization and depth neural network
CN115083006A (en) * 2022-08-11 2022-09-20 北京万里红科技有限公司 Iris recognition model training method, iris recognition method and iris recognition device

Also Published As

Publication number Publication date
CN112001244B (en) 2023-11-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant