CN116798107A - Visual processing method and device for comparing iris images - Google Patents

Visual processing method and device for comparing iris images

Info

Publication number
CN116798107A
CN116798107A (application number CN202310712499.8A)
Authority
CN
China
Prior art keywords
iris
image
iris image
preset
recognition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310712499.8A
Other languages
Chinese (zh)
Inventor
李茂林
张小亮
魏衍召
杨占金
戚纪纲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority to CN202310712499.8A priority Critical patent/CN116798107A/en
Publication of CN116798107A publication Critical patent/CN116798107A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification

Abstract

The application provides a visual processing method and device for comparing iris images, wherein the method comprises the following steps: acquiring a preset iris image and a first identification iris image; correcting the first identification iris image to obtain a corrected identification iris image; acquiring iris information in the preset iris image and a second identification iris image; obtaining a third identification iris image according to the iris information; acquiring a first effective area of the preset iris image and a second effective area of the third identification iris image; obtaining a preset iris edge image and an identification iris edge image according to the first effective area and the second effective area, respectively; and displaying the preset iris edge image and the identification iris edge image to a user. The method addresses the problem that, when iris images in a database are compared directly with iris images acquired in real time, the difference features between the images cannot be observed clearly and intuitively, which hinders review of the iris images.

Description

Visual processing method and device for comparing iris images
Technical Field
The application relates to the technical field of iris image processing, in particular to a visual processing method and device for comparing iris images.
Background
At present, in the face of massive data, public security and criminal investigation agencies in their daily work still verify identity, or confirm the identity of a suspect, through traditional means such as checking identity cards and other documents, which is neither accurate nor efficient in time and labor. Iris recognition is recognized in the industry as one of the most secure and accurate biometric technologies, and plays an extremely important role in police and criminal investigation work such as surveillance deployment, fugitive pursuit, entry and exit management, and identity authentication.
Iris recognition has improved the efficiency of police and criminal investigation work while saving labor costs. When a pattern recognition decision is made through machine learning or deep learning, a similarity measure is generally computed over features, and the match is considered successful if the similarity exceeds a preset threshold. From the standpoint of public security, to ensure the security and reliability of the matching result, it is necessary to confirm again whether the pattern recognition result is correct. However, when iris images in the database are compared directly with iris images acquired in real time, the difference features between them cannot be observed clearly and intuitively, which hinders review of the iris images.
Based on the actual needs in the above-mentioned scene, it is important to propose a visual processing method and device for comparing iris images.
Disclosure of Invention
The application provides a visual processing method and a visual processing device for comparing iris images, which are used for solving the problem that, when iris images in a database are directly compared with iris images acquired in real time, the difference features between the iris images cannot be found clearly and intuitively, which is not conducive to review of the iris images.
The first aspect of the present application provides a visual processing method for comparing iris images, the method comprising: acquiring a preset iris image and a first identification iris image; the preset iris image is an iris image corresponding to the target ID stored in an iris image database; the first recognition iris image is an iris image corresponding to the suspected target ID; correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image to obtain a corrected recognition iris image; the corrected iris recognition image is a second iris recognition image; the method comprises the steps of obtaining iris information in a preset iris image and a second identification iris image, wherein the iris information comprises pupil radius, iris radius and a mask map, and the pixel value of any pixel point in the mask map is a first preset pixel value or a second preset pixel value; obtaining a third recognition iris image according to the iris information, wherein the pupil radius and the iris radius of the third recognition iris image are the same as those of the preset iris image; acquiring a first effective area of a preset iris image and a second effective area of a third identification iris image, wherein the effective areas are areas with pixel values of a first preset pixel value in a mask image; respectively obtaining a preset iris edge image and an identification iris edge image according to the first effective area and the second effective area; and displaying the preset iris edge image and the identification iris edge image to a user.
By adopting the method, in the process of checking and detecting the iris recognition result, the preset iris image and the recognition iris image are processed, so that a user can more intuitively and clearly know whether the difference exists between the iris images when checking and detecting the iris images, and the user can be helped to finish the checking work of the iris images.
In one possible embodiment, correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image specifically includes: acquiring left eye corner coordinates and right eye corner coordinates of a preset iris image; obtaining a first horizontal included angle of the preset iris image according to the left eye corner coordinates and the right eye corner coordinates of the preset iris image; acquiring left eye corner coordinates and right eye corner coordinates of a first identification iris image; obtaining a second horizontal included angle of the first recognition iris image according to the left eye corner coordinates and the right eye corner coordinates of the first recognition iris image; determining a correction angle of the first recognition iris image according to the first horizontal included angle and the second horizontal included angle; and correcting the first recognition iris image by adopting the correction angle to obtain a second recognition iris image, wherein the horizontal included angle of the second recognition iris image is the first horizontal included angle.
By adopting the method, the method corrects the identification iris image by acquiring the horizontal included angle of the iris image, so that the identification iris image and the preset iris image are positioned on the same horizontal line, and further processing of the image is facilitated.
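The angle computation described above can be sketched in a few lines; `correction_angle` is a hypothetical helper name (the application does not name one), with each horizontal included angle taken from the eye corner coordinates via `atan2` and the correction being their difference:

```python
import math

def correction_angle(preset_left, preset_right, rec_left, rec_right):
    """Angle (radians) by which to rotate the first recognition iris
    image so its eye-corner line has the preset image's inclination."""
    def horizontal_angle(left, right):
        (xl, yl), (xr, yr) = left, right
        # inclination of the line through the left and right eye corners
        return math.atan2(yr - yl, xr - xl)
    return horizontal_angle(preset_left, preset_right) - \
           horizontal_angle(rec_left, rec_right)
```

For example, if the preset image's corners are level and the recognition image's corner line rises at 45 degrees, the helper returns −π/4, i.e. a clockwise correction.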
In one possible implementation manner, obtaining the third identified iris image according to the iris information specifically includes: judging whether the pupil radius of the preset iris image is the same as the pupil radius of the second identification iris image; when the pupil radius of the preset iris image is different from that of the second identification iris image, performing telescopic transformation on the mask map of the second identification iris image according to the pupil radius of the preset iris image to obtain a transformation mask map; the pupil radius of the transformation mask map is the same as that of the preset iris image, and the iris radius of the transformation mask map is the same as that of the preset iris image; and inputting the second identification iris image and the transformation mask map into a preset image generation model to generate a third identification iris image.
According to the method, when the pupil radius of the preset iris image is different from the pupil radius of the identification iris image, the mask map of the identification iris image is subjected to telescopic transformation to obtain a transformation mask map, and a new identification iris image with the same pupil radius as the preset iris image is obtained according to the transformation mask map.
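A minimal sketch of the telescopic (scaling) transformation, assuming a uniform nearest-neighbour rescale by the ratio of the pupil radii; the application's actual transformation, and the preset image generation model it feeds, are not specified here:

```python
import numpy as np

def rescale_mask(mask, src_pupil_r, dst_pupil_r):
    """Nearest-neighbour rescale of a binary mask map so its pupil
    radius matches the preset image's (hypothetical helper)."""
    s = dst_pupil_r / src_pupil_r
    h, w = mask.shape
    nh, nw = int(round(h * s)), int(round(w * s))
    # map each output index back to the nearest source index
    ys = (np.arange(nh) / s).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / s).astype(int).clip(0, w - 1)
    return mask[ys[:, None], xs[None, :]]
```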
In one possible implementation manner, the third identified iris image is obtained according to iris information, and the method specifically further includes: judging whether the pupil radius of the preset iris image is the same as the pupil radius of the second identification iris image; and when the pupil radius of the preset iris image is the same as that of the second identification iris image, directly determining the second identification iris image as a third identification iris image.
In one possible implementation manner, acquiring the first effective area of the preset iris image and the second effective area of the third identification iris image specifically includes: normalizing the preset iris image and the third identification iris image to obtain a normalized preset iris image and a normalized identification iris image; extracting a first binarization feature and a first mask of the preset iris image from the normalized preset iris image by adopting a preset feature extraction mode, and extracting a second binarization feature and a second mask of the third identification iris image from the normalized identification iris image; judging whether the point location coordinates and point location pixel values corresponding to the first binarization feature are valid according to the first mask, and whether those corresponding to the second binarization feature are valid according to the second mask; when the point location coordinates and pixel values corresponding to the first binarization feature are valid, determining the first effective area corresponding to the first binarization feature according to the first mask; and when the point location coordinates and pixel values corresponding to the second binarization feature are valid, determining the second effective area corresponding to the second binarization feature according to the second mask.
By adopting the method, the binary features and the corresponding masks are extracted from the normalized preset iris image and the normalized identification iris image, so that the effective areas capable of representing the features in the iris image are obtained.
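Normalisation of an iris image is commonly done with Daugman-style rubber-sheet unwrapping, which maps the annulus between the pupil and iris circles to a fixed-size rectangle; the sketch below assumes concentric circles and nearest-neighbour sampling, neither of which the application requires:

```python
import numpy as np

def rubber_sheet(img, center, pupil_r, iris_r, n_theta=64, n_r=16):
    """Unwrap the annulus between the pupil and iris circles into an
    (n_r, n_theta) rectangle by sampling along radial lines."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(pupil_r, iris_r, n_r)
    out = np.zeros((n_r, n_theta), dtype=img.dtype)
    for i, r in enumerate(radii):
        # nearest source pixel on the circle of radius r
        ys = (center[0] + r * np.sin(thetas)).round().astype(int).clip(0, img.shape[0] - 1)
        xs = (center[1] + r * np.cos(thetas)).round().astype(int).clip(0, img.shape[1] - 1)
        out[i] = img[ys, xs]
    return out
```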
In one possible implementation manner, obtaining the preset iris edge image and the identification iris edge image according to the first effective area and the second effective area specifically includes: acquiring a preset initial mark map, wherein the pixel value of any point in the preset initial mark map is a first pixel value; screening out a plurality of coincident points with the same pixel value and the same point position from the first effective area and the second effective area, wherein the pixel value of any coincident point is a second pixel value or a third pixel value; replacing, according to the coordinates and pixel value of a target coincident point, the point with the same coordinates in the preset initial mark map, to obtain a comparison mark map, the target coincident point being any one of the plurality of coincident points; constructing a plurality of connected regions according to the comparison mark map, wherein the pixel values of all points in any one of the plurality of connected regions are the same; sorting the plurality of connected regions by area from large to small and selecting the first N to obtain a connected region set; and mapping the edge shapes corresponding to the connected region set into the preset iris image and the third identification iris image respectively, to obtain the preset iris edge image and the identification iris edge image.
According to the method, a connected region set formed by a plurality of connected regions is obtained from the first effective area and the second effective area, so that, according to the edge shapes of the connected region set, a preset iris edge image and an identification iris edge image are obtained that display the edge features of the preset iris image and the identification iris image more clearly and directly.
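The connected-region construction and area-based selection can be sketched with a breadth-first labelling pass over the comparison mark map; 4-connectivity is an assumption, since the application does not fix the connectivity:

```python
from collections import deque

def top_n_regions(label_map, n):
    """Label 4-connected regions of equal pixel value, sort by area
    (descending), and return the n largest as lists of coordinates."""
    h, w = len(label_map), len(label_map[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for i in range(h):
        for j in range(w):
            if seen[i][j]:
                continue
            val, comp, q = label_map[i][j], [], deque([(i, j)])
            seen[i][j] = True
            while q:
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and label_map[ny][nx] == val:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            regions.append(comp)
    regions.sort(key=len, reverse=True)
    return regions[:n]
```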
In one possible embodiment, the correction angle is determined according to the following formula:

Δθ = arctan((y2 − y1) / (x2 − x1)) − arctan((y2′ − y1′) / (x2′ − x1′))

wherein Δθ is the correction angle, (x1, y1) and (x2, y2) are the left and right eye corner coordinates of the preset iris image, and (x1′, y1′) and (x2′, y2′) are the left and right eye corner coordinates of the first recognition iris image.
In a possible implementation manner, before the preset iris edge image and the identification iris edge image are obtained according to the first effective area and the second effective area, the method further includes: acquiring the Hamming distance between the first effective area and the second effective area; and when the Hamming distance is smaller than a preset Hamming distance threshold, determining the third recognition iris image as a valid image.
By adopting the method, the third recognition iris image is ensured to meet the image processing requirement by detecting the Hamming distance between the first effective area and the second effective area before the preset iris edge image and the recognition iris edge image are obtained.
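The Hamming distance between the two effective areas can be sketched as the fraction of differing bits over positions valid in both masks, a common iris-matching measure; the threshold it is compared against is the preset value mentioned above:

```python
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary codes, counting
    only bit positions that are valid in both masks."""
    valid = mask_a & mask_b
    if not valid.any():
        return 1.0  # no overlap: treat as maximally distant
    return float(np.count_nonzero((code_a ^ code_b) & valid)
                 / np.count_nonzero(valid))
```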
The second aspect of the application provides a visual processing device for comparing iris images, which comprises an image acquisition unit, an image correction unit, an information acquisition unit, an image changing unit, an area acquisition unit, an edge image acquisition unit and an image display unit;
the image acquisition unit is used for acquiring a preset iris image and a first identification iris image; the preset iris image is an iris image corresponding to the target ID stored in an iris image database; the first recognition iris image is an iris image corresponding to the suspected target ID;
the image correction unit is used for correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image to obtain a corrected recognition iris image; the corrected iris recognition image is a second iris recognition image;
the information acquisition unit is used for acquiring iris information in a preset iris image and a second identification iris image, wherein the iris information comprises pupil radius, iris radius and a mask map, and the pixel value of any pixel point in the mask map is a first preset pixel value or a second preset pixel value;
the image changing unit is used for obtaining a third identification iris image according to the iris information, wherein the pupil radius and the iris radius of the third identification iris image are the same as those of the preset iris image;
the region acquisition unit is used for acquiring a first effective region of the preset iris image and a second effective region of the third identification iris image, wherein the effective region is a region of the mask map whose pixel value is the first preset pixel value;
the edge image acquisition unit is used for obtaining a preset iris edge image and an identification iris edge image according to the first effective area and the second effective area, respectively;
and the image display unit is used for displaying the preset iris edge image and the identification iris edge image to the user.
A third aspect of the application provides an electronic device comprising a processor, a memory, a user interface, and a network interface, wherein the memory is used for storing instructions, the user interface and the network interface are used for communicating with other devices, and the processor is used for executing the instructions stored in the memory, so that the electronic device performs the method of any one of the above.
A fourth aspect of the application provides a computer readable storage medium storing instructions that, when executed, perform a method of any one of the above.
Compared with the related art, the application has the beneficial effects that:
1. in the checking detection process of the iris recognition result, the preset iris image and the recognition iris image are processed, so that a user can more intuitively and clearly know whether the iris images have differences or not when checking detection is carried out, and the user is helped to finish checking work of the iris images.
2. The identification iris image is corrected by obtaining the horizontal included angles of the iris images, so that the identification iris image and the preset iris image lie on the same horizontal line, facilitating further processing of the images.
3. When the pupil radius of the preset iris image is different from the pupil radius of the identification iris image, performing telescopic transformation on the mask map of the identification iris image to obtain a transformation mask map, and acquiring a new identification iris image with the same pupil radius as the preset iris image according to the transformation mask map.
4. And extracting binarization features and corresponding masks from the normalized preset iris image and the normalized identified iris image, thereby obtaining effective areas capable of representing the features in the iris image.
5. A connected region set formed by a plurality of connected regions is obtained according to the first effective area and the second effective area, so that, according to the edge shapes of the connected region set, a preset iris edge image and an identification iris edge image are obtained that display the edge features of the preset iris image and the identification iris image more clearly and directly.
Drawings
Fig. 1 is a first schematic flow chart of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 2 is a second schematic flow chart of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 3 is a third schematic flow chart of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 4 is a fourth schematic flow chart of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 5 is a fifth schematic flow chart of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 6 is a schematic view of a first scene of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 7 is a schematic view of a second scene of a visual processing method for comparing iris images according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of a visual processing device for comparing iris images according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Reference numerals: 81. an image acquisition unit; 82. an image correction unit; 83. an information acquisition unit; 84. an image changing unit; 85. a region acquisition unit; 86. an edge image acquisition unit; 87. an image display unit; 900. an electronic device; 901. a processor; 902. a communication bus; 903. a user interface; 904. a network interface; 905. a memory.
Description of the embodiments
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, not all embodiments.
In describing embodiments of the present application, words such as "exemplary," "such as" or "for example" are used to mean serving as examples, illustrations or explanations. Any embodiment or design described herein as "illustrative," "such as" or "for example" is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "illustratively," "such as" or "for example," etc., is intended to present related concepts in a concrete fashion.
In describing embodiments of the present application, the term "plurality" means two or more unless otherwise indicated. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
The embodiment of the application provides a visual processing method and a visual processing device for comparing iris images, which are used for processing preset iris images and identified iris images in the process of checking and detecting iris identification results, so that a user can more intuitively and clearly know whether differences exist between the iris images when checking and detecting iris comparison results, and the user is helped to finish the checking work of the iris images.
In the embodiment of the application, the iris comparison technology is realized through machine learning or deep learning, for example, and the iris comparison technology adopted in the embodiment of the application is not particularly limited. The embodiment of the application only exemplarily provides that the similarity measurement judgment is carried out through the characteristics, and if the similarity is larger than a certain preset threshold value, the matching is considered to be successful. Because the user often needs to manually review the successfully matched result, the comparison of the two iris images does not have strong observability for the person, and the two iris images cannot be intuitively and clearly observed. Based on the above requirements, the application provides a visual processing method and a visual processing device for comparing iris images.
Fig. 1 is a schematic flow chart of a visualization processing method for comparing iris images according to the present application, as shown in fig. 1, including steps S1-S7.
S1, acquiring a preset iris image and a first identification iris image; the preset iris image is an iris image corresponding to the target ID stored in an iris image database; the first recognition iris image is an iris image corresponding to the suspected target ID.
In the embodiment of the application, iris images corresponding to a plurality of person IDs are stored in an iris image database, and in practical application, a first recognition iris image of a person A is obtained through a preset iris image acquisition device. And judging that the first identification iris image is successfully matched with a preset iris image in an iris image database through the existing iris comparison technology, so that an identification result of the person A is a target ID corresponding to the preset iris image.
S2, correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image to obtain a corrected recognition iris image; the corrected identified iris image is a second identified iris image.
In one possible embodiment, as shown in fig. 2, in step S2, correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image specifically includes: steps S21 to S25.
S21, acquiring left-eye corner coordinates and right-eye corner coordinates of a preset iris image.
In the embodiment of the application, a horizontal line is selected as the X axis of a coordinate system, and the coordinate system is constructed by additionally selecting any point from the left or right eye corner of the preset iris image, so as to obtain the left eye corner coordinates (x1, y1) and the right eye corner coordinates (x2, y2) of the preset iris image.
S22, obtaining a first horizontal included angle of the preset iris image according to the left eye corner coordinates and the right eye corner coordinates of the preset iris image.
S23, acquiring left-eye corner coordinates and right-eye corner coordinates of the first recognition iris image.
And similarly constructing a coordinate system of the first recognition iris image based on the construction mode of the preset iris image.
The left eye corner coordinates of the first recognition iris image are acquired as (x1′, y1′), and the right eye corner coordinates of the first recognition iris image as (x2′, y2′).
S24, obtaining a second horizontal included angle of the first recognition iris image according to the left eye corner coordinates and the right eye corner coordinates of the first recognition iris image.
S25, determining the correction angle of the first recognition iris image according to the first horizontal included angle and the second horizontal included angle.
In one possible implementation of the method according to the application,

Δθ = arctan((y2 − y1) / (x2 − x1)) − arctan((y2′ − y1′) / (x2′ − x1′))

wherein Δθ is the correction angle, (x1, y1) and (x2, y2) are the left and right eye corner coordinates of the preset iris image, and (x1′, y1′) and (x2′, y2′) are the left and right eye corner coordinates of the first recognition iris image.
S26, correcting the first recognition iris image by adopting the correction angle to obtain a second recognition iris image, wherein the horizontal included angle of the second recognition iris image is the first horizontal included angle.
Specifically, the correction angle Δθ is obtained by the above method, and the first recognition iris image is rotated about its geometric center by Δθ to obtain the second recognition iris image, whose horizontal included angle is the first horizontal included angle of the preset iris image. The recognition iris image of person A is thereby aligned with the preset iris image.
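The rotation about the geometric centre can be sketched with a nearest-neighbour inverse mapping; a production pipeline would normally use an interpolating affine warp instead:

```python
import math
import numpy as np

def rotate_image(img, angle):
    """Rotate a 2-D image about its geometric centre by `angle`
    radians, using nearest-neighbour inverse mapping."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = np.zeros_like(img)
    c, s = math.cos(angle), math.sin(angle)
    for y in range(h):
        for x in range(w):
            # inverse-map each output pixel back into the source image
            sy = cy + (y - cy) * c + (x - cx) * s
            sx = cx - (y - cy) * s + (x - cx) * c
            iy, ix = int(round(sy)), int(round(sx))
            if 0 <= iy < h and 0 <= ix < w:
                out[y, x] = img[iy, ix]
    return out
```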
In the embodiment of the present application, the construction of the coordinate system of the first recognition iris image and the preset iris image is given by way of example only, and is not limited thereto. In addition, the right eye corner and the position of the right eye corner in the iris image in the embodiment of the present application may be referred to the description in the related art, and the present application is not repeated.
S3, iris information in the preset iris image and the second identification iris image is obtained, wherein the iris information comprises pupil radius, iris radius and a mask map, and the pixel value of any pixel point in the mask map is a first preset pixel value or a second preset pixel value.
In the embodiment of the application, a preset image segmentation and positioning technique is used to obtain the pupil radius r_p, the iris radius r_i, and the mask map M_p of the preset iris image, and the pupil radius r_p', the iris radius r_i', and the mask map M_r of the second recognition iris image. Specifically, in the embodiment of the present application, the mask map includes an iris region and other regions. The pixel value of the iris region is the first preset pixel value, 255; the pixel values of the other regions are the second preset pixel value, 0, which yields a binary image of the same length and width as the input image. For the image segmentation and positioning technique, reference may be made to the related art, and the embodiments of the present application are not described herein in detail.
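A minimal sketch of how such a binary mask map could be built once the pupil/iris circles are known; the circular-annulus model, function name, and parameters are assumptions for illustration (real segmentation also excludes eyelids and reflections):

```python
import numpy as np

def iris_mask(h, w, center, r_pupil, r_iris, fg=255, bg=0):
    """Binary mask map: the iris annulus gets the first preset pixel value (255),
    everything else (pupil, sclera, background) the second preset value (0)."""
    ys, xs = np.indices((h, w))
    d = np.hypot(xs - center[0], ys - center[1])
    return np.where((d >= r_pupil) & (d < r_iris), fg, bg).astype(np.uint8)
```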
And S4, obtaining a third identification iris image according to the iris information, wherein the pupil radius and the iris radius of the third identification iris image are the same as those of the preset iris image.
In a possible embodiment, as shown in fig. 3, in step S4, a third iris image is obtained according to iris information, specifically including steps S41 to S43.
S41, judging whether the pupil radius of the preset iris image is the same as that of the second recognition iris image.
In the embodiment of the application, before the first effective area of the preset iris image and the second effective area of the recognition iris image are acquired, the pupil radius of the recognition iris image and the pupil radius of the preset iris image are compared. The purpose is to ensure that the subsequently acquired first and second effective areas meet the requirements of later image processing, that is, that the pupil radius of the transformation mask map is the same as that of the preset iris image and the iris radius of the transformation mask map is the same as that of the preset iris image; the acquisition of the transformation mask map is described in the subsequent embodiments.
In the embodiment of the application, when the pupil radius of the preset iris image is the same as that of the second recognition iris image, the iris radius of the preset iris image is also the same as that of the second recognition iris image. Therefore, when detecting the iris image, it is only necessary to determine whether the pupil radius of the preset iris image is the same as the pupil radius of the second recognition iris image. In the embodiment of the application, the difference between the pupil radius of the preset iris image and the pupil radius of the second recognition iris image is Δr = |r_p − r_p'|. When Δr is smaller than the preset pupil radius difference threshold, the pupil radius of the preset iris image is considered the same as that of the second recognition iris image; the embodiment of the application does not specifically limit the choice of this threshold.
S42, when the pupil radius of the preset iris image differs from that of the second recognition iris image, performing a scaling transformation on the mask map of the second recognition iris image according to the pupil radius of the preset iris image to obtain a transformation mask map; the pupil radius of the transformation mask map is the same as that of the preset iris image, and the iris radius of the transformation mask map is the same as that of the preset iris image.
In the embodiment of the application, when Δr is greater than or equal to the preset pupil radius difference threshold, the pupil radius of the preset iris image is considered different from that of the second recognition iris image. In this case, the pupil radius r_p of the preset iris image is taken as the standard: when r_p is greater than r_p', the mask map of the second recognition iris image is enlarged; when r_p is less than r_p', the mask map of the second recognition iris image is reduced. Finally, the pupil radius in the mask map of the second recognition iris image equals the pupil radius r_p of the preset iris image. The new mask map obtained by performing this series of scaling operations on the mask map of the second recognition iris image is the transformation mask map.
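The scaling step can be sketched as a nearest-neighbor rescale by the ratio of the pupil radii; the threshold default and function names are illustrative assumptions, not values from the patent:

```python
import numpy as np

def scale_mask(mask, r_src, r_dst):
    """Nearest-neighbor rescale of a binary mask by factor r_dst / r_src."""
    s = r_dst / r_src
    h, w = mask.shape
    nh, nw = max(1, round(h * s)), max(1, round(w * s))
    ys = np.minimum((np.arange(nh) / s).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) / s).astype(int), w - 1)
    return mask[np.ix_(ys, xs)]

def transform_mask(mask, r_pupil_preset, r_pupil_recog, delta_threshold=1.0):
    """Return the mask unchanged when the pupil radii already match (within the
    preset difference threshold); otherwise scale it toward the preset radius."""
    if abs(r_pupil_preset - r_pupil_recog) < delta_threshold:
        return mask
    return scale_mask(mask, r_pupil_recog, r_pupil_preset)
```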
S43, inputting the second recognition iris image and the transformation mask map into a preset image generation model, and generating a third recognition iris image.
In the embodiment of the application, an image generation model, such as a GAN or a diffusion model, is trained using an image generation technique; the model has the property of keeping the features in the image unchanged, so that the iris radius, the pupil radius, the inner and outer iris rings, and the region bounded by the iris remain unchanged. The embodiment of the application does not specifically limit the image generation technique adopted.
Specifically, the second recognition iris image and the transformation mask map are input into the preset image generation model to generate the third recognition iris image. The transformation mask map mainly serves as generation guidance, so that the second recognition iris image is iterated continuously. During model application, the transformation mask map is used as the reference standard: if the pupil radius and iris radius of the generated image are inconsistent with the transformation mask map, generation continues, and finally the third recognition iris image is obtained.
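The iterate-until-consistent control loop can be sketched abstractly; here an image is represented only by its (pupil radius, iris radius) pair, and `nudge` is a deliberately hypothetical stand-in for the patent's GAN/diffusion generator:

```python
def generate_until_match(radii, target_radii, generate, max_iters=100):
    """Iterate the generator until the pupil and iris radii match the
    transformation mask's radii (the reference standard)."""
    for _ in range(max_iters):
        if radii == target_radii:
            break
        radii = generate(radii, target_radii)
    return radii

def nudge(radii, target):
    """Stub 'generator' for illustration: moves each radius one pixel
    toward its target value per iteration."""
    step = lambda a, b: a + (b > a) - (b < a)
    return (step(radii[0], target[0]), step(radii[1], target[1]))
```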
In one possible embodiment, as shown in fig. 3, step S4 further includes step S44.
And S44, when the pupil radius of the preset iris image is the same as that of the second recognition iris image, directly determining the second recognition iris image as a third recognition iris image.
S5, acquiring a first effective area of a preset iris image and a second effective area of a third identification iris image, wherein the effective areas are areas with pixel values of a first preset pixel value in a mask image.
In one possible implementation, as shown in fig. 4, in step S5, a first effective area of the preset iris image and a second effective area of the third identified iris image are acquired, and specifically includes steps S51-S54.
S51, carrying out normalization processing on the preset iris image and the third recognition iris image to obtain a normalized preset iris image and a normalized recognition iris image.
In the embodiment of the application, normalization of an iris image refers to a process of applying a series of standard transformations to the iris image to convert it into a fixed standard form; the standard images obtained by this process are called the normalized preset iris image and the normalized recognition iris image.
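The patent does not specify the exact transformation; one common choice in iris recognition is Daugman's rubber-sheet model, which unwraps the iris annulus into a fixed-size rectangle. The following is an assumed sketch of that approach, with illustrative dimensions:

```python
import numpy as np

def normalize_iris(img, center, r_pupil, r_iris, n_radial=32, n_angular=128):
    """Rubber-sheet normalization: sample the annulus between the pupil and
    iris circles onto a fixed (n_radial x n_angular) rectangular grid."""
    out = np.zeros((n_radial, n_angular), dtype=img.dtype)
    for i in range(n_radial):
        # radial position, sampled at ring midpoints between the two circles
        r = r_pupil + (r_iris - r_pupil) * (i + 0.5) / n_radial
        for j in range(n_angular):
            theta = 2 * np.pi * j / n_angular
            x = int(round(center[0] + r * np.cos(theta)))
            y = int(round(center[1] + r * np.sin(theta)))
            if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
                out[i, j] = img[y, x]
    return out
```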
S52, extracting the first binarization feature and the first mask of the preset iris image from the normalized preset iris image by adopting a preset feature extraction mode, and extracting the second binarization feature and the second mask of the third recognition iris image from the normalized recognition iris image; the first mask is used to represent the point coordinates and point pixel values corresponding to the first binarization feature; the second mask is used to represent the point coordinates and point pixel values corresponding to the second binarization feature.
Specifically, in the application, features in the preset iris image are extracted from the normalized preset iris image by a classical Gabor-based feature extraction method or a neural-network feature extraction method, and the corresponding first binarization feature F1 with its first mask M1 is obtained using a binarization technique. Likewise, features in the third recognition iris image are extracted from the normalized recognition iris image by a classical Gabor-based feature extraction method or a neural-network feature extraction method, and the corresponding second binarization feature F2 with its second mask M2 is obtained using a binarization technique.
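A minimal sketch of the binarization step, assuming a real-valued filter response (e.g. from a Gabor filter) is already available; the sign-thresholding rule and names are illustrative assumptions:

```python
import numpy as np

def binarize_feature(response, valid):
    """Binarize a real-valued filter response by its sign; `valid` marks
    points inside the usable iris area and becomes the feature's mask."""
    feature = (response > 0).astype(np.uint8)
    mask = np.asarray(valid, dtype=np.uint8)
    return feature, mask
```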
S53, judging whether the point coordinates and the point pixel values corresponding to the first binarization feature are valid according to the first mask, and judging whether the point coordinates and the point pixel values corresponding to the second binarization feature are valid according to the second mask.
In the embodiment of the application, whether the point position corresponding to the first binarization feature is a non-zero point position is judged according to the first mask, and when the point position is a non-zero point position, the point position coordinates corresponding to the first binarization feature and the point position pixel value are determined to be effective. Otherwise, when the point position corresponding to the first binarization feature is the zero point position, determining that the point position coordinate corresponding to the first binarization feature and the point position pixel value are invalid. In the embodiment of the present application, the process of determining whether the point location coordinates and the point location pixel values corresponding to the second binarization feature are valid may refer to a specific implementation manner of the first binarization feature.
Illustratively, the first binarization feature includes at least one point location A, and the point locations in the first binarization feature are determined one by one according to a first mask. When the point position A is a non-zero point position, determining that the point position coordinate and the point position pixel value corresponding to the point position A are valid; and when the point position A is the zero point position, determining that the point position coordinate corresponding to the point position A and the point position pixel value are invalid. In the embodiment of the present application, the specific number of the points in the first binarization feature is not limited.
S54, when the point position coordinates and the point position pixel values corresponding to the first binarization feature are valid, determining a first valid area corresponding to the first binarization feature according to the first mask.
And S55, when the point position coordinates and the point position pixel values corresponding to the second binarization features are valid, determining a second effective area corresponding to the second binarization features according to the second mask.
In the embodiment of the application, when point A is judged to be a non-zero point according to the first mask, the point coordinates and point pixel value of the valid point A are obtained, and the corresponding point coordinates and point pixel values are obtained from the mask map of the preset iris image. Further, according to the first mask, the point coordinates and point pixel values of all valid points are obtained in turn, and the corresponding point coordinates and point pixel values are obtained from the mask map of the preset iris image. The first effective area is obtained from the point coordinates and point pixel values obtained from the mask map of the preset iris image. The acquisition of the second effective area may refer to the acquisition of the first effective area; the second effective area is acquired based on the mask map of the third recognition iris image.
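The per-point gathering described above can be sketched in vectorized form; the function name and the convention that the feature mask and mask map share a coordinate grid are assumptions:

```python
import numpy as np

def effective_area(feature_mask, mask_map):
    """Collect point coordinates and mask-map pixel values at every
    non-zero (valid) point of the binarization feature's mask."""
    coords = np.argwhere(feature_mask != 0)          # (row, col) of valid points
    values = mask_map[coords[:, 0], coords[:, 1]]    # pixel values from the mask map
    return coords, values
```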
Specifically, the acquisition of the mask map of the third recognition iris image covers two different cases. Referring to the case in step S44, when the pupil radius of the preset iris image is the same as that of the second recognition iris image, the mask map of the third recognition iris image is obtained directly from the mask map of the second recognition iris image. Referring to the case in step S42, when the pupil radius of the preset iris image differs from that of the second recognition iris image, the mask map of the third recognition iris image is obtained directly from the transformation mask map.
S6, respectively obtaining a preset iris edge image and an identification iris edge image according to the first effective area and the second effective area.
In a possible implementation manner, as shown in fig. 5, in step S6, a preset iris edge image and an identified iris edge image are obtained according to the first effective area and the second effective area, respectively, and specifically include steps S61-S66.
S61, acquiring a preset initial mark graph, wherein the pixel value of any point in the preset initial mark graph is a first pixel value.
S62, screening a plurality of overlapping points with the same pixel value and the same point position from the first effective area and the second effective area, wherein the pixel value of any overlapping point in the plurality of overlapping points is the second pixel value or the third pixel value.
S63, replacing the point position which is the same as the coordinate of the target coincidence point position in the preset initial mark image according to the coordinate and the pixel value of the target coincidence point position to obtain a comparison mark image; the target coincidence point is any one of a plurality of coincidence points.
For steps S61-S63, fig. 6 exemplarily shows the transformation process of the initialization map. Specifically, an initialization marker map T is first acquired, in which the pixel value of every point is -1. A plurality of coincident points with the same pixel value and the same position are counted in the first effective area and the second effective area. When the pixel value of a coincident point is 1, the pixel value at the corresponding coordinate in the initialization marker map T is set to 1; when the pixel value of a coincident point is 0, the pixel value at the corresponding coordinate in T is set to 0. The final marker map becomes the comparison marker map, as in fig. 7.
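Steps S61-S63 can be sketched as follows, assuming the two effective areas are given as aligned arrays of 0/1 values (any position not holding 0 or 1, or where the areas disagree, keeps the initial value -1); the function name is an assumption:

```python
import numpy as np

def comparison_marker_map(area1, area2, init=-1):
    """Initialize a marker map to -1, then copy the shared value (1 or 0)
    at every position where the two effective areas coincide."""
    a1, a2 = np.asarray(area1), np.asarray(area2)
    marker = np.full(a1.shape, init, dtype=int)
    coincident = (a1 == a2) & np.isin(a1, (0, 1))
    marker[coincident] = a1[coincident]
    return marker
```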
S64, constructing a plurality of connected regions according to the comparison marker map; the pixel values of the points in any one of the plurality of connected regions are the same.
S65, sorting the plurality of connected regions from large to small according to the area, and selecting the first N connected regions to obtain a connected region set.
Specifically, connected regions with pixel values of 1 and 0 are searched for in the comparison marker map, the area of each connected region is counted, and the first N connected regions are selected in descending order of area to form the connected region set S = {C1, C2, ..., CN}, where Ci is a connected region. In the embodiment of the application, the comparison marker map is given only by way of example; in practical applications, the number of points in the comparison marker map is huge and the connected regions in the map are numerous. Therefore, selecting the first N connected regions in descending order of area reduces redundant computation.
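A self-contained sketch of S64-S65: breadth-first flood fill over 4-connected neighbors, keeping only regions whose value is 0 or 1, then sorting by area. The 4-connectivity choice is an assumption (the patent does not specify connectivity):

```python
from collections import deque

def connected_regions(marker, values=(0, 1)):
    """4-connected regions of equal pixel value in the comparison marker map,
    returned as coordinate lists sorted by area, largest first."""
    h, w = len(marker), len(marker[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx] or marker[sy][sx] not in values:
                continue
            v, comp, queue = marker[sy][sx], [], deque([(sy, sx)])
            seen[sy][sx] = True
            while queue:  # flood fill one region
                y, x = queue.popleft()
                comp.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and marker[ny][nx] == v:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            regions.append(comp)
    return sorted(regions, key=len, reverse=True)

def top_n_regions(marker, n):
    """Connected region set: the first N regions in descending order of area."""
    return connected_regions(marker)[:n]
```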
And S66, mapping the edge shapes corresponding to the connected region set into a preset iris image and a third identification iris image respectively to obtain the preset iris edge image and the identification iris edge image.
In the embodiment of the application, detailed visual features in the iris image, such as spots, stripes, filaments, crowns and recesses, can serve as connected regions; these regions have geometric characteristics, and the matching regions in the iris image can be obtained through the edge shapes corresponding to the connected region set acquired in S61-S65.
Specifically, edge shape information is acquired for each connected region in the connected region set by using a boundary tracking technique. And marking a preset iris edge image in the preset iris image and an identified iris edge image in the third identified iris image according to the edge shape information after inverse normalization.
And S7, displaying the preset iris edge image and the identification iris edge image to a user.
The user obtains the preset iris edge image and the recognition iris edge image; images annotated according to edge shape are more intuitive. By clearly observing the preset iris edge image and the recognition iris edge image, the user completes the recheck of the iris recognition result.
In a possible implementation manner, before the preset iris edge image and the identification iris edge image are obtained according to the first effective area and the second effective area, the method further includes:
Acquiring the Hamming distance between the first effective area and the second effective area; and when the Hamming distance is smaller than a preset Hamming distance threshold, determining the third recognition iris image as a valid image.
In one possible implementation, when the hamming distance is greater than or equal to the preset hamming distance threshold, the third recognition iris image is determined to be an invalid image, and the third recognition iris image is reacquired.
The Hamming distance in the embodiment of the present application is used to represent the degree of similarity between the first effective area and the second effective area; when the Hamming distance is greater than or equal to the preset Hamming distance threshold, the difference between the first effective area and the second effective area is large. In this case, the third recognition iris image needs to be re-acquired to obtain a new second effective area.
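The validity check can be sketched with a normalized Hamming distance; the threshold default is an illustrative assumption, not a value from the patent:

```python
import numpy as np

def hamming_distance(area1, area2):
    """Normalized Hamming distance: fraction of positions where the two
    effective areas differ (0.0 means identical, 1.0 means fully different)."""
    a, b = np.asarray(area1), np.asarray(area2)
    return np.count_nonzero(a != b) / a.size

def is_valid_image(area1, area2, threshold=0.35):
    """Accept the third recognition iris image only when the areas are
    similar enough (distance below the preset threshold)."""
    return hamming_distance(area1, area2) < threshold
```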
By adopting the above embodiments, one or more of the following beneficial effects of the application can be achieved:
1. In the recheck of the iris recognition result, the preset iris image and the recognition iris image are processed, so that when performing the recheck the user can more intuitively and clearly see whether the iris images differ, helping the user complete the recheck of the iris images.
2. The recognition iris image is corrected by obtaining the horizontal included angles of the iris images, so that the recognition iris image and the preset iris image lie on the same horizontal line, which facilitates further processing of the images.
3. When the pupil radius of the preset iris image differs from that of the recognition iris image, a scaling transformation is applied to the mask map of the recognition iris image to obtain a transformation mask map, and a new recognition iris image with the same pupil radius as the preset iris image is acquired according to the transformation mask map.
4. Binarization features and their corresponding masks are extracted from the normalized preset iris image and the normalized recognition iris image, thereby obtaining effective areas that can represent the features in the iris images.
5. According to the first effective area and the second effective area, a connected region set composed of a plurality of connected regions is obtained, so that, according to the edge shapes of the connected region set, a preset iris edge image and a recognition iris edge image that display the edge features of the preset iris image and the recognition iris image more clearly and directly are obtained.
The embodiment of the application provides a visual processing device for comparing iris images, as shown in fig. 8. The apparatus includes an image acquisition unit 81, an image correction unit 82, an information acquisition unit 83, an image modification unit 84, an area acquisition unit 85, an edge image acquisition unit 86, and an image display unit 87.
An image acquisition unit 81 for acquiring a preset iris image and a first recognition iris image; the preset iris image is an iris image corresponding to the target ID stored in an iris image database; the first recognition iris image is an iris image corresponding to the suspected target ID.
An image correction unit 82, configured to correct the first identified iris image according to the coordinates of the corner of the eye of the preset iris image and the coordinates of the corner of the eye of the first identified iris image, to obtain a corrected identified iris image; the corrected identified iris image is a second identified iris image.
The information obtaining unit 83 is configured to obtain iris information in the preset iris image and the second identified iris image, where the iris information includes a pupil radius, an iris radius, and a mask map, and a pixel value of any one pixel point in the mask map is a first preset pixel value or a second preset pixel value.
The image changing unit 84 is configured to obtain a third identified iris image according to the iris information, where the pupil radius and the iris radius of the third identified iris image are the same as those of the preset iris image.
The area obtaining unit 85 is configured to obtain a first effective area of the preset iris image and a second effective area of the third identified iris image, where the effective areas are areas in which pixel values in the mask image are first preset pixel values;
And an edge image obtaining unit 86, configured to obtain a preset iris edge image and an identified iris edge image according to the first effective area and the second effective area, respectively.
And an image display unit 87 for displaying the preset iris edge image and the recognition iris edge image to the user.
In one possible implementation, the image correction unit 82 includes a first coordinate acquisition module, a first included angle acquisition module, a second coordinate acquisition module, a second included angle acquisition module, an angle calculation module, and a correction module.
The first coordinate acquisition module is used for acquiring left eye corner coordinates and right eye corner coordinates of a preset iris image.
The first included angle acquisition module is used for acquiring a first horizontal included angle of the preset iris image according to the left eye corner coordinates and the right eye corner coordinates of the preset iris image.
And the second coordinate acquisition module is used for acquiring the left eye corner coordinates and the right eye corner coordinates of the first identification iris image.
And the second included angle acquisition module is used for acquiring a second horizontal included angle of the first recognition iris image according to the left eye corner coordinate and the right eye corner coordinate of the first recognition iris image.
And the correction module is used for determining the correction angle of the first identification iris image according to the first horizontal included angle and the second horizontal included angle.
In one possible implementation, the image modification unit 84 includes a pupil detection module, a first modification module, a model application module, and a second modification module.
And the pupil detection module is used for judging whether the pupil radius of the preset iris image is the same as the pupil radius of the second identification iris image.
The first changing module is used for performing telescopic transformation on the mask map of the second identification iris image according to the pupil radius of the preset iris image when the pupil radius of the preset iris image is different from the pupil radius of the second identification iris image, so as to obtain a transformation mask map; the pupil radius of the transformation mask map is the same as that of the preset iris image, and the iris radius of the transformation mask map is the same as that of the preset iris image.
And the model application module is used for inputting the second identification iris image and the transformation mask map into a preset image generation model to generate a third identification iris image.
And the second changing module is used for directly determining the second recognition iris image as a third recognition iris image when the pupil radius of the preset iris image is the same as that of the second recognition iris image.
In one possible implementation manner, the region acquiring unit 85 includes a normalization processing module, a feature extraction module, a first region determining module, and a second region determining module.
The normalization processing module is used for carrying out normalization processing on the preset iris image and the third recognition iris image to obtain a normalized preset iris image and a normalized recognition iris image.
The feature extraction module is used for extracting the first binarization feature and the first mask of the preset iris image from the normalized preset iris image by adopting a preset feature extraction mode, and extracting the second binarization feature and the second mask of the third recognition iris image from the normalized recognition iris image; the first mask is used to represent the point coordinates and point pixel values corresponding to the first binarization feature; the second mask is used to represent the point coordinates and point pixel values corresponding to the second binarization feature.
And the first area determining module is used for determining a first effective area corresponding to the first binarization characteristic according to the first mask.
And the second region determining module is used for determining a second effective region corresponding to the second binarization feature according to the second mask.
In one possible implementation, the edge image acquisition unit 86 includes a marker map acquisition module, a point screening module, a point transformation module, a region construction module, a region screening module, and an edge image acquisition module.
The mark image acquisition module is used for acquiring a preset initial mark image, wherein the pixel value of any point position in the preset initial mark image is a first pixel value.
And the point position screening module is used for screening a plurality of coincident point positions with the same pixel value and the same point position from the first effective area and the second effective area, wherein the pixel value of any one of the coincident point positions is the second pixel value or the third pixel value.
The point position transformation module is used for replacing the point position which is the same as the coordinate of the target coincidence point position in the preset initial mark image according to the coordinate and the pixel value of the target coincidence point position to obtain a comparison mark image; the target coincidence point is any one of a plurality of coincidence points.
The region construction module is used for constructing a plurality of connected regions according to the comparison marker map; the pixel values of the points in any one of the plurality of connected regions are the same.
And the region screening module is used for sorting the plurality of connected regions from large to small according to the area, and selecting the first N connected regions to obtain a connected region set.
The edge image acquisition module is used for mapping the edge shapes corresponding to the connected region set into a preset iris image and a third identification iris image respectively so as to obtain the preset iris edge image and the identification iris edge image.
It should be noted that: in the device provided in the above embodiment, when implementing the functions thereof, only the division of the above functional modules is used as an example, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the embodiments of the apparatus and the method provided in the foregoing embodiments belong to the same concept, and specific implementation processes of the embodiments of the method are detailed in the method embodiments, which are not repeated herein.
Referring to fig. 9, a schematic structural diagram of an electronic device is provided in an embodiment of the present application. As shown in fig. 9, the electronic device 900 may include: at least one processor 901, at least one network interface 904, a user interface 903, memory 905, at least one communication bus 902.
The communication bus 902 is used to realize connection and communication between these components.
The user interface 903 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 903 may further include a standard wired interface and a wireless interface.
The network interface 904 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Processor 901 may include one or more processing cores. The processor 901 connects various parts within the overall server using various interfaces and lines, and performs various functions of the server and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 905 and invoking data stored in the memory 905. Alternatively, the processor 901 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 901 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 901 and may be implemented separately by a single chip.
The Memory 905 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Optionally, the memory 905 includes a non-transitory computer readable medium (non-transitory computer-readable storage medium). The memory 905 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 905 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above-described respective method embodiments, etc.; the storage data area may store data or the like involved in the above respective method embodiments. The memory 905 may also optionally be at least one storage device located remotely from the processor 901. As shown in fig. 9, an operating system, a network communication module, a user interface module, and an application program regarding a visualization process for comparing iris images may be included in the memory 905 as one type of computer storage medium.
In the electronic device 900 shown in fig. 9, the user interface 903 is mainly used to provide an input interface for a user and to acquire the data the user inputs, while the processor 901 may be used to invoke the application program for the visual processing of iris images stored in the memory 905; when executed by one or more processors, this application causes the electronic device 900 to perform the method described in one or more of the embodiments above.
It should be noted that, for simplicity of description, the foregoing method embodiments are presented as a series of acts, but those skilled in the art should understand that the present application is not limited by the order of acts described, as some steps may, in accordance with the present application, be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a division by logical function, and other divisions are possible in an actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the couplings or direct couplings or communication connections shown or discussed between components may be indirect couplings or communication connections through some service interfaces, devices, or units, and may be electrical or take other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated units may be implemented in the form of hardware or of software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable memory. Based on this understanding, the technical solution of the present application, in essence (or the part that contributes over the prior art, or the whole or part of the technical solution), may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit its scope; equivalent changes and modifications made in accordance with the teachings of this disclosure likewise fall within its scope. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles, including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains.

Claims (10)

1. A visual processing method for comparing iris images, applied to a server, the method comprising:
acquiring a preset iris image and a first recognition iris image; the preset iris image being the iris image corresponding to a target ID stored in an iris image database, and the first recognition iris image being the iris image corresponding to a suspected target ID;
correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image to obtain a corrected recognition iris image; the corrected recognition iris image being a second recognition iris image;
acquiring iris information of the preset iris image and the second recognition iris image, wherein the iris information comprises a pupil radius, an iris radius, and a mask map, and the pixel value of any pixel point in the mask map is either a first preset pixel value or a second preset pixel value;
obtaining a third recognition iris image according to the iris information, wherein the pupil radius and the iris radius of the third recognition iris image are the same as those of the preset iris image;
acquiring a first effective area of the preset iris image and a second effective area of the third recognition iris image, wherein an effective area is the area whose pixel values in the mask map equal the first preset pixel value;
obtaining a preset iris edge image and a recognition iris edge image according to the first effective area and the second effective area, respectively; and
displaying the preset iris edge image and the recognition iris edge image to a user.
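Read as an algorithm, the steps of claim 1 can be sketched end to end. The following toy sketch assumes that the mask map uses 255 for the first preset pixel value and 0 for the second, that the correction and radius-matching steps (detailed in claims 2 to 4) have already produced the third recognition iris image, and that keeping pixels where both effective areas agree is an acceptable stand-in for the edge-image construction; all names are illustrative, not the patent's.

```python
import numpy as np

FIRST_PRESET = 255  # assumed mask value marking a valid iris pixel

def compare_and_visualize(preset_img, third_recog_img, preset_mask, recog_mask):
    """Toy data flow of claim 1: effective areas -> edge images."""
    area_p = preset_mask == FIRST_PRESET   # first effective area
    area_r = recog_mask == FIRST_PRESET    # second effective area
    overlap = area_p & area_r
    # Crude stand-in for the patent's edge-image construction: keep
    # pixels only where both effective areas mark the point as valid.
    preset_edge = np.where(overlap, preset_img, 0)
    recog_edge = np.where(overlap, third_recog_img, 0)
    return preset_edge, recog_edge         # images shown to the user
```

The two returned arrays correspond to the preset iris edge image and the recognition iris edge image that the final step displays.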
2. The method according to claim 1, wherein correcting the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image comprises:
acquiring the left eye corner coordinates and the right eye corner coordinates of the preset iris image;
obtaining a first horizontal included angle of the preset iris image according to its left and right eye corner coordinates;
acquiring the left eye corner coordinates and the right eye corner coordinates of the first recognition iris image;
obtaining a second horizontal included angle of the first recognition iris image according to its left and right eye corner coordinates;
determining a correction angle for the first recognition iris image according to the first horizontal included angle and the second horizontal included angle; and
correcting the first recognition iris image by the correction angle to obtain the second recognition iris image, the horizontal included angle of the second recognition iris image being the first horizontal included angle.
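Expressed as code, the angle computation of claim 2 can be sketched as follows. This is a minimal sketch, assuming image coordinates (x, y) and taking the correction angle as the first horizontal included angle minus the second; the function names are illustrative.

```python
import math

def horizontal_angle(left, right):
    """Angle (radians) between the line through the two eye corners
    and the horizontal axis."""
    (xl, yl), (xr, yr) = left, right
    return math.atan2(yr - yl, xr - xl)

def correction_angle(preset_left, preset_right, recog_left, recog_right):
    """Rotation that brings the recognition image's corner line to the
    preset image's horizontal included angle."""
    return (horizontal_angle(preset_left, preset_right)
            - horizontal_angle(recog_left, recog_right))
```

Rotating the first recognition iris image by this angle aligns its eye-corner line with that of the preset iris image.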
3. The method according to claim 1, wherein obtaining a third recognition iris image according to the iris information specifically comprises:
judging whether the pupil radius of the preset iris image is the same as the pupil radius of the second recognition iris image;
when the two pupil radii differ, performing a scaling transformation on the mask map of the second recognition iris image according to the pupil radius of the preset iris image to obtain a transformed mask map, the pupil radius and the iris radius of the transformed mask map being the same as those of the preset iris image; and
inputting the second recognition iris image and the transformed mask map into a preset image generation model to generate the third recognition iris image.
4. The method according to claim 1, wherein obtaining a third recognition iris image according to the iris information further specifically comprises:
judging whether the pupil radius of the preset iris image is the same as the pupil radius of the second recognition iris image; and
when the two pupil radii are the same, directly determining the second recognition iris image to be the third recognition iris image.
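Claims 3 and 4 together branch on whether the pupil radii match. A minimal sketch of that branch is below; the nearest-neighbour rescale is only a stand-in for the patent's preset image generation model, and the uniform scale factor is an assumption.

```python
import numpy as np

def match_pupil_radius(mask, recog_pupil_r, preset_pupil_r):
    """If the radii already match, pass the mask through (claim 4);
    otherwise rescale it so the pupil radius matches the preset image
    (a nearest-neighbour stand-in for claim 3's transformation)."""
    if recog_pupil_r == preset_pupil_r:
        return mask.copy()
    scale = preset_pupil_r / recog_pupil_r
    h, w = mask.shape
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    # Nearest-neighbour index maps for rows and columns.
    ys = np.minimum((np.arange(new_h) / scale).astype(int), h - 1)
    xs = np.minimum((np.arange(new_w) / scale).astype(int), w - 1)
    return mask[np.ix_(ys, xs)]
```

In the patent itself the transformed mask map is then fed, together with the second recognition iris image, into the image generation model.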
5. The method according to claim 1, wherein acquiring the first effective area of the preset iris image and the second effective area of the third recognition iris image specifically comprises:
normalizing the preset iris image and the third recognition iris image to obtain a normalized preset iris image and a normalized recognition iris image;
using a preset feature extraction method, extracting a first binarized feature and a first mask of the preset iris image from the normalized preset iris image, and extracting a second binarized feature and a second mask of the third recognition iris image from the normalized recognition iris image;
judging whether the point coordinates and point pixel values corresponding to the first binarized feature are valid according to the first mask, and whether the point coordinates and point pixel values corresponding to the second binarized feature are valid according to the second mask;
when the point coordinates and point pixel values corresponding to the first binarized feature are valid, determining the first effective area corresponding to the first binarized feature according to the first mask; and
when the point coordinates and point pixel values corresponding to the second binarized feature are valid, determining the second effective area corresponding to the second binarized feature according to the second mask.
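The mask-based validity check of claim 5 amounts to keeping a feature point only where its mask bit marks it as valid. A minimal sketch, assuming a binary mask and using -1 as an illustrative flag for ineffective points:

```python
import numpy as np

def effective_points(binar_feature, mask):
    """A point of the binarized feature counts as effective only where
    the corresponding mask bit is set; other points are flagged -1."""
    valid = mask.astype(bool)
    area = np.where(valid, binar_feature, -1)
    return area, valid
```

Applying this once per image yields the first and second effective areas that the later claims compare.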
6. The method according to claim 1, wherein obtaining a preset iris edge image and a recognition iris edge image according to the first effective area and the second effective area, respectively, specifically comprises:
acquiring a preset initial mark map, in which the pixel value of any point is a first pixel value;
screening out, from the first effective area and the second effective area, a plurality of coincident points having the same position and the same pixel value, the pixel value of any coincident point being either a second pixel value or a third pixel value;
replacing, according to the coordinates and pixel value of a target coincident point, the point at the same coordinates in the preset initial mark map, so as to obtain a comparison mark map; the target coincident point being any one of the plurality of coincident points;
constructing a plurality of connected regions from the comparison mark map, the pixel values of all points within any one connected region being the same;
sorting the connected regions from large to small and selecting the first N of them to obtain a connected region set; and
mapping the edge shapes corresponding to the connected region set into the preset iris image and the third recognition iris image, respectively, to obtain the preset iris edge image and the recognition iris edge image.
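The coincident-point screening and connected-region construction of claim 6 can be sketched as follows. This assumes 4-connectivity and a background value of 0 for the initial mark map; both choices are illustrative, since the patent does not fix them.

```python
from collections import deque
import numpy as np

def coincident_map(area_a, area_b, background=0):
    """Mark the points where the two effective areas agree in both
    position and pixel value (claim 6's comparison mark map)."""
    same = (area_a == area_b) & (area_a != background)
    out = np.full(area_a.shape, background)
    out[same] = area_a[same]
    return out

def top_n_regions(label_map, n, background=0):
    """4-connected components of equal-valued pixels, largest n first."""
    h, w = label_map.shape
    seen = np.zeros((h, w), bool)
    regions = []
    for i in range(h):
        for j in range(w):
            if seen[i, j] or label_map[i, j] == background:
                continue
            val, comp, q = label_map[i, j], [], deque([(i, j)])
            seen[i, j] = True
            while q:                      # breadth-first flood fill
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and not seen[ny, nx]
                            and label_map[ny, nx] == val):
                        seen[ny, nx] = True
                        q.append((ny, nx))
            regions.append(comp)
    regions.sort(key=len, reverse=True)
    return regions[:n]
```

The selected regions' outlines would then be mapped back onto the two iris images to form the edge images.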
7. The method of claim 2, wherein the correction angle is determined according to the following formula:
θ = arctan((y2 − y1) / (x2 − x1)) − arctan((y4 − y3) / (x4 − x3))
wherein θ is the correction angle, (x1, y1) and (x2, y2) are respectively the left and right eye corner coordinates of the preset iris image, and (x3, y3) and (x4, y4) are respectively the left and right eye corner coordinates of the first recognition iris image.
8. The method of claim 1, wherein, before obtaining the preset iris edge image and the recognition iris edge image according to the first effective area and the second effective area, respectively, the method further comprises:
acquiring the Hamming distance between the first effective area and the second effective area; and
when the Hamming distance is smaller than a preset Hamming distance threshold, determining the third recognition iris image to be a valid image.
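A standard way to realize claim 8's check is a normalized Hamming distance over the mutually valid bits of the two binarized codes. A minimal sketch; the 0.32 default is an illustrative Daugman-style cut-off, not the patent's (unspecified) preset threshold.

```python
import numpy as np

def masked_hamming(code_a, code_b, valid):
    """Fraction of mutually valid bit positions where the two binarized
    iris codes disagree; returns 1.0 when no position is valid."""
    valid = valid.astype(bool)
    n = np.count_nonzero(valid)
    if n == 0:
        return 1.0
    return float(np.count_nonzero(code_a[valid] != code_b[valid])) / n

def is_valid_image(code_a, code_b, valid, threshold=0.32):
    # Threshold value is illustrative only; the patent leaves it preset.
    return masked_hamming(code_a, code_b, valid) < threshold
```

A distance below the threshold marks the third recognition iris image as valid, gating the edge-image visualization step.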
9. A visual processing device for comparing iris images, characterized in that the device comprises an image acquisition unit (81), an image correction unit (82), an information acquisition unit (83), an image transformation unit (84), a region acquisition unit (85), an edge image acquisition unit (86), and an image display unit (87), wherein:
the image acquisition unit (81) is used to acquire a preset iris image and a first recognition iris image; the preset iris image being the iris image corresponding to a target ID stored in an iris image database, and the first recognition iris image being the iris image corresponding to a suspected target ID;
the image correction unit (82) is used to correct the first recognition iris image according to the eye corner point coordinates of the preset iris image and the eye corner point coordinates of the first recognition iris image to obtain a corrected recognition iris image; the corrected recognition iris image being a second recognition iris image;
the information acquisition unit (83) is used to acquire iris information of the preset iris image and the second recognition iris image, wherein the iris information comprises a pupil radius, an iris radius, and a mask map, and the pixel value of any pixel point in the mask map is either a first preset pixel value or a second preset pixel value;
the image transformation unit (84) is used to obtain a third recognition iris image according to the iris information, the pupil radius and the iris radius of the third recognition iris image being the same as those of the preset iris image;
the region acquisition unit (85) is used to acquire a first effective area of the preset iris image and a second effective area of the third recognition iris image, wherein an effective area is the area whose pixel value in the mask map is the first preset pixel value;
the edge image acquisition unit (86) is used to obtain a preset iris edge image and a recognition iris edge image according to the first effective area and the second effective area, respectively; and
the image display unit (87) is used to display the preset iris edge image and the recognition iris edge image to a user.
10. An electronic device comprising a processor (901), a user interface (903), a network interface (904), and a memory (905), the memory (905) being used to store instructions, the user interface (903) and the network interface (904) being used to communicate with other devices, and the processor (901) being used to execute the instructions stored in the memory (905) so as to cause the electronic device (900) to perform the method according to any one of claims 1-8.
CN202310712499.8A 2023-06-16 2023-06-16 Visual processing method and device for comparing iris images Pending CN116798107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310712499.8A CN116798107A (en) 2023-06-16 2023-06-16 Visual processing method and device for comparing iris images


Publications (1)

Publication Number Publication Date
CN116798107A true CN116798107A (en) 2023-09-22

Family

ID=88043401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310712499.8A Pending CN116798107A (en) 2023-06-16 2023-06-16 Visual processing method and device for comparing iris images

Country Status (1)

Country Link
CN (1) CN116798107A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100014718A1 (en) * 2008-04-17 2010-01-21 Biometricore, Inc Computationally Efficient Feature Extraction and Matching Iris Recognition
CN104091155A (en) * 2014-07-04 2014-10-08 武汉工程大学 Rapid iris positioning method with illumination robustness
CN104484649A (en) * 2014-11-27 2015-04-01 北京天诚盛业科技有限公司 Method and device for identifying irises
CN105160306A (en) * 2015-08-11 2015-12-16 北京天诚盛业科技有限公司 Iris image blurring determination method and device
CN106022315A (en) * 2016-06-17 2016-10-12 北京极创未来科技有限公司 Pupil center positioning method for iris recognition
CN108734064A (en) * 2017-04-20 2018-11-02 上海耕岩智能科技有限公司 A kind of method and apparatus of iris recognition
CN110059589A (en) * 2019-03-21 2019-07-26 昆山杜克大学 The dividing method of iris region in a kind of iris image based on Mask R-CNN neural network
CN111161276A (en) * 2019-11-27 2020-05-15 天津中科智能识别产业技术研究院有限公司 Iris normalized image forming method
CN111476808A (en) * 2020-03-19 2020-07-31 北京万里红科技股份有限公司 Iris image definition evaluation method and device
CN114445904A (en) * 2021-12-20 2022-05-06 北京无线电计量测试研究所 Iris segmentation method, apparatus, medium, and device based on full convolution neural network
CN115457126A (en) * 2022-08-29 2022-12-09 中汽创智科技有限公司 Pupil positioning method and device, electronic equipment and storage medium
KR20220169770A (en) * 2021-06-21 2022-12-28 주식회사 에이제이투 Apparatus and method for generating image for iris recognition
US20230080861A1 (en) * 2020-02-20 2023-03-16 Eyecool Shenzhen Technology Co., Ltd. Automatic Iris Capturing Method And Apparatus, Computer-Readable Storage Medium, And Computer Device
CN115984538A (en) * 2022-12-16 2023-04-18 北京无线电计量测试研究所 Iris positioning method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李庆嵘 (Li Qingrong), 马争 (Ma Zheng): "虹膜定位算法研究" [Research on iris localization algorithms], 电子科技大学学报 [Journal of University of Electronic Science and Technology of China], no. 01, 28 February 2002 (2002-02-28), pages 10-12 *

Similar Documents

Publication Publication Date Title
US11109941B2 (en) Tracking surgical items with prediction of duplicate imaging of items
JP3725998B2 (en) Fingerprint verification device and verification method
US20060008124A1 (en) Iris image-based recognition system
EP3121759A2 (en) System and method of biometric enrollment and verification
US20180075291A1 (en) Biometrics authentication based on a normalized image of an object
Berretti et al. Selecting stable keypoints and local descriptors for person identification using 3D face scans
CN112001244B (en) Computer-aided iris comparison method and device
US10679094B2 (en) Automatic ruler detection
WO2018176514A1 (en) Fingerprint registration method and device
WO2017161636A1 (en) Fingerprint-based terminal payment method and device
CN112396050B (en) Image processing method, device and storage medium
CN112132812B (en) Certificate verification method and device, electronic equipment and medium
CN108389053B (en) Payment method, payment device, electronic equipment and readable storage medium
CN111222452A (en) Face matching method and device, electronic equipment and readable storage medium
US10740590B2 (en) Skin information processing method, skin information processing device, and non-transitory computer-readable medium
CN105190689A (en) Image processing including adjoin feature based object detection, and/or bilateral symmetric object segmentation
CN106725564B (en) Image processing apparatus and image processing method
JP5299196B2 (en) Marker detection device and program for marker detection device
CN116798107A (en) Visual processing method and device for comparing iris images
JP4775957B2 (en) Face detection device
CN111753723B (en) Fingerprint identification method and device based on density calibration
CN115690486A (en) Method, device and equipment for identifying focus in image and storage medium
Shamsafar A new feature extraction method from dental X-ray images for human identification
KR102333453B1 (en) Smartphone-based identity verification method using fingerprints and facial images
CN109145833B (en) Dual-mode credit card handling terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination