CN115083006A - Iris recognition model training method, iris recognition method and iris recognition device - Google Patents


Publication number
CN115083006A
Authority
CN
China
Prior art keywords
iris
image
periocular
recognized
recognition model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210963759.4A
Other languages
Chinese (zh)
Inventor
李茂林 (Li Maolin)
张小亮 (Zhang Xiaoliang)
戚纪纲 (Qi Jigang)
Current Assignee
Beijing Superred Technology Co Ltd
Original Assignee
Beijing Superred Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Superred Technology Co Ltd filed Critical Beijing Superred Technology Co Ltd
Priority claimed from application CN202210963759.4A
Publication of CN115083006A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present application relates to the field of image processing, and in particular to an iris recognition model training method, an iris recognition method and apparatus, an electronic device, and a storage medium. The model training method first segments an iris training sample into an iris region and a periocular region; it then obtains an iris image and a periocular image of the sample, where the iris image is an image of the iris region and the periocular image is obtained by filling the iris region of the training sample with a fill pixel value; finally, it trains an iris recognition model using both the iris image and the periocular image. Because the model is trained on the iris image and the periocular image together, the accuracy and reliability of iris recognition with the trained model are improved.

Description

Iris recognition model training method, iris recognition method and iris recognition device
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an iris recognition model training method, an iris recognition device, an electronic apparatus, and a storage medium.
Background
In recent years, iris recognition technology has developed rapidly and has attracted wide attention from academia, industry, government, and the military. Biometric features are difficult to forge or imitate, and among the many biometric modalities the iris has great advantages owing to its stability, uniqueness, and non-invasiveness, giving iris recognition broad market prospects and scientific research value.
However, iris images are easily degraded by the capture environment, such as poor illumination, lens occlusion, or an ill-positioned subject, which harms recognition performance. The iris region is also small, so in an interfering environment, recognition that relies only on iris-region features is unreliable.
Disclosure of Invention
The present invention provides an iris recognition model training method, an iris recognition method and apparatus, an electronic device, and a storage medium. Iris features and periocular features are effectively fused into a single fusion feature, and recognition is performed on that fusion feature, which effectively improves recognition reliability in interfering environments.
In a first aspect, an embodiment of the present invention provides an iris recognition model training method, including:
dividing the iris training sample into an iris region and a periocular region;
acquiring an iris image and a periocular image of an iris training sample; the iris image is an image of an iris region, and the eye periphery image is an image obtained by filling the iris region of an iris training sample with filling pixel values;
and training an iris recognition model by using the iris image and the periocular image.
The method first segments an iris training sample into an iris region and a periocular region; it then obtains an iris image and a periocular image of the sample, where the iris image is an image of the iris region and the periocular image is obtained by filling the iris region of the training sample with a fill pixel value; finally, it trains an iris recognition model using both images. Training on the iris image and the periocular image together improves the accuracy and reliability of iris recognition with the trained model.
Optionally, the training of the iris recognition model using the iris image and the periocular image comprises:
acquiring iris characteristics of an iris training sample based on the iris image;
acquiring periocular characteristics of an iris training sample based on periocular images;
carrying out weighted fusion on the iris features and the periocular features to obtain fusion features;
and training an iris recognition model according to the iris characteristics, the periocular characteristics and the fusion characteristics.
The fusion feature obtained by weighted fusion of the iris feature and the periocular feature combines information from both, which helps improve iris recognition accuracy; training the iris recognition model on the individual iris feature, the individual periocular feature, and the fusion feature can improve recognition accuracy further.
Optionally, the training of the iris recognition model according to the iris features, the eye circumference features and the fusion features includes:
calculating a loss value of the iris recognition model according to the iris characteristics, the periocular characteristics and the fusion characteristics;
and training an iris recognition model according to the loss value.
By calculating a loss value for the iris recognition model and then training the model against that loss value, the model can reach high recognition accuracy.
Optionally, the loss values include: loss values corresponding to iris features, loss values corresponding to periocular features, and loss values corresponding to fusion features.
Optionally, the fill pixel value is a preset pixel value or is derived from the pixel values of the periocular region.
In a second aspect, an embodiment of the present invention provides an iris identification method, including:
dividing an image to be identified into an iris area to be identified and a periocular area to be identified;
acquiring an iris image to be recognized and a periocular image to be recognized of an image to be recognized; the iris image to be recognized is an image of the iris area to be recognized, and the periocular image to be recognized is an image obtained by filling the iris area to be recognized of the image to be recognized with the filling pixel value;
inputting the iris image to be recognized and the periocular image to be recognized into an iris recognition model, and recognizing the iris in the image to be recognized by using the iris recognition model.
The method first segments the image to be recognized into an iris region to be recognized and a periocular region to be recognized; it then obtains an iris image to be recognized and a periocular image to be recognized, where the former is an image of the iris region to be recognized and the latter is obtained by filling that iris region with the fill pixel value; finally, both images are input into the iris recognition model, which recognizes the iris in the image. Because the model combines the iris image and the periocular image, recognition accuracy and reliability are effectively improved.
In a third aspect, an embodiment of the present invention provides an iris recognition model training apparatus, including:
the first segmentation module is used for segmenting the iris training sample into an iris region and a periocular region;
the first acquisition module is used for acquiring an iris image and a periocular image of an iris training sample; the iris image is an image of an iris region, and the eye periphery image is an image obtained by filling the iris region of an iris training sample with filling pixel values;
and the training module is used for training the iris recognition model by utilizing the iris image and the periocular image.
In a fourth aspect, an embodiment of the present invention provides an iris recognition apparatus, including:
the second segmentation module is used for segmenting the image to be identified into an iris area to be identified and a periocular area to be identified;
the second acquisition module is used for acquiring an iris image to be identified and a periocular image to be identified of the image to be identified; the iris image to be recognized is an image of the iris area to be recognized, and the periocular image to be recognized is an image obtained by filling the iris area to be recognized of the image to be recognized with the filling pixel value;
and the identification module is used for inputting the iris image to be identified and the periocular image to be identified into the iris identification model and identifying the iris in the image to be identified by using the iris identification model.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method according to any one of the first aspect or the second aspect when executing the program.
In a sixth aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the method according to any one of the first or second aspects.
Advantageous effects
The invention provides an iris recognition model training method, an iris recognition method and apparatus, an electronic device, and a storage medium. The model training method first segments an iris training sample into an iris region and a periocular region; it then obtains an iris image and a periocular image of the sample, where the iris image is an image of the iris region and the periocular image is obtained by filling the iris region of the training sample with a fill pixel value; finally, it trains an iris recognition model using both images. Training on the iris image and the periocular image together improves the accuracy and reliability of iris recognition with the trained model.
It should be understood that the statements herein are not intended to identify key or critical features of any embodiment of the invention, nor to limit its scope. Other features of the present invention will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present invention will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, the same or similar reference numerals denote the same or similar elements.
FIG. 1 is a flowchart of an iris recognition model training method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of an iris recognition model according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a feature fusion module according to an embodiment of the present invention;
FIG. 4 is a flow chart of a method for iris recognition according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of an iris recognition model training apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an iris recognition apparatus according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device according to an embodiment of the invention.
Detailed Description
To help those skilled in the art better understand the technical solutions in one or more embodiments of this specification, those solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a subset of the possible embodiments, not all of them. All other embodiments that a person of ordinary skill in the art can derive from the embodiments described here without inventive effort shall fall within the scope of protection of this document.
It should be noted that the descriptions in the embodiments of the present invention are intended only to illustrate the technical solutions clearly and do not limit them.
FIG. 1 is a flowchart of an iris recognition model training method according to an embodiment of the present invention; referring to fig. 1, the present embodiment provides an iris recognition model training method, including:
s101, dividing the iris training sample into an iris area and a periocular area.
In this embodiment, the iris training sample is segmented into the iris region and the periocular region by inputting it into an iris segmentation model, which may be, for example, an FCN, SegNet, or U-Net model.
S102, obtaining an iris image and a periocular image of the iris training sample.
The iris image is an image of the iris region, and the periocular image is obtained by filling the iris region of the iris training sample with a fill pixel value. Optionally, the iris image and the periocular image may be normalized so that images of different sizes are converted to a fixed size, which simplifies processing in the subsequent training steps.
In this embodiment, the fill pixel value is a preset pixel value or is derived from the pixel values of the periocular region; its purpose is to bias the iris recognition model toward learning periocular features and away from iris-region features when processing the periocular image. The fill pixel value may be computed from the periocular pixel values, for example their mean, maximum, or minimum, or it may be a preset value such as 0; this application does not limit the choice.
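As a rough illustration of this filling step, the following NumPy sketch (the function name and arguments are my own, not from the patent) splits an eye image into an iris image and a periocular image, filling the iris region of the latter with the mean of the periocular pixels or a preset constant:

```python
import numpy as np

def split_iris_periocular(image, iris_mask, fill="mean"):
    """Split an eye image into an iris image and a periocular image.

    image: 2-D grayscale array; iris_mask: boolean array, True inside the iris.
    The periocular image keeps everything outside the iris and fills the iris
    region with a fill pixel value: either the mean of the periocular pixels
    ("mean") or a preset constant such as 0.
    """
    iris_image = np.where(iris_mask, image, 0)      # keep the iris region only
    if fill == "mean":
        fill_value = image[~iris_mask].mean()       # derived from periocular pixels
    else:
        fill_value = fill                           # preset value, e.g. 0
    periocular_image = np.where(iris_mask, fill_value, image)
    return iris_image, periocular_image
```

The segmentation mask itself would come from the segmentation model of step S101; here it is simply passed in as a boolean array.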
And S103, training an iris recognition model by using the iris image and the periocular image.
The iris image and the periocular image are input into the iris recognition model for iris recognition. The iris training samples in this embodiment carry iris labels, so the model's recognition results can be compared with the labels, a loss value can be calculated for those results, and the loss value can be backpropagated to adjust parameters such as the model's weights and biases. Training the iris recognition model in this way improves the accuracy and reliability of iris recognition with the trained model.
In summary, the iris recognition model training method first segments the iris training sample into an iris region and a periocular region; it then obtains an iris image and a periocular image of the sample, where the iris image is an image of the iris region and the periocular image is obtained by filling the iris region with a fill pixel value; finally, it trains an iris recognition model using both images. Training on the iris image and the periocular image together improves the accuracy and reliability of iris recognition with the trained model.
Optionally, the iris recognition model training method may further include:
iris training samples are obtained.
The iris training samples come from multiple subjects, each of whom contributes several images together with an iris label for each image. Each subject is assigned a unique ID, and the images and labels collected for a subject are marked with that ID to facilitate identification and training. To improve training accuracy, in this embodiment at least 10 images and corresponding iris labels are collected per subject. The iris labels can be prepared in advance: during training, the iris recognition model first extracts features from the images (the iris feature, the periocular feature, and the fusion feature), these features are compared with the iris labels to compute the model's loss, and the loss value is backpropagated to train the model and improve its reliability.
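The per-subject organization described above might be sketched as follows; the function and constant names are hypothetical, and only the ID-keyed grouping and the at-least-10-images rule come from the text:

```python
from collections import defaultdict

MIN_IMAGES_PER_SUBJECT = 10  # minimum stated in this embodiment

def build_training_index(samples):
    """Group (subject_id, image, iris_label) tuples by subject ID.

    Each subject's images and labels are stored under its unique ID, and
    every subject must contribute at least MIN_IMAGES_PER_SUBJECT images.
    """
    index = defaultdict(list)
    for subject_id, image, label in samples:
        index[subject_id].append((image, label))
    for subject_id, items in index.items():
        if len(items) < MIN_IMAGES_PER_SUBJECT:
            raise ValueError(
                f"subject {subject_id} has only {len(items)} images")
    return dict(index)
```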
Fig. 2 is a schematic structural diagram of an iris recognition model according to an embodiment of the present invention. Referring to fig. 2, the iris recognition model includes an iris feature extraction module, a periocular feature extraction module, and a feature fusion module.
Optionally, training the iris recognition model using the iris image and the periocular image, comprising:
step 1, obtaining iris characteristics of an iris training sample based on an iris image.
This step can be implemented by the iris feature extraction module, which comprises several convolution components and residual modules. The iris image is first fed through the convolution components to extract features; passing it through several convolution components in succession reduces the number of parameters and applies more nonlinear mappings, which increases the nonlinear fitting capacity of the module and improves feature extraction accuracy. After each convolution component, the output features are first fed through a residual module and then into the next convolution component. The residual modules prevent the gradient dispersion and network degradation that can arise when features pass through many stacked convolution components in a deep network, which further improves the performance of the iris recognition model; after all convolution components and residual modules, the iris feature is obtained. Within each residual module, the features output by the preceding convolution component are fed into two branches, each consisting of a convolution layer and a normalization layer; the outputs of the two branches are summed with the features that entered the residual module, and the merged features then pass through a convolution layer, a normalization layer, and an activation layer to produce the module's output.
And 2, acquiring periocular characteristics of the iris training sample based on the periocular image.
This step can be implemented by the periocular feature extraction module: the periocular image is fed through several groups of convolution components in succession, which improves the module's nonlinear fitting capacity and feature extraction accuracy. As in the iris branch, the output of each convolution component is first fed through a residual module and then into the next group of convolution components to improve the performance of the iris recognition model; after all convolution components and residual modules, the periocular feature is obtained.
And 3, performing weighted fusion on the iris features and the eye periphery features to obtain fusion features.
This step can be implemented by a feature fusion module. During subsequent training of the iris recognition model, the feature fusion module continually adjusts the weights of the iris feature and the periocular feature, with the two weights constrained to sum to 1. Fig. 3 is a schematic structural diagram of the feature fusion module according to the embodiment of the present invention.
Referring to fig. 3, the feature fusion module comprises a convolution layer, a normalization layer, and an activation layer. The iris feature extracted by the iris feature extraction module and the periocular feature extracted by the periocular feature extraction module are first input into the convolution layer for feature extraction; the extracted features are then passed through the normalization layer, mainly to prevent gradient explosion and vanishing gradients during training and to improve the accuracy of the fusion feature; finally, the normalized features are passed through the activation layer, which outputs the fusion feature.
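A minimal sketch of the weighted fusion: two learnable scalars are normalized with a softmax so the iris and periocular weights always sum to 1. The softmax parameterization and the simplified normalization/activation stack are assumptions; the patent only states the sum-to-1 constraint and the conv/normalization/activation structure:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_features(iris_feat, peri_feat, logits):
    """Weighted fusion of iris and periocular features.

    logits: two learnable scalars; softmax guarantees the two weights sum
    to 1.  The weighted sum is then normalized and activated, a simplified
    stand-in for the module's conv/normalization/activation layers.
    """
    w = softmax(logits)                         # w[0] + w[1] == 1
    fused = w[0] * iris_feat + w[1] * peri_feat
    fused = (fused - fused.mean()) / (fused.std() + 1e-5)
    return np.maximum(fused, 0.0), w
```

Because the weights are produced by a softmax over trainable logits, gradient updates during training shift the balance between the two modalities while preserving the sum-to-1 constraint.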
The iris recognition model in this embodiment further includes fully connected layers, into which the iris feature, the periocular feature, and the fusion feature are each input. The fully connected layer for the iris feature integrates the local features output by the convolution components and residual modules of the iris feature extraction module and outputs a first recognition result; the fully connected layer for the periocular feature integrates the local features output by the convolution components and residual modules of the periocular feature extraction module and outputs a second recognition result; and the fully connected layer for the fusion feature integrates the local features output by the convolution, normalization, and activation layers of the feature fusion module and outputs a third recognition result. From these three recognition results, three loss values can be computed respectively (one for the iris feature, one for the periocular feature, and one for the fusion feature), and the iris recognition model is then trained by backpropagating the loss values. In this embodiment, each of the iris feature, the periocular feature, and the fusion feature is given at least two fully connected layers, to address nonlinearity in the network.
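The three-head loss described above can be sketched with one cross-entropy per head; the equal, unweighted sum of the three losses is an assumption, since the patent does not specify how the three loss values are combined:

```python
import numpy as np

def cross_entropy(logits, label):
    """Cross-entropy of a single sample's class logits against its label."""
    z = logits - logits.max()                 # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def total_loss(iris_logits, peri_logits, fused_logits, label):
    """One loss per recognition head (iris, periocular, fused), summed."""
    return (cross_entropy(iris_logits, label)
            + cross_entropy(peri_logits, label)
            + cross_entropy(fused_logits, label))
```

With uniform logits over C classes, each head contributes log(C), so the total is 3·log(C), which is a quick sanity check on the implementation.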
And 4, training an iris recognition model according to the iris characteristics, the eye circumference characteristics and the fusion characteristics.
Optionally, the training of the iris recognition model according to the iris features, the eye circumference features and the fusion features includes:
and a, calculating the loss value of the iris recognition model according to the iris characteristics, the eye periphery characteristics and the fusion characteristics.
And b, training an iris recognition model according to the loss value.
Parameters such as the weights and biases of the iris recognition model can be adjusted by backpropagating the loss value, so that the loss decreases continually until it falls below a set threshold or a set number of training iterations is reached, at which point training is complete. Training the iris recognition model against the loss value improves its accuracy.
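The stopping criterion (loss below a set threshold, or a set number of training iterations) can be sketched generically; `step_fn`, the default threshold, and the step budget are illustrative values, not from the patent:

```python
def train(step_fn, loss_threshold=1e-3, max_steps=1000):
    """Run training steps until the loss falls below the threshold or the
    step budget is exhausted.

    step_fn: callable that performs one backpropagation update and returns
    the current loss value.
    """
    loss = float("inf")
    for step in range(1, max_steps + 1):
        loss = step_fn()
        if loss < loss_threshold:
            break
    return loss, step
```

For example, a `step_fn` doing gradient descent on the toy loss (w − 3)² converges well inside the step budget, showing the threshold-based early stop firing before the iteration limit.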
Optionally, the loss values include: loss values corresponding to iris features, loss values corresponding to periocular features, and loss values corresponding to fusion features.
Optionally, the loss values may be computed with a cross-entropy loss function. The loss value for the iris feature is used to adjust parameters such as the weights and biases of the convolution components and residual modules in the iris feature extraction module until that module's loss falls below a set threshold or a set number of training iterations is reached; the loss value for the periocular feature likewise adjusts the convolution components and residual modules in the periocular feature extraction module until that module's loss falls below the threshold or the iteration limit is reached; and the loss value for the fusion feature adjusts parameters such as the weights and biases of the convolution, normalization, and activation layers in the feature fusion module until that module's loss falls below the threshold or the iteration limit is reached.
An embodiment of the present invention also provides an iris recognition method. Fig. 4 is a flowchart of an iris recognition method according to an embodiment of the present invention. As shown in fig. 4, the iris recognition method includes:
S401, segmenting an image to be recognized into an iris region to be recognized and a periocular region to be recognized.
The image to be recognized in this embodiment can be acquired in real time by an acquisition device, or can be an existing image that is input.
S402, acquiring an iris image to be recognized and a periocular image to be recognized of the image to be recognized.
The iris image to be recognized is an image of the iris region to be recognized, and the periocular image to be recognized is an image obtained by filling the iris region to be recognized of the image to be recognized with the filling pixel value. The filling pixel value is a preset pixel value or is obtained according to the pixel values of the periocular region. For the implementation of "acquiring the iris image to be recognized and the periocular image to be recognized of the image to be recognized", refer to the implementation of "acquiring the iris image and the periocular image of the iris training sample" in the training method embodiment above.
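The construction of the two images described above can be sketched as follows. For simplicity the iris region is a rectangle here (in practice it would come from the segmentation step), and the function name and box format are our own illustrative choices; both options for the filling pixel value (a preset value, or one derived from the periocular region's pixels) are shown.

```python
import numpy as np

def split_iris_and_periocular(image, iris_box, fill_value=None):
    """Crop the iris region and fill it in a copy to form the periocular image."""
    y0, y1, x0, x1 = iris_box
    iris_image = image[y0:y1, x0:x1].copy()      # image of the iris region
    periocular_image = image.copy()
    if fill_value is None:
        # Filling pixel value obtained from the periocular region's pixels:
        # here, the mean of all pixels outside the iris region.
        mask = np.ones_like(image, dtype=bool)
        mask[y0:y1, x0:x1] = False
        fill_value = image[mask].mean()
    periocular_image[y0:y1, x0:x1] = fill_value  # blank out the iris region
    return iris_image, periocular_image

eye = np.arange(36, dtype=float).reshape(6, 6)   # stand-in for an eye image
iris_img, peri_img = split_iris_and_periocular(eye, (2, 4, 2, 4), fill_value=0.0)
```

The iris image carries only iris texture, while the periocular image keeps the surrounding eye region with the iris blanked out, so the two branches see complementary information.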
S403, inputting the iris image to be recognized and the periocular image to be recognized into an iris recognition model, and recognizing the iris in the image to be recognized by using the iris recognition model.
After the iris image to be recognized and the periocular image to be recognized of the image to be recognized are acquired, they are input into the iris recognition model obtained through the training above. The model extracts the iris features to be recognized and the periocular features to be recognized from the two images respectively, fuses them into a fusion feature to be recognized, and outputs the result. Because the iris is recognized through the fusion feature, rather than through the iris features or the periocular features alone, recognition accuracy is improved.
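The weighted fusion of the two feature vectors (described as "weighted fusion" in claim 2) can be sketched as follows. The weights here are our own illustrative choice; the patent does not specify them, and the final normalization is an assumption made so that the fused feature is ready for similarity matching.

```python
import numpy as np

def weighted_fuse(iris_feat, periocular_feat, w_iris=0.7, w_peri=0.3):
    """Weighted fusion of iris and periocular feature vectors (sketch)."""
    fused = w_iris * iris_feat + w_peri * periocular_feat
    return fused / np.linalg.norm(fused)   # unit-normalize for matching

# Toy feature vectors standing in for the two branches' outputs.
iris_feat = np.array([1.0, 0.0, 0.0])
peri_feat = np.array([0.0, 1.0, 0.0])
fused = weighted_fuse(iris_feat, peri_feat)
```

With these weights the iris branch dominates the fused feature, while the periocular branch still contributes, which mirrors the intuition that iris texture is the primary cue and periocular appearance is supplementary.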
Optionally, the image to be recognized is acquired by an iris acquisition device, and recognition can be performed in either of the following two ways:
In the first way, the iris acquisition device is wirelessly connected to a remote server in which the trained iris recognition model is stored. The iris acquisition device wirelessly uploads the acquired image to the remote server; after processing by the iris recognition model, the server obtains the iris features to be recognized, the periocular features to be recognized, and the fusion features to be recognized. The server further includes an iris recognition module, which recognizes the fusion features to be recognized. Optionally, an iris feature library is connected to the server. The library stores a plurality of images and their corresponding features, and each image has a unique ID (for example, an identification number) and information corresponding to that ID. The features to be recognized are compared with the features of the images in the library by similarity analysis, and the image whose features have the highest similarity is found, that is, the ID and the information corresponding to that ID are found. The receiving end of the output information may be the iris acquisition device or a third-party device connected to the server; finally, the server wirelessly sends the recognized information to the corresponding iris acquisition device or third-party device as required.
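The feature-library lookup described above can be sketched as follows. The IDs, vectors, and the use of cosine similarity are our own illustrative assumptions; the patent only states that a similarity comparison is performed and the highest-similarity entry's ID is returned.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_in_library(query_feat, library):
    """Return the library ID whose stored feature is most similar to the query.

    library: dict mapping ID -> stored feature vector.
    """
    best_id, best_sim = None, -1.0
    for person_id, feat in library.items():
        sim = cosine_similarity(query_feat, feat)
        if sim > best_sim:
            best_id, best_sim = person_id, sim
    return best_id, best_sim

# Toy iris feature library: each ID maps to an enrolled fused feature.
library = {
    "ID-001": np.array([0.9, 0.1, 0.0]),
    "ID-002": np.array([0.1, 0.9, 0.2]),
}
query = np.array([0.85, 0.15, 0.05])        # fused feature to be recognized
matched_id, similarity = match_in_library(query, library)
```

The server would then return the information associated with `matched_id` to the acquisition device or a third-party device, as described in the text.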
In the second way, the trained iris recognition model is downloaded and installed in the iris acquisition device as software. The iris acquisition device can directly feed the acquired image to be recognized into the software for recognition and directly output the corresponding recognition information.
This embodiment provides an iris recognition method. First, the image to be recognized is segmented into an iris region to be recognized and a periocular region to be recognized. Then, the iris image to be recognized and the periocular image to be recognized of the image to be recognized are acquired, where the iris image to be recognized is an image of the iris region to be recognized, and the periocular image to be recognized is an image obtained by filling the iris region to be recognized of the image to be recognized with the filling pixel value. Finally, the iris image to be recognized and the periocular image to be recognized are input into the iris recognition model, which recognizes the iris in the image to be recognized. Because the iris in the image to be recognized is recognized by combining the iris features with the periocular features, the accuracy and reliability of iris recognition are effectively improved.
The iris recognition model training method and the iris recognition method provided by the embodiments of the present application are described in detail above with reference to figs. 1 to 4. With reference to figs. 5 and 6, the following describes in detail an iris recognition model training apparatus for performing the iris recognition model training method and an iris recognition apparatus for performing the iris recognition method according to the embodiments of the present application.
FIG. 5 is a schematic structural diagram of an iris recognition model training apparatus according to an embodiment of the present invention; referring to fig. 5, the iris recognition model training apparatus includes:
a first segmentation module 501 is configured to segment the iris training sample into an iris region and a periocular region.
A first obtaining module 502, configured to obtain an iris image and a periocular image of an iris training sample; the iris image is an image of an iris region, and the eye periphery image is an image obtained by filling the iris region of an iris training sample with filling pixel values; the filling pixel value is a preset pixel value or is obtained according to the pixel value of the eye periphery area.
And a training module 503, configured to train an iris recognition model using the iris image and the periocular image.
For the technical effects of the iris recognition model training apparatus, refer to those of the iris recognition model training method; they are not repeated here.
Fig. 6 is a schematic structural diagram of an iris recognition apparatus according to an embodiment of the present invention; referring to fig. 6, the iris recognition apparatus includes:
the second segmentation module 601 is configured to segment the image to be identified into an iris area to be identified and an eye area to be identified.
The second obtaining module 602 is configured to obtain an iris image to be recognized and an eye periphery image to be recognized of the image to be recognized; the iris image to be recognized is an image of the iris area to be recognized, and the periocular image to be recognized is an image obtained by filling the iris area to be recognized of the image to be recognized with the filling pixel values. The filling pixel value is a preset pixel value or is obtained according to the pixel value of the periocular region to be identified.
The recognition module 603 is configured to input the iris image to be recognized and the periocular image to be recognized into the iris recognition model, and to recognize the iris in the image to be recognized by using the iris recognition model.
For the technical effects of the iris recognition apparatus, refer to those of the iris recognition method; they are not repeated here.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 7, the electronic device includes a Central Processing Unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for system operation are also stored. The CPU 701, ROM 702, and RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
The following components are connected to the I/O interface 705: an input portion 706 including a keyboard, a mouse, and the like; an output section 707 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section 708 including a hard disk and the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 710 as necessary, so that a computer program read out therefrom is mounted into the storage section 708 as necessary.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present invention may be implemented by software or by hardware. The described units or modules may also be provided in a processor. For example, a processor may be described as comprising a first segmentation module 501, a first acquisition module 502, and a training module 503, where the names of these modules do not, in some cases, limit the modules themselves; for instance, the first segmentation module 501 may also be described as "a first segmentation module 501 for segmenting iris training samples into iris regions and periocular regions". Similarly, a processor may be described as comprising a second segmentation module 601, a second acquisition module 602, and a recognition module 603, where, for instance, the second segmentation module 601 may also be described as "a second segmentation module 601 for segmenting an image to be recognized into an iris region to be recognized and a periocular region to be recognized".
as another aspect, the present invention also provides a computer-readable storage medium, which may be a computer-readable storage medium included in an iris recognition model training apparatus or an iris recognition apparatus described in the above embodiments; or it may be a computer-readable storage medium that exists separately and is not built into the electronic device. The computer-readable storage medium stores one or more programs for use by one or more processors in performing an iris recognition model training method or an iris recognition method described in the present invention.
The foregoing description is only exemplary of the preferred embodiments of the invention and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention herein disclosed is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features and the technical features (but not limited to) having similar functions disclosed in the present invention are mutually replaced to form the technical solution.

Claims (10)

1. An iris recognition model training method is characterized by comprising the following steps:
dividing the iris training sample into an iris region and a periocular region;
acquiring an iris image and a periocular image of an iris training sample; the iris image is an image of the iris region, and the periocular image is an image obtained by filling the iris region of the iris training sample with filling pixel values;
and training an iris recognition model by using the iris image and the periocular image.
2. The method for training an iris recognition model according to claim 1, wherein the training an iris recognition model using the iris image and the periocular image comprises:
acquiring iris features of the iris training sample based on the iris image;
acquiring periocular features of the iris training sample based on the periocular image;
carrying out weighted fusion on the iris features and the periocular features to obtain fusion features;
and training the iris recognition model according to the iris characteristics, the periocular characteristics and the fusion characteristics.
3. The method for training the iris recognition model according to claim 2, wherein the training the iris recognition model according to the iris features, the periocular features and the fusion features comprises:
calculating a loss value of the iris recognition model according to the iris features, the periocular features and the fusion features;
and training the iris recognition model according to the loss value.
4. The iris recognition model training method as claimed in claim 3, wherein the loss value comprises:
loss values corresponding to the iris features, loss values corresponding to the periocular features, and loss values corresponding to the fusion features.
5. The iris recognition model training method according to claim 1, wherein the filling pixel values are preset pixel values or are obtained according to the pixel values of the periocular region.
6. An iris identification method, comprising:
dividing an image to be identified into an iris area to be identified and a periocular area to be identified;
acquiring an iris image to be recognized and a periocular image to be recognized of the image to be recognized; the iris image to be recognized is an image of the iris area to be recognized, and the periocular image to be recognized is an image obtained by filling the iris area to be recognized of the image to be recognized with filling pixel values;
and inputting the iris image to be recognized and the periocular image to be recognized into an iris recognition model, and recognizing the iris in the image to be recognized by using the iris recognition model.
7. An iris recognition model training device, comprising:
the first segmentation module is used for segmenting the iris training sample into an iris region and a periocular region;
the first acquisition module is used for acquiring an iris image and a periocular image of an iris training sample; the iris image is an image of the iris region, and the periocular image is an image obtained by filling the iris region of the iris training sample with filling pixel values;
and the training module is used for training an iris recognition model by utilizing the iris image and the periocular image.
8. An iris recognition apparatus, comprising:
the second segmentation module is used for segmenting the image to be identified into an iris area to be identified and a periocular area to be identified;
the second acquisition module is used for acquiring an iris image to be recognized and a periocular image to be recognized of the image to be recognized; the iris image to be recognized is an image of the iris area to be recognized, and the periocular image to be recognized is an image obtained by filling the iris area to be recognized of the image to be recognized with filling pixel values;
and the identification module is used for inputting the iris image to be identified and the periocular image to be identified into an iris identification model and identifying the iris in the image to be identified by using the iris identification model.
9. An electronic device comprising a memory and a processor, the memory having a computer program stored thereon, wherein the processor, when executing the computer program, implements the method of any of claims 1-6.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1-6.
CN202210963759.4A 2022-08-11 2022-08-11 Iris recognition model training method, iris recognition method and iris recognition device Pending CN115083006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210963759.4A CN115083006A (en) 2022-08-11 2022-08-11 Iris recognition model training method, iris recognition method and iris recognition device


Publications (1)

Publication Number Publication Date
CN115083006A true CN115083006A (en) 2022-09-20

Family

ID=83244720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210963759.4A Pending CN115083006A (en) 2022-08-11 2022-08-11 Iris recognition model training method, iris recognition method and iris recognition device

Country Status (1)

Country Link
CN (1) CN115083006A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116597500A (en) * 2023-07-14 2023-08-15 腾讯科技(深圳)有限公司 Iris recognition method, iris recognition device, iris recognition equipment and storage medium
CN117079339A (en) * 2023-08-17 2023-11-17 北京万里红科技有限公司 Animal iris recognition method, prediction model training method, electronic equipment and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102844766A (en) * 2011-04-20 2012-12-26 中国科学院自动化研究所 Human eyes images based multi-feature fusion identification method
US20180173951A1 (en) * 2016-12-15 2018-06-21 Fotonation Limited Iris recognition workflow
US10185891B1 (en) * 2016-07-08 2019-01-22 Gopro, Inc. Systems and methods for compact convolutional neural networks
CN111539256A (en) * 2020-03-31 2020-08-14 北京万里红科技股份有限公司 Iris feature extraction method and device and storage medium
CN112001244A (en) * 2020-07-17 2020-11-27 公安部物证鉴定中心 Computer-aided iris comparison method and device
CN112163456A (en) * 2020-08-28 2021-01-01 北京中科虹霸科技有限公司 Identity recognition model training method, identity recognition model testing method, identity recognition model identification method and identity recognition model identification device
CN112233026A (en) * 2020-09-29 2021-01-15 南京理工大学 SAR image denoising method based on multi-scale residual attention network
CN112949454A (en) * 2021-02-26 2021-06-11 西安工业大学 Iris identification method based on small sample learning
CN113643215A (en) * 2021-10-12 2021-11-12 北京万里红科技有限公司 Method for generating image deblurring model and iris image deblurring method
CN113936329A (en) * 2021-10-08 2022-01-14 上海聚虹光电科技有限公司 Iris recognition method, iris recognition device, electronic equipment and computer readable medium
US20220051105A1 (en) * 2020-08-17 2022-02-17 International Business Machines Corporation Training teacher machine learning models using lossless and lossy branches
CN114332522A (en) * 2020-09-29 2022-04-12 阿里巴巴集团控股有限公司 Image identification method and device and construction method of residual error network model
CN114596622A (en) * 2022-03-17 2022-06-07 吉林大学 Iris and periocular antagonism adaptive fusion recognition method based on contrast knowledge drive


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
RAGHAVENDRA R et al., "Combining iris and periocular recognition using light field camera", Proceedings of the 2nd IAPR Asian Conference on Pattern Recognition *
WOODARD D L et al., "On the fusion of periocular and iris biometrics in non-ideal imagery", Proceedings of the 2010 20th International Conference on Pattern Recognition *
ZHANG CHI et al., "Light field imaging technology and its applications in computer vision", Journal of Image and Graphics *
QIN TAO, "Research on periocular recognition methods based on deep neural networks", Enterprise Science and Technology & Development *
HU YI et al., "Research on human identity recognition based on useful features of near-infrared periocular images", Laser Journal *
PEI XIAOFANG et al., "Flower image classification algorithm based on improved residual network", Electronic Devices *


Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
US12020473B2 (en) Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN108509915B (en) Method and device for generating face recognition model
CN108427939B (en) Model generation method and device
CN115083006A (en) Iris recognition model training method, iris recognition method and iris recognition device
CN110245573B (en) Sign-in method and device based on face recognition and terminal equipment
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN110765882B (en) Video tag determination method, device, server and storage medium
WO2019184851A1 (en) Image processing method and apparatus, and training method for neural network model
US20190294863A9 (en) Method and apparatus for face classification
CN112989995B (en) Text detection method and device and electronic equipment
CN113705361A (en) Method and device for detecting model in living body and electronic equipment
CN109815823B (en) Data processing method and related product
CN108399401B (en) Method and device for detecting face image
CN112287945A (en) Screen fragmentation determination method and device, computer equipment and computer readable storage medium
CN110135428B (en) Image segmentation processing method and device
CN113033305B (en) Living body detection method, living body detection device, terminal equipment and storage medium
CN112287734A (en) Screen-fragmentation detection and training method of convolutional neural network for screen-fragmentation detection
CN114627424A (en) Gait recognition method and system based on visual angle transformation
CN113869253A (en) Living body detection method, living body training device, electronic apparatus, and medium
CN112464873A (en) Model training method, face living body recognition method, system, device and medium
CN111753618A (en) Image recognition method and device, computer equipment and computer readable storage medium
CN111104965A (en) Vehicle target identification method and device
CN114782822A (en) Method and device for detecting state of power equipment, electronic equipment and storage medium
CN117011904A (en) Image recognition method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220920