CN111798404B - Iris image quality evaluation method and system based on deep neural network - Google Patents


Info

Publication number
CN111798404B
CN111798404B (application CN201910269994.XA)
Authority
CN
China
Prior art keywords
image
iris
layer
neural network
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910269994.XA
Other languages
Chinese (zh)
Other versions
CN111798404A (en)
Inventor
程治国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianyumian Intelligent Technology Co ltd
Original Assignee
Shanghai Dianyumian Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianyumian Intelligent Technology Co ltd filed Critical Shanghai Dianyumian Intelligent Technology Co ltd
Priority to CN201910269994.XA priority Critical patent/CN111798404B/en
Publication of CN111798404A publication Critical patent/CN111798404A/en
Application granted granted Critical
Publication of CN111798404B publication Critical patent/CN111798404B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V 10/993 - Evaluation of the quality of the acquired pattern
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 - Eye characteristics, e.g. of the iris
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20081 - Training; Learning
    • G06T 2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Ophthalmology & Optometry (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an iris image quality evaluation method based on a deep neural network, comprising the following steps: establishing an iris image sample database; preprocessing the iris images; constructing a multilayer deep convolutional neural network model; training the model and determining an optimal model; and testing and evaluating iris images. The method discriminates blurred from clear iris images in real time. It achieves a high recognition rate on low-quality iris images produced under varying conditions, adapts well to different environments, is highly automated and fast, and can reject iris images that fail the quality standard in real time at the image acquisition end. The invention also discloses an iris image quality evaluation system based on the deep neural network.

Description

Iris image quality evaluation method and system based on deep neural network
Technical Field
The invention relates to the field of image recognition, and in particular to an iris image quality evaluation method and system based on a deep neural network.
Background
With the development of information technology and growing security needs, biometric identification technology has advanced rapidly in recent years. As an important identity feature, the iris offers three major advantages: contactless image acquisition; the uniqueness and stability of iris texture; and anti-spoofing through liveness detection. Iris recognition compensates for the limitations of other biometric features, such as fingerprints and faces, in large-scale identity authentication, and is currently recognized as the most accurate biometric recognition technology.
Iris image quality assessment is one of the key steps in iris recognition. As iris acquisition devices become widely used in the security field, they operate in varied indoor and outdoor scenes, where overly intense illumination, lens defocus blur, large viewing-angle deviation, severe occlusion, and the like easily produce low-quality iris images. A low-quality iris image greatly reduces the matching accuracy of the iris texture. Introducing an iris image quality evaluation stage at the image acquisition end ensures the validity and sharpness of the images while constraining the subject as little as possible.
At present, the quality evaluation algorithms used in iris recognition systems are mainly based on classical image processing, such as fusion evaluation algorithms that judge image sharpness from high-frequency energy in the frequency domain via the two-dimensional FFT, gray-gradient judgement at the boundary between the iris and the scleral region, and texture-energy judgement based on the two-dimensional Gabor transform. For example, the multi-biometric fusion recognition method based on image quality evaluation (application number CN201810512935) evaluates face and iris images using illumination and sharpness as quality factors and obtains quality evaluation scores for them; the multi-scale progressive iris image quality evaluation method (application number CN201010217904) uses regionalization, weighting, and multi-scale methods to perform illumination evaluation, specular-reflection evaluation, localization-rationality evaluation, blur evaluation, occlusion evaluation, and other steps, completing the quality evaluation of the iris image; and an iris image quality detection method (application number CN201810311821) obtains a joint quality score for an iris image sequence by computing overall sharpness, local sharpness, availability, and iris-neighborhood contrast indices. Such multi-measure fusion algorithms rely on data fusion of manually defined features and often struggle to meet the quality evaluation requirements of low-quality images caused by different factors.
In recent years, deep learning theory has been applied ever more widely in computer vision, and fields such as image understanding, object detection, and natural language processing have been revolutionized by it.
Although some research results have been published on applying deep learning to the quality evaluation of ordinary images, ordinary-image and iris-image quality evaluation differ markedly in research object and application field. Ordinary-image quality evaluation studies artificially generated compressed or noisy images, for which image recovery is simple, and is mainly applied to algorithm analysis and comparison and system performance evaluation. Iris image quality evaluation must discriminate naturally occurring low-quality images, for which image recovery is very difficult. At present, few research results apply deep learning to iris image quality evaluation, and no relevant patent documents have yet been published.
Disclosure of Invention
To address the problems in the prior art, the invention provides an iris image quality evaluation method based on a deep neural network that discriminates blurred from clear iris images in real time.
The invention provides an iris image quality evaluation method based on a deep neural network, which mainly comprises the following steps:
(1) establishing a sample database;
(2) preprocessing an iris image;
(3) constructing a multilayer deep convolutional neural network model;
(4) training a deep convolutional neural network model and determining an optimal model;
(5) iris image test and evaluation; namely, the quality of the iris image is tested and evaluated based on the optimal model of deep learning, and the result is output.
The technical scheme adopted by the invention is as follows:
step (1) establishing a sample database;
(1.1): the defocus blur value is calculated for S (preferably S >20000) iris images.
Since high-frequency information in all directions is attenuated in a defocus-blurred image, the degree of high-frequency loss can be detected by constructing a high-frequency filter. First, an isotropic filter of p × p pixels is constructed; isotropy ensures that edges detected in different directions yield consistent gradient magnitudes, so the original state of the high-frequency information in the iris image is not changed. The convolution of this filter with the iris region of the image is then computed. The defocus degree of the iris image, i.e. the defocus blur value Q_blur, is defined by formula (1):

Q_blur = ∬ |I * H|² dx dy    formula (1)

where I is the input iris image, H is the p × p isotropic filter operator, and the double integral over dx dy runs across the x and y directions of the image.
In step (1.1), the iris images come from iris acquisition devices and from public iris image databases available online. The number of iris images S is preferably greater than 20000: the more images used for training, the better the generalization ability of the deep convolutional neural network adopted by the invention and the more effective the resulting model.
In step (1.1), computing the convolution of the filter with the iris region proceeds by sliding a p × p window over the image one pixel at a time, left to right and top to bottom, dividing the image into blocks of size p × p equal to the convolution operator, with overlapping regions between blocks. Each p × p block is multiplied elementwise by the filter operator matrix and summed to give the convolution result at that position.
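The defocus-blur computation of step (1.1) can be sketched as follows. The specific zero-sum kernel and the mean normalization are assumptions for illustration: the patent only requires a p × p isotropic high-pass operator and formula (1).

```python
import numpy as np

def defocus_blur_value(image, p=5):
    """Estimate the defocus blur value Q_blur of an iris-region image.

    A p x p isotropic high-pass kernel (-1 everywhere, p*p - 1 at the
    center, so it sums to zero) responds equally to edges in every
    direction. Q_blur integrates the squared filter response over the
    image; sharper images keep more high-frequency energy and score
    higher. The kernel choice is an assumption, not fixed by the patent.
    """
    h = -np.ones((p, p), dtype=np.float64)
    h[p // 2, p // 2] = p * p - 1          # zero-sum => pure high-pass

    img = image.astype(np.float64)
    H_img, W_img = img.shape
    # Slide the p x p window one pixel at a time (overlapping blocks),
    # multiply elementwise by the operator and sum: a 'valid' convolution.
    out = np.empty((H_img - p + 1, W_img - p + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + p, x:x + p] * h)
    # Q_blur = double integral of |I * H|^2 over x and y (formula (1)),
    # averaged over positions so image size does not dominate the score.
    return float(np.mean(out ** 2))
```

A perfectly flat image has no high-frequency energy and scores zero, while any textured image scores positive.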
In the invention, the algorithm runs in real time, so iris images that do not meet the quality standard can be rejected in real time at the image acquisition end.
(1.2): pre-sorting iris image samples.
Setting threshold values T _ good and T _ bad according to the defocus blur value QblurValue Pre-classification of Iris image samples into a set of clear images
Figure BDA0002018057540000031
And fuzzy image set
Figure BDA0002018057540000035
Two classes of pre-classified image sets:
Figure BDA0002018057540000032
(1.3): for image sets obtained by pre-classification
Figure BDA0002018057540000033
And
Figure BDA0002018057540000034
screening to obtain the final clear sample set D _ good ═ { dg ═1,dg2,…,dgND _ bad ═ db, and the final blurred sample set D _ bad ═ db1,db2,…,dbMAs shown in fig. 2. The screening may be manual or other screening methods. Preferably, the manual screening is to visually judge the clear blur by human eyes to obtain a final clear sample set and a final blur sample set. Preferably, the final sample set is obtained by manual screening followed by a voting mechanism, e.g., byThe clearness and the fuzziness obtained by visual judgment of human eyes are pre-classified through the result obtained by defocusing fuzzy judgment, five people further perform fine classification on the pre-classified result according to visual perception, and a final sample set is obtained through a voting mechanism, so that the accuracy is improved.
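The threshold pre-sorting and the five-reviewer voting described above can be sketched as follows. The threshold directions (a higher Q_blur meaning a sharper image) and all function names are illustrative assumptions:

```python
def preclassify(samples, t_good, t_bad):
    """Split (image_id, q_blur) pairs into pre-classified clear/blurred sets.

    Images with Q_blur >= T_good go to the clear set, those with
    Q_blur <= T_bad to the blurred set; anything in between is left out.
    The inequality directions assume Q_blur measures remaining
    high-frequency energy.
    """
    clear = [s for s, q in samples if q >= t_good]
    blurred = [s for s, q in samples if q <= t_bad]
    return clear, blurred

def majority_vote(labels):
    """Fine classification: each of five reviewers labels a pre-classified
    image 'clear' or 'blur' by visual inspection; the majority label wins."""
    return max(set(labels), key=labels.count)
```

For example, with T_good = 8 and T_bad = 2, a sample scoring 5 stays unassigned, and a 3-to-2 reviewer split resolves to the majority label.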
Step (2): and (5) preprocessing an iris image.
Iris region detection step: a fully convolutional network is used to detect the iris region in each image of the final clear sample set D_good = {dg1, dg2, …, dgN} and the final blurred sample set D_bad = {db1, db2, …, dbM} obtained in step (1.3), and from it the smallest rectangle containing the iris region is determined.
Normalization step: only the smallest rectangular region of the image containing the iris area is retained, and it is normalized to a grayscale image of sz × sz pixels.
In one embodiment, the iris images of the final clear and blurred sample sets obtained in step (1.3) are normalized to 640 × 480 pixels and converted to grayscale images.
Because the acquired image contains the complete eye and part of the face, while iris quality evaluation only needs to consider the iris region, only the minimal rectangle around the iris region is retained. Because the size of the iris region differs greatly across subjects and shooting distances, the images must be normalized to the same size before being fed uniformly into the network.
The unified image size is 640 × 480, the size produced by conventional iris acquisition equipment; any other value may be used, provided that the horizontal diameter of the iris region at its original size is at least 160 pixels.
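A minimal sketch of the preprocessing step above, cropping the smallest iris rectangle and normalizing it to sz × sz. The boolean `iris_mask` input and the nearest-neighbour resampling are assumptions: the patent detects the iris region with a fully convolutional network and does not fix an interpolation method.

```python
import numpy as np

def crop_and_normalize(gray, iris_mask, sz=192):
    """Keep only the smallest rectangle containing the detected iris
    region, then rescale it to an sz x sz grayscale array for the network.

    `iris_mask` is a boolean map from the iris-detection step (any
    segmenter works here). Nearest-neighbour resampling keeps the sketch
    dependency-free; a real pipeline would use bilinear interpolation.
    """
    ys, xs = np.nonzero(iris_mask)
    y0, y1 = ys.min(), ys.max() + 1        # minimal bounding rectangle
    x0, x1 = xs.min(), xs.max() + 1
    crop = gray[y0:y1, x0:x1].astype(np.float64)

    # Nearest-neighbour resize of the crop to sz x sz.
    h, w = crop.shape
    yi = (np.arange(sz) * h // sz).clip(0, h - 1)
    xi = (np.arange(sz) * w // sz).clip(0, w - 1)
    return crop[np.ix_(yi, xi)]
```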
And (3): and constructing a multilayer deep convolutional neural network model.
The network structure of the multilayer deep convolutional neural network model is divided into an input layer, a hidden layer and an output layer.
Wherein the input layer input data is the grayscale image normalized to sz × sz pixel size obtained in step (2).
The hidden layer comprises convolutional layers c1-c6, BN layers BN1-BN6, pooling layers mp1-mp4 and ap1, and a fully connected layer f1.
The activation function used by convolutional layers c1-c6 is the rectified linear unit (ReLU). The convolution kernels of layers c1-c6 are all 3 × 3, with 8, 12, 16, 24, 32, and 48 kernels respectively.
The BN layers BN1-BN6 are batch normalization layers, used to keep the input distribution of each layer of the neural network consistent.
The pooling layers mp1-mp4 and ap1 comprise max pooling layers (mp) and an average pooling layer (ap), applied at different stages of the network architecture.
The fully connected layer f1 has 2 output nodes, corresponding to the two classes of iris image evaluation, the clear image set and the blurred image set. The specific network architecture is shown in FIG. 3.
The output layer outputs the classification result: two values giving the probabilities of the clear and blurred classes respectively.
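The hidden-layer stack above can be traced as a shape walk-through. The stride, padding, and pooling-window values are assumptions, since the text fixes only the kernel sizes and counts:

```python
def trace_shapes(sz=192):
    """Trace the feature-map shape through the hidden-layer stack.

    Assumptions (not specified in the text): the 3x3 convolutions use
    stride 1 with padding 1, so they preserve spatial size; each max
    pooling layer mp1-mp4 (following c1-c4) halves it with a 2x2 window;
    ap1 is a global average pool feeding the 2-node fully connected
    layer f1.
    """
    channels = [8, 12, 16, 24, 32, 48]      # kernel counts of c1-c6
    pool_after = {0, 1, 2, 3}               # mp1-mp4 follow c1-c4
    h = sz
    shapes = []
    for i, ch in enumerate(channels):
        # conv (3x3, pad 1) + BN keep spatial size; pooling halves it
        if i in pool_after:
            h //= 2
        shapes.append((ch, h, h))           # output of the conv/BN/pool block
    shapes.append((channels[-1],))          # ap1: global average pool
    shapes.append((2,))                     # f1: clear / blurred logits
    return shapes
```

With a 192 × 192 input this yields 96, 48, 24, and 12-pixel feature maps after the four max pools, a 48-dimensional pooled vector, and 2 output logits.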
And (4): training a multilayer deep convolutional neural network model and determining an optimal model.
The iris image sample library is divided into a training set, a verification set, and a test set: the training set trains the neural network parameters, the verification set tests and tunes the network during training, and the test set finally checks the effectiveness of the network; the three sets are pairwise disjoint. All three contain final clear and final blurred iris images, and the training set contains equal numbers of each.
The training of the neural network model comprises two stages of network forward propagation and network backward propagation:
network forward propagation stage: the sample image enters through the input layer and then passes through the rolling layer c1- > BN layer BN1- > maximum pooling layer mp1- > rolling layer c2- > BN layer BN2- > maximum pooling layer mp2- > rolling layer c3- > BN layer BN3- > maximum pooling layer mp3- > rolling layer c4- > BN layer BN4- > maximum pooling layer mp4- > rolling layer c5- > BN layer BN5- > maximum pooling layer mp5- > rolling layer c6- > BN layer BN6- > average pooling layer ap1- > full connecting layer f1- > output layer (as shown in FIG. 3).
In the network back-propagation stage, a cross-entropy loss function is computed, and the weights and biases of every layer in the network are adjusted in reverse by stochastic gradient descent (SGD). The multilayer deep convolutional neural network model is trained for more than 90 epochs while the parameters giving the best verification-set accuracy are saved, and the optimal model is thereby determined; the corresponding network parameters are the parameters of the optimal model.
The weights and biases of each layer are computed by the network through the stochastic gradient descent algorithm and are adjusted automatically by machine learning. Whether their optimal setting has been reached can only be decided by manually fixing a minimum loss value or a number of training epochs; the invention decides this by setting a maximum number of training epochs.
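A self-contained sketch of the training regime above: cross-entropy loss, plain SGD updates, and saving the parameters with the best verification accuracy over at least 90 epochs. A softmax classifier stands in for the full CNN, and the learning rate and initialization are assumptions:

```python
import numpy as np

def train_sgd(x_tr, y_tr, x_va, y_va, epochs=90, lr=0.1, seed=0):
    """Train a 2-class softmax classifier with cross-entropy loss and
    per-sample SGD, checkpointing the parameters that achieve the best
    verification accuracy. The model is a stand-in so the loop stays
    dependency-light; the checkpointing mirrors the patent's scheme."""
    rng = np.random.default_rng(seed)
    n_feat, n_cls = x_tr.shape[1], 2
    w = rng.normal(0, 0.01, (n_feat, n_cls))
    b = np.zeros(n_cls)
    best = {"acc": -1.0, "w": w.copy(), "b": b.copy()}
    for _ in range(epochs):
        for i in rng.permutation(len(x_tr)):          # one sample per step
            z = x_tr[i] @ w + b
            p = np.exp(z - z.max()); p /= p.sum()     # softmax
            grad = p.copy(); grad[y_tr[i]] -= 1.0     # d(cross-entropy)/dz
            w -= lr * np.outer(x_tr[i], grad)         # SGD update
            b -= lr * grad
        acc = float(np.mean((x_va @ w + b).argmax(1) == y_va))
        if acc > best["acc"]:                         # keep the best model
            best = {"acc": acc, "w": w.copy(), "b": b.copy()}
    return best
```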
And (5): and (5) testing and evaluating iris images.
The images of the test set from step (4) are input into the trained deep convolutional neural network for test evaluation, and the results are output: the probabilities that the input iris region belongs to the clear and blurred classes. The images are then judged clear or blurred according to these probabilities.
Evaluation of the method shows that it reaches 99% accuracy on training images and 94% accuracy on verification images.
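The two output values described above behave like softmax probabilities over the clear and blurred classes. A hypothetical sketch of turning the output-layer logits into a decision:

```python
import numpy as np

def classify(logits):
    """Turn the two output-layer logits into (clear, blurred)
    probabilities and pick the class. The softmax form and the ordering
    of the two nodes are assumptions."""
    z = np.asarray(logits, dtype=np.float64)
    p = np.exp(z - z.max()); p /= p.sum()          # numerically stable softmax
    label = "clear" if p[0] >= p[1] else "blurred"
    return label, p
```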
The invention also provides an iris image quality evaluation system based on the deep neural network, which comprises the following modules:
a sample database establishing module;
an iris image preprocessing module;
a building module of a multilayer deep convolutional neural network model;
the training module of the deep convolutional neural network model and the determining module of the optimal model;
and an iris image test evaluation module.
In conclusion, the beneficial effects of the invention are as follows:
(1) Existing manual feature selection can hardly cover all image conditions comprehensively, for example images that are sharp but dimly lit, or sharp but with little iris texture, and the optimal weights of the different features are hard to determine. Against these limitations of manual feature selection and weight calculation in traditional multi-feature-fusion iris image quality evaluation methods, machine learning adaptively selects the required optimal features and feature proportions, and is superior to manual judgment.
(2) The invention adapts well to different environments, is highly automated, runs fast, and can reject iris images that do not meet the quality standard in real time at the image acquisition end. The convolutional neural network adopted in this design has a simple structure, a high recognition rate, and a high running speed.
(3) The invention introduces deep learning into the design of the iris image quality evaluation algorithm, lets the machine select image features naturally, effectively avoids the evaluation limitations caused by manually defined image features, and obtains a more robust iris image quality evaluation model.
Drawings
FIG. 1 is a flow chart of the iris image quality evaluation method based on the deep neural network of the present invention.
Fig. 2 is a schematic diagram of sharp and blurred images in the iris database, wherein the first row shows blurred iris images and the second row shows sharp iris images.
Fig. 3 is a deep neural network architecture proposed by the present invention.
FIG. 4 is a flow chart of the system for evaluating the quality of an iris image based on a deep neural network according to the present invention.
Detailed Description
The present invention is described in further detail below with reference to specific examples and the accompanying drawings. Except where specifically stated, the procedures, conditions, and experimental methods used to carry out the invention are common knowledge in the art, and the invention is not particularly limited in these respects.
The iris image quality evaluation method based on the deep neural network in the embodiment mainly comprises the following steps:
(1) establishing a sample database;
(2) preprocessing an iris image;
(3) constructing a multilayer deep convolutional neural network model;
(4) training a deep convolutional neural network model and determining an optimal model;
(5) iris image test and evaluation; namely, the quality of the iris image is tested and evaluated based on the optimal model of deep learning, and the result is output.
The technical scheme adopted by the embodiment is as follows:
step (1) establishing a sample database;
(1.1): the defocus blur value is calculated for 25000 iris images. In the step (1.1), the source of the iris image comprises an iris acquisition device for shooting, and an iris image public database provided by a network; in the step (1.1), the step of calculating the convolution value of the filter and the iris region of the image includes sequentially shifting the image from left to right, from top to bottom, or by one pixel position, dividing the image into blocks with the size of p × p as large as the convolution operator, and forming an overlapping region between the blocks. And multiplying the size p by p module by the filter operator matrix respectively to obtain a convolution result.
In this embodiment the algorithm runs in real time, so iris images that do not meet the quality standard can be rejected in real time at the image acquisition end.
(1.2): pre-sorting iris image samples.
Setting threshold values T _ good and T _ bad according to QblurValue Pre-classification of Iris image samples into a set of clear images
Figure BDA0002018057540000071
And fuzzy image set
Figure BDA0002018057540000072
Two classes of pre-classified image sets:
Figure BDA0002018057540000073
(1.3): as shown in FIG. 2, the image set obtained by pre-classification is referred to
Figure BDA0002018057540000074
And
Figure BDA0002018057540000075
preferably, the final clear sample set D _ good ═ { dg) is obtained by a manual screening method, for example, by means of visual judgment of human eyes1,dg2,…,dgND _ bad ═ db, and the final blurred sample set D _ bad ═ db1,db2,…,dbM}. The clearness and the fuzziness obtained through visual judgment of human eyes are obtained, the result obtained through out-of-focus fuzzy judgment becomes a pre-classification result, five people further perform fine classification on the pre-classification result according to visual perception, and then a voting mechanism is adopted to obtain a final clear sample set and a final fuzzy sample set. Preferably, the criterion for the human eye to visually judge the clearness/blur may be adjusted according to actual needs, and may include: the shape and the distinguishability of the iris texture in the image and whether obvious noise exists in the image or not.
Step (2): and (5) preprocessing an iris image.
Iris region detection step: a fully convolutional network is used to detect the iris region in each image of the final clear sample set D_good = {dg1, dg2, …, dgN} and the final blurred sample set D_bad = {db1, db2, …, dbM} obtained in step (1.3), and from it the smallest rectangle containing the iris region is determined.
Normalization step: only the smallest rectangular region of the image containing the iris area is retained, and it is normalized to a grayscale image of sz × sz pixels.
In one embodiment, the iris images of the final sharp sample set and the final blurred sample set obtained in step (1.3) are normalized to 192 × 192 pixel size, and the images are converted into grayscale images.
Because the acquired image contains the complete eye and part of the face, while iris quality evaluation only needs to consider the iris region, only the minimal rectangle around the iris region is retained. Because the size of the iris region differs greatly across subjects and shooting distances, the images must be normalized to the same size before being fed uniformly into the network.
The unified image size is 192 × 192.
And (3): and constructing a multilayer deep convolutional neural network model.
The network structure of the multilayer deep convolutional neural network model is divided into an input layer, a hidden layer and an output layer.
The input layer data is the grayscale image normalized to 192 × 192 pixels obtained in step (2).
The hidden layer comprises convolutional layers c1-c6, BN layers BN1-BN6, pooling layers mp1-mp4 and ap1, and a fully connected layer f1.
The activation function used by convolutional layers c1-c6 is the rectified linear unit (ReLU). The convolution kernels of layers c1-c6 are all 3 × 3, with 8, 12, 16, 24, 32, and 48 kernels respectively; these layers perform feature extraction.
The BN layers BN1-BN6 are batch normalization layers, used to keep the input distribution of each layer of the neural network consistent.
The pooling layers mp1-mp4 and ap1 comprise max pooling layers (mp) and an average pooling layer (ap), applied at different stages of the network architecture.
The fully connected layer f1 has 2 output nodes, corresponding to the two classes of iris image evaluation, the clear image set and the blurred image set. The specific network architecture is shown in FIG. 3.
The output layer outputs the classification result: two values giving the probabilities of the clear and blurred classes respectively.
And (4): training a multilayer deep convolutional neural network model and determining an optimal model.
The iris image sample library is divided into a training set, a verification set, and a test set: the training set trains the neural network parameters, the verification set tests and tunes the network during training, and the test set finally checks the effectiveness of the network; the three sets are pairwise disjoint. All three contain final clear and final blurred iris images, and the training set contains equal numbers of each.
The training of the neural network model comprises two stages of network forward propagation and network backward propagation:
network forward propagation stage: the sample image enters through the input layer and then passes through the rolling layer c1- > BN layer BN1- > maximum pooling layer mp1- > rolling layer c2- > BN layer BN2- > maximum pooling layer mp2- > rolling layer c3- > BN layer BN3- > maximum pooling layer mp3- > rolling layer c4- > BN layer BN4- > maximum pooling layer mp4- > rolling layer c5- > BN layer BN5- > maximum pooling layer mp5- > rolling layer c6- > BN layer BN6- > average pooling layer ap1- > full connecting layer f1- > output layer (as shown in FIG. 3).
In the network back-propagation stage, a cross-entropy loss function is computed, and the weights and biases of every layer in the network are adjusted in reverse by stochastic gradient descent (SGD). The multilayer deep convolutional neural network model is trained for more than 90 epochs while the parameters giving the best verification-set accuracy are saved, and the optimal model is thereby determined; the corresponding network parameters are the parameters of the optimal model.
The weights and biases of each layer are computed by the network through the stochastic gradient descent algorithm and are adjusted automatically by machine learning. Whether their optimal setting has been reached can only be decided by manually fixing a minimum loss value or a number of training epochs; the invention decides this by setting a maximum number of training epochs.
And (5): and (5) testing and evaluating iris images.
The images of the test set from step (4) are input into the trained deep convolutional neural network for test evaluation, and the results are output: the probabilities that the input iris region belongs to the clear and blurred classes. The images are then judged clear or blurred according to these probabilities.
Evaluation of the method in this embodiment shows 99% accuracy on training images and 94% accuracy on verification images.
In this embodiment:
1. Calculate the defocus blur values of 25000 iris images according to step (1), and divide the images into a clear image set and a blurred image set by threshold.
2. Manually screen the image sets as described above, further separating clear from blurred images to obtain the final data set.
3. Construct the deep convolutional neural network according to step (3); the iris region is normalized to 192 × 192 pixels and input into the network.
4. Train on the training set to obtain the optimal network model.
5. Integrate the network model into the iris acquisition device: once an iris image is captured, it is input into the deep convolutional neural network model for discrimination, and if it is judged to be a blurred image, the user is prompted to shoot again.
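Integration into the acquisition device, as in item 5 above, can be sketched as a capture loop; `capture`, `model`, and the retry limit are hypothetical stand-ins for the device driver and the trained network:

```python
def acquisition_loop(capture, model, max_retries=5):
    """Score every captured frame with the quality model, reject blurred
    frames immediately, and prompt the user to reshoot. `capture` returns
    a frame; `model` returns 'clear' or 'blurred' for a frame. Both are
    illustrative callables, not part of the patent's text."""
    for attempt in range(max_retries):
        frame = capture()
        if model(frame) == "clear":
            return frame, attempt + 1     # accept the frame
        print("Image blurred - please look at the camera again.")
    return None, max_retries              # give up after max_retries
```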
As shown in fig. 4, the system for evaluating quality of an iris image based on a deep neural network in the present embodiment includes:
a sample database establishing module;
an iris image preprocessing module;
a building module of a multilayer deep convolutional neural network model;
the training module of the deep convolutional neural network model and the determining module of the optimal model;
and an iris image test evaluation module.
The protection of the present invention is not limited to the above embodiments. Variations and advantages that would occur to those skilled in the art are encompassed by the invention without departing from the spirit and scope of the inventive concept, and the scope of protection is defined by the appended claims.

Claims (4)

1. An iris image quality evaluation method based on a deep neural network is characterized by comprising the following steps:
(1) establishing an iris image sample database;
(2) preprocessing an iris image;
(3) constructing a multilayer deep convolutional neural network model;
(4) training a multilayer deep convolutional neural network model and determining an optimal model;
(5) iris image test and evaluation;
the step (1) specifically comprises the following steps:
(1.1) calculating the defocus blur value Q_blur for a plurality of iris images: first, an isotropic filter of size p × p pixels is constructed, whose edge gradient magnitudes are consistent when detecting high-frequency information in different directions, so that the original state of the high-frequency information in the iris image is not altered; then the convolution of the filter with the iris region of the image is computed, wherein the filter window slides over the image from left to right and from top to bottom, moving one pixel position at a time, dividing the image into blocks of p × p pixels, the same size as the convolution operator, with overlapping regions between the blocks, and each block is multiplied element-wise with the filter matrix operator to obtain the convolution result;
the defocus blur value Qblur=|I*H|2dxdy;
Wherein, I is an input iris image; h is p × p isotropic filter operator; dxdy represents two-dimensional filtering for the x and y directions;
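The sliding-window convolution of step (1.1) can be sketched in NumPy as below. The patent does not specify the exact isotropic operator H, so a zero-sum center-surround (Laplacian-like) kernel is used here purely as an illustrative choice; the function name and test images are also invented for the example.

```python
import numpy as np

def defocus_blur_value(image, p=5):
    """Q_blur = sum over the image of |I * H|^2, with H an isotropic
    p x p high-pass operator (illustrative kernel; the patent does not fix H)."""
    # Zero-sum center-surround kernel: flat (defocused) regions respond with 0.
    H = -np.ones((p, p), dtype=float)
    H[p // 2, p // 2] = p * p - 1
    h, w = image.shape
    Q = 0.0
    # Slide the p x p window one pixel at a time (overlapping blocks),
    # multiply element-wise with the operator and accumulate |response|^2.
    for y in range(h - p + 1):
        for x in range(w - p + 1):
            block = image[y:y + p, x:x + p]
            Q += float(np.sum(block * H)) ** 2
    return Q

sharp = np.zeros((16, 16)); sharp[:, 8:] = 255.0  # hard edge: high-frequency content
flat = np.full((16, 16), 128.0)                   # uniform: no high frequencies
assert defocus_blur_value(sharp) > defocus_blur_value(flat)
```

A sharp image with strong edges yields a large Q_blur, while a uniform (heavily defocused) region yields zero, matching the role of Q_blur as a sharpness score.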
(1.2) pre-classifying the iris image samples into a clear image set D_good^pre and a blurred image set D_bad^pre: a threshold T_good and a threshold T_bad are set, and according to the defocus blur value Q_blur the iris images are pre-classified into the two pre-classified image sets, the clear image set D_good^pre and the blurred image set D_bad^pre;
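The two-threshold pre-classification of step (1.2) can be sketched as below. The threshold values and image names are illustrative; the patent only states that T_good and T_bad split the samples by their Q_blur values (images falling between the thresholds are left for the later manual screening of step (1.3)).

```python
def pre_classify(blur_values, t_good, t_bad):
    """Split iris images into the pre-classified clear / blurred sets
    by two thresholds on Q_blur (threshold values here are illustrative)."""
    d_good_pre = [name for name, q in blur_values.items() if q >= t_good]
    d_bad_pre = [name for name, q in blur_values.items() if q <= t_bad]
    return d_good_pre, d_bad_pre

qs = {"img1": 950.0, "img2": 120.0, "img3": 500.0}
good, bad = pre_classify(qs, t_good=800.0, t_bad=200.0)
# img3 falls between the thresholds and lands in neither pre-classified set.
```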
(1.3) screening the pre-classified image sets D_good^pre and D_bad^pre to obtain the final clear sample set D_good = {dg_1, dg_2, ..., dg_N} and the final blurred sample set D_bad = {db_1, db_2, ..., db_M};
The step (2) of preprocessing the iris image comprises an iris region detection step and a normalization step;
wherein the iris region detection step is: detecting, using a full convolution network, the iris regions in the images of the final clear sample set D_good = {dg_1, dg_2, ..., dg_N} and the final blurred sample set D_bad = {db_1, db_2, ..., db_M} obtained in step (1.3), thereby determining the smallest rectangle containing each iris region; and the normalization step is: retaining the smallest rectangular area containing the iris region in the image and normalizing it to a gray-scale image of the same pixel size;
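The crop-and-normalize step can be sketched in NumPy as below. The detection rectangle is assumed to come from the full-convolution detector (not reimplemented here), the 192 x 192 target size is taken from the embodiment, and nearest-neighbour resampling is an illustrative choice the patent does not mandate.

```python
import numpy as np

def normalize_iris_region(gray, rect, size=192):
    """Crop the smallest rectangle containing the detected iris region and
    rescale it to size x size with nearest-neighbour sampling.
    rect = (top, left, height, width), assumed supplied by the detector."""
    top, left, h, w = rect
    crop = gray[top:top + h, left:left + w]
    # Map each output coordinate back to a source pixel (nearest neighbour).
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    return crop[np.ix_(ys, xs)]

img = (np.arange(300 * 400, dtype=np.int64).reshape(300, 400) % 256).astype(np.uint8)
out = normalize_iris_region(img, rect=(40, 60, 180, 220))
assert out.shape == (192, 192)
```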
the network structure of the multilayer deep convolutional neural network model in the step (3) is divided into an input layer, a hidden layer and an output layer; the input layer input data is a gray level image which is obtained in the step (2) and is normalized to the same pixel size, and the hidden layer comprises convolution layers c1-c6, BN layers BN1-BN6, pooling layers mp1-mp4, ap1 and a full connection layer f 1;
in the step (4), the training of the multilayer neural network model comprises two stages of network forward propagation and network backward propagation.
2. The evaluation method of claim 1, wherein in the network forward propagation stage: the gray-scale image samples normalized to the same pixel size obtained in step (2) are input through the input layer and pass sequentially through convolution layer c1 -> BN layer BN1 -> maximum pooling layer mp1 -> convolution layer c2 -> BN layer BN2 -> maximum pooling layer mp2 -> convolution layer c3 -> BN layer BN3 -> maximum pooling layer mp3 -> convolution layer c4 -> BN layer BN4 -> maximum pooling layer mp4 -> convolution layer c5 -> BN layer BN5 -> maximum pooling layer mp5 -> convolution layer c6 -> BN layer BN6 -> average pooling layer ap1 -> fully connected layer f1 -> the output layer.
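The spatial dimensions flowing through this layer order can be traced with simple arithmetic. The claim does not give kernel sizes or strides, so the sketch below assumes same-padding convolutions and 2 x 2 stride-2 max pooling, with ap1 as a global average pool; these are common choices, not specifications from the patent.

```python
def trace_spatial_size(size=192):
    """Follow a 192 x 192 input through the hidden-layer order of claim 2,
    assuming same-padding convolutions (spatial size unchanged) and
    2x2 stride-2 pooling (kernel sizes are NOT given in the claim)."""
    trace = {"input": size}
    for i in range(1, 6):          # c1/BN1/mp1 ... c5/BN5/mp5
        size = size // 2           # each max pooling halves the feature map
        trace[f"after mp{i}"] = size
    # c6/BN6 keep 6x6; the global average pool collapses it for f1.
    trace["after ap1"] = 1
    return trace

print(trace_spatial_size())
```

Under these assumptions the feature map shrinks 192 -> 96 -> 48 -> 24 -> 12 -> 6 before the average pool reduces it to a vector for the fully connected layer.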
3. The evaluation method of claim 1, wherein the network back propagation stage computes the loss function using cross entropy and adjusts the weights and biases of each layer in the network backward using a stochastic gradient descent method.
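The cross-entropy loss and one stochastic-gradient-descent update can be sketched in NumPy as below. This is a two-class illustration of the general technique claim 3 names, not the patent's training code; the logits and learning rate are invented for the example.

```python
import numpy as np

def cross_entropy(probs, label):
    """Cross-entropy loss for one sample; probs is the softmax output."""
    return -np.log(probs[label])

def sgd_step(weights, grad, lr=0.01):
    """One stochastic-gradient-descent update on a parameter tensor."""
    return weights - lr * grad

logits = np.array([2.0, 0.5])                    # raw two-class scores
probs = np.exp(logits) / np.exp(logits).sum()    # softmax probabilities
loss = cross_entropy(probs, label=0)
# For softmax + cross-entropy, the gradient w.r.t. the logits is probs - onehot:
grad_logits = probs.copy()
grad_logits[0] -= 1.0
logits = sgd_step(logits, grad_logits, lr=0.1)   # pushes the correct logit upward
```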
4. An iris image quality evaluation system based on a deep neural network, characterized in that quality evaluation is performed on an iris image using the evaluation method of any one of claims 1 to 3, the system comprising:
an iris image sample database establishing module;
an iris image preprocessing module;
a multilayer deep convolutional neural network model building module;
the training module of the multilayer deep convolution neural network model and the determining module of the optimal model;
and an iris image test evaluation module.
CN201910269994.XA 2019-04-04 2019-04-04 Iris image quality evaluation method and system based on deep neural network Active CN111798404B (en)

Publications (2)

Publication Number Publication Date
CN111798404A CN111798404A (en) 2020-10-20
CN111798404B true CN111798404B (en) 2021-06-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant