CN111401145B - Visible light iris recognition method based on deep learning and DS evidence theory - Google Patents


Info

Publication number
CN111401145B
CN111401145B (application CN202010120383.1A)
Authority
CN
China
Prior art keywords
layer
iris
iris image
neural network
image
Prior art date
Legal status
Active
Application number
CN202010120383.1A
Other languages
Chinese (zh)
Other versions
CN111401145A (en)
Inventor
孙水发
陈俊杰
汪方毅
吴义熔
徐义春
刘世焯
Current Assignee
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202010120383.1A priority Critical patent/CN111401145B/en
Publication of CN111401145A publication Critical patent/CN111401145A/en
Application granted granted Critical
Publication of CN111401145B publication Critical patent/CN111401145B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Abstract

The invention provides a visible light iris recognition method based on deep learning and DS (Dempster-Shafer) evidence theory. To address the many kinds of noise in iris images collected under visible light and the poor noise robustness of single-feature recognition, which lead to a low recognition rate and low stability, a multi-feature fusion iris recognition method combining a convolutional neural network with a Support Vector Machine (SVM) and DS evidence theory is proposed. First, the eye region is located in the image; the located iris image is preprocessed; a seven-layer convolutional neural network is built; the iris image is fed into the network for training, and the fourth convolutional layer and the fifth and sixth fully connected layers are extracted as 3 types of features of the iris image; Basic Probability Assignments (BPAs) are constructed from the 3 single-feature SVM classification results and fed into DS evidence theory for fusion; and the final recognition result is given according to the fusion result and a classification decision threshold.

Description

Visible light iris recognition method based on deep learning and DS evidence theory
Technical Field
The invention relates to the field of biometric recognition, and in particular to a visible light iris recognition method based on deep learning and DS evidence theory.
Background
Iris recognition is an important research direction in biometric identification and is widely used in security and access control. Traditional iris recognition acquires iris images under near-infrared (NIR) illumination in constrained environments, and its basic algorithms and applications are relatively mature. In recent years, with the rapid development of mobile smart devices and the corresponding improvement of image sensors, mobile devices can use their own ordinary cameras to acquire iris images in unconstrained visible light (VL) environments, where the iris images contain texture and other appearance information usable for recognition. Compared with near-infrared iris recognition, visible light iris recognition has the advantages of unrestricted acquisition conditions, wide applicability and low cost. However, because of the environment, the acquisition equipment, human factors and so on, iris images acquired under visible light contain many noise factors, such as reflections, low contrast, blur, shadows and occlusion. These factors reduce the accuracy of iris recognition and pose challenges for feature extraction and matching in an iris recognition system.
Iris feature extraction methods can be broadly divided into global feature extraction and local feature extraction. Examples include extracting features from the segmented iris image with deep sparse filtering; down-sampling the normalized iris to different scales and angular directions to form iris feature vectors, using local Radon transform and multi-scale sparse representation; and extracting features of the iris texture with the Local Binary Pattern (LBP). All of these methods extract a single type of iris feature, so the extracted information may be incomplete or redundant, which leads to a low recognition rate and fails to suppress the influence of noise well.
For recognition, various classifiers are commonly used to classify visible light irises, such as the Euclidean distance, the Hamming distance, support vector machines and sparse representation classifiers. These methods all operate on a single feature and cannot perform multi-feature fusion recognition.
Disclosure of Invention
Purpose of the invention: the invention aims to solve the above technical problems of the prior art and provides a visible light iris recognition method based on deep learning and DS (Dempster-Shafer) evidence theory, which comprises the following steps:
step 1, inputting a visible light iris image;
step 2, locating an eye area in the image;
step 3, preprocessing the positioned iris image;
step 4, designing and building a seven-layer convolutional neural network;
step 5, sending the preprocessed iris image into a neural network for training and extracting a fourth convolution layer, a fifth full-connection layer and a sixth full-connection layer as 3 types of characteristics of the iris image;
step 6, constructing a basic Probability assignment BPA (basic Probability assigned) according to the SVM classification result of the 3 types of features, and sending the BPA (basic Probability assigned) to a DS evidence theory for fusion;
step 7, giving the final recognition result according to the fusion result of step 6 and the classification decision threshold.
In step 1, the visible light iris image comprises an eye region and contains human face and hair characteristics.
In step 2, the eye region in the image is located using the vision.
The step 3 comprises the following steps:
Step 3-1, performing iris segmentation on the located iris image. The segmentation uses an integro-differential operator, which varies three parameters, the circle radius r, the abscissa x_0 and the ordinate y_0 of the circle centre, to find the circular contour with the maximum intensity change. The specific formula is:

$$\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \right| \qquad (1)$$

wherein * denotes convolution; I(x, y) is the acquired iris image; G_σ(r) is a Gaussian smoothing function; r is the searched circle radius; ∂/∂r denotes the derivative with respect to the radius r; ∮ I(x, y)/(2πr) ds is the line integral of I(x, y) along the circle of radius r centred at (x_0, y_0). In an iris image, the grey values at the outer iris edge (iris-sclera boundary) and at the inner iris edge (iris-pupil boundary) both show the largest gradient change, i.e. they form the circular contours with the maximum intensity change. Using formula (1), r, x_0 and y_0 are varied continuously to search for the outer and inner iris edges, and the identified inner and outer edges are marked on the iris image;
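The search in formula (1) can be illustrated with a short NumPy sketch. This is a minimal discrete approximation: the candidate centre grid, radius range, angular sampling density and Gaussian width are illustrative assumptions, not values fixed by the invention.

```python
import numpy as np

def circular_mean_intensity(img, x0, y0, r, n_samples=64):
    """Mean grey value of img along the circle of radius r centred at (x0, y0)."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip((x0 + r * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential_search(img, centers, radii, sigma=2.0):
    """Return (x0, y0, r) maximising the Gaussian-smoothed radial derivative of
    the circular line integral, i.e. a discrete form of formula (1)."""
    best, best_score = None, -np.inf
    for (x0, y0) in centers:
        integrals = np.array([circular_mean_intensity(img, x0, y0, r) for r in radii])
        deriv = np.gradient(integrals, radii)          # d/dr of the line integral
        kernel = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
        kernel /= kernel.sum()                         # discrete G_sigma(r)
        smoothed = np.convolve(deriv, kernel, mode="same")
        idx = int(np.argmax(np.abs(smoothed)))
        if abs(smoothed[idx]) > best_score:
            best_score, best = abs(smoothed[idx]), (x0, y0, radii[idx])
    return best
```

In practice the same search is run twice, once over radii plausible for the outer (iris-sclera) boundary and once for the inner (iris-pupil) boundary.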
Step 3-2, normalizing the iris image segmented in step 3-1 using the rubber sheet model: the annular iris region is cut open at one point and stretched into a rectangle by the polar-coordinate mapping of formula (2):

$$I\big(x_r(\theta),\, y_r(\theta)\big) \rightarrow I\big(x(r,\theta),\, y(r,\theta)\big) \qquad (2)$$

wherein r ∈ [0, 1] and θ ∈ [0, 2π]; x_r(θ) is the abscissa and y_r(θ) the ordinate before mapping; x(r, θ) is the abscissa and y(r, θ) the ordinate after mapping.
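A minimal sketch of the rubber-sheet unwrapping of formula (2), assuming the pupil and outer iris boundary circles from step 3-1 are given as (x, y, r) triples; the radial and angular resolutions are illustrative assumptions.

```python
import numpy as np

def rubber_sheet_normalize(img, pupil, iris, radial_res=64, angular_res=256):
    """Unwrap the annular iris region into a radial_res x angular_res rectangle.
    pupil and iris are (x, y, r) circles; each output row is one normalized
    radius r in [0, 1], each column one angle theta in [0, 2*pi)."""
    xp, yp, rp = pupil
    xs, ys, rs = iris
    thetas = np.linspace(0.0, 2.0 * np.pi, angular_res, endpoint=False)
    radii = np.linspace(0.0, 1.0, radial_res)
    out = np.zeros((radial_res, angular_res), dtype=img.dtype)
    for i, r in enumerate(radii):
        for j, t in enumerate(thetas):
            # linear interpolation between the pupil and iris boundaries at angle t
            x = (1 - r) * (xp + rp * np.cos(t)) + r * (xs + rs * np.cos(t))
            y = (1 - r) * (yp + rp * np.sin(t)) + r * (ys + rs * np.sin(t))
            out[i, j] = img[int(round(np.clip(y, 0, img.shape[0] - 1))),
                            int(round(np.clip(x, 0, img.shape[1] - 1)))]
    return out
```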
Step 3-3, preprocessing the iris image normalized in step 3-2: to suppress noise such as eyelids, eyelashes and reflections, the lower half of the normalized iris image is taken, following an existing visible light iris image preprocessing method (Liu Hao et al., visible light iris recognition method based on a convolution-like neural network [J], 2017(11): 2651-2658); the lower-half iris image is then processed with the MSRCR algorithm (multi-scale retinex with colour restoration) to weaken the influence of illumination and improve image clarity; the iris image is converted to greyscale; and histogram equalization is applied to the greyscale iris image to improve contrast.
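The preprocessing chain of step 3-3 can be sketched with OpenCV as follows. Note that the retinex step here is a simplified multi-scale retinex without the full colour-restoration term of MSRCR, so it only approximates the algorithm named in the text; the scale choices are also assumptions.

```python
import cv2
import numpy as np

def simple_msr(img_bgr, sigmas=(15, 80, 250)):
    """Simplified multi-scale retinex: average of log(image) - log(blurred image)
    over several Gaussian scales (no colour-restoration term)."""
    img = img_bgr.astype(np.float32) + 1.0
    retinex = np.zeros_like(img)
    for s in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), s)
        retinex += np.log(img) - np.log(blur)
    retinex /= len(sigmas)
    return cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def preprocess_normalized_iris(norm_iris_bgr):
    """Lower half -> retinex enhancement -> greyscale -> histogram equalization."""
    lower = norm_iris_bgr[norm_iris_bgr.shape[0] // 2:, :, :]   # keep lower half
    enhanced = simple_msr(lower)
    gray = cv2.cvtColor(enhanced, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```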
Step 4 comprises the following steps:
A seven-layer convolutional neural network is designed and built, and the preprocessed iris images are fed into it to train the network model. The specific structure of the seven-layer network is: an input layer that receives the iris image; a first layer with convolution kernels of size 6×3 and a 2×2 max-pooling layer; a second layer with convolution kernels of size 32×5 and a 2×2 max-pooling layer; a third layer with convolution kernels of size 64×5 and a 2×2 max-pooling layer; a fourth layer with convolution kernels of size 256×5×5; a fifth layer of size 1×1024 with ReLU activation; a sixth layer of size 1×1024 with ReLU activation; a seventh layer with Softmax activation; and an output layer that outputs the final classification result.
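A sketch of this seven-layer structure in Keras is given below. The input size, padding, convolutional-layer activations and number of output classes are assumptions not fixed by the text; the kernel counts, kernel sizes, pooling sizes and fully connected widths follow the description above.

```python
from tensorflow.keras import layers, models

def build_seven_layer_cnn(input_shape=(64, 256, 1), num_classes=100):
    """Sketch of the seven-layer network described in step 4 (assumed input
    size and class count; layer names conv4/fc5/fc6 are used later for
    feature extraction)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, 3, padding="same", activation="relu"),              # layer 1
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 5, padding="same", activation="relu"),             # layer 2
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 5, padding="same", activation="relu"),             # layer 3
        layers.MaxPooling2D(2),
        layers.Conv2D(256, 5, padding="same", activation="relu", name="conv4"),  # layer 4
        layers.Flatten(),
        layers.Dense(1024, activation="relu", name="fc5"),                   # layer 5
        layers.Dense(1024, activation="relu", name="fc6"),                   # layer 6
        layers.Dense(num_classes, activation="softmax"),                     # layer 7
    ])
```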
The step 5 comprises the following steps:
The intermediate layers of the convolutional neural network can be visualized and extracted; each layer expresses different features, and the layers complement each other well. In practical use, the convolutional neural network mainly consists of convolutional layers, pooling layers and fully connected layers.
Convolutional layer: extracts feature maps of the image using convolution kernels.
If the l-th layer of the seven-layer convolutional neural network is a convolutional layer, the j-th feature map of the l-th layer, $x_j^l$, is computed as:

$$x_j^l = f\left( \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^{l} + b_j^{l} \right) \qquad (3)$$

wherein * denotes the convolution operation; $x_i^{l-1}$ is the i-th feature map of layer l-1; $k_{ij}^{l}$ is the convolution kernel connecting $x_i^{l-1}$ and $x_j^{l}$; $b_j^{l}$ is the bias of $x_j^{l}$; f(·) denotes the ReLU activation function; and $M_{l-1}$ is the number of feature maps in layer l-1.
Fully connected layer: nonlinearly combines the features extracted by the previous layer, or computes the score of each class.
If the l-th layer of the seven-layer convolutional neural network is a fully connected layer, the j-th feature map of the l-th layer, $x_j^l$, is computed as:

$$x_j^l = f\left( y^{l-1} + b_j^{l} \right) \qquad (4)$$

wherein $y^{l-1}$ denotes the weighted result of all feature maps of layer l-1, and $b_j^{l}$ is the bias of $x_j^{l}$.
and (4) dividing all the sample images preprocessed in the step 3-3 into a test set and a training set. Respectively selecting the same number of sample images of the left eye and the right eye of each person or the same number of sample images of the indoor and outdoor of each person to form a test set, and forming a training set by the residual sample images. Inputting the training set sample image into the convolutional neural network built in the step 4 for training, storing the trained network model, and then extracting iris features by using the following steps:
step 5-1, inputting all sample images in the training set and the test set into a trained seven-layer convolutional neural network, and automatically learning the characteristics of all sample images;
step 5-2, obtaining all output feature maps of the fourth convolutional layer of the convolutional neural network through formula (3) and flattening them into a one-dimensional feature vector Conv4;
step 5-3, obtaining the feature vector fc5 of the fifth (fully connected) layer of the convolutional neural network through formula (4);
step 5-4, obtaining the feature vector fc6 of the sixth (fully connected) layer of the convolutional neural network through formula (4).
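Assuming the Keras sketch above (with layers named conv4, fc5 and fc6, which are names introduced there, not by the patent), steps 5-2 to 5-4 amount to reading out intermediate activations, for example:

```python
import numpy as np
from tensorflow.keras import models

def extract_features(trained_model, images):
    """Return the flattened conv4 map and the fc5/fc6 activations for a batch
    of preprocessed iris images (steps 5-2 to 5-4)."""
    feat_model = models.Model(
        inputs=trained_model.input,
        outputs=[trained_model.get_layer("conv4").output,
                 trained_model.get_layer("fc5").output,
                 trained_model.get_layer("fc6").output])
    conv4, fc5, fc6 = feat_model.predict(images)
    conv4 = conv4.reshape(len(images), -1)   # flatten to one 1-D vector per image
    return conv4, fc5, fc6
```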
The step 6 comprises the following steps:
The 3 feature vectors Conv4, fc5 and fc6 obtained in step 5 are each fed into an SVM classifier for soft-decision output. A sigmoid function is used as the link function to map the SVM output f(x) into the interval [0, 1], so that the SVM classifier produces a probabilistic output of the form:

$$P(y=1 \mid x) = \frac{1}{1 + \exp\big(A f(x) + B\big)} \qquad (5)$$

wherein f(x) is the standard SVM output, P(y = 1|x) is the probability that the classification is correct given the output value x, and A and B are parameters obtained by minimizing the negative log-likelihood F(z) of the training set:

$$\min F(z) = -\sum_{i=1}^{l} \big[ t_i \log(P_i) + (1 - t_i) \log(1 - P_i) \big] \qquad (6)$$

wherein

$$P_i = P(y_i = 1 \mid x_i) \qquad (7)$$

$$t_i = \begin{cases} \dfrac{N_+ + 1}{N_+ + 2}, & y_i = +1 \\[2mm] \dfrac{1}{N_- + 2}, & y_i = -1 \end{cases} \qquad (8)$$

In formula (8), N_+ and N_- are respectively the number of positive samples (samples with y_i = 1) and the number of negative samples (samples with y_i = -1); the SVM classifier assigns the positive and negative class labels by a voting method, outputting 1 (positive sample) when the class condition is met and -1 (negative sample) when it is not; y_i denotes the class of the i-th sample, and l denotes the total number of samples. After the SVM classifier learns the three features Conv4, fc5 and fc6 of the training set obtained in step 5, the optimal parameters A and B are obtained according to formula (6), and the posterior probability ρ_i is constructed according to formula (5).

After the SVM classifier tests the three features Conv4, fc5 and fc6 of the test set obtained in step 5, the recognition accuracy E_i on the test set is obtained, and the basic probability assignment (BPA) function m_i(A) is defined as:

$$m_i(A) = \rho_i E_i \qquad (9)$$

Let m_1, m_2, ..., m_n be the basic probability assignments (BPAs) of the different evidences A_1, A_2, ..., A_n respectively, where m_n is the BPA of the n-th evidence A_n. The fused probability m(A), which reflects the exact degree of belief in evidence A, is obtained according to formula (10):

$$m(A) = \frac{1}{1-k} \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = A} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n), \quad A \neq \varnothing \qquad (10)$$

wherein k is the conflict (uncertainty) factor:

$$k = \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = \varnothing} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n)$$
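A minimal sketch of formulas (9) and (10) for two evidence sources, assuming scikit-learn SVMs trained with probability=True so that Platt-scaled posteriors are available. Leaving the uncommitted mass on the whole frame "U" is an assumption about how the BPAs are completed so that they sum to 1.

```python
import numpy as np
from sklearn.svm import SVC

def bpa_from_svm(clf, x, test_accuracy):
    """BPA of one evidence source (formula (9)): posterior probabilities scaled
    by that source's test-set accuracy; the remainder stays on the frame 'U'."""
    post = clf.predict_proba(x.reshape(1, -1))[0]        # Platt-scaled posteriors
    masses = dict(zip(clf.classes_, post * test_accuracy))
    masses["U"] = 1.0 - sum(masses.values())             # uncommitted belief
    return masses

def dempster_combine(m1, m2):
    """Dempster's rule (formula (10)) for BPAs whose focal elements are the
    singleton classes plus the whole frame 'U'."""
    combined, k = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            if a == "U":
                target = b
            elif b == "U" or a == b:
                target = a
            else:                      # disjoint singletons -> conflicting mass
                k += va * vb
                continue
            combined[target] = combined.get(target, 0.0) + va * vb
    # normalize by 1 - k (assumes the sources are not in total conflict, k < 1)
    return {c: v / (1.0 - k) for c, v in combined.items()}
```

For the three features, the rule is applied twice: first to the Conv4 and fc5 BPAs, then to that result and the fc6 BPA.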
the step 7 comprises the following steps:
Suppose that A_1, A_2 ⊆ U exist, where U is the recognition frame, i.e. the set of all sample image classes, and A_1, A_2 are two different individual samples; m is the basic probability assignment function obtained after fusion, and satisfies:

$$m(A_1) = \max\{\, m(A_i),\ A_i \in U \,\} \qquad (11)$$

$$m(A_2) = \max\{\, m(A_i),\ A_i \in U \text{ and } A_i \neq A_1 \,\} \qquad (12)$$
If

$$m(A_1) - m(A_2) > \varepsilon_1 \quad \text{and} \quad m(A_1) > \varepsilon_2 \qquad (13)$$

then A_1 is the decision result, where ε_1 and ε_2 are preset thresholds.
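Under the reconstruction of formula (13) used above (the top fused mass must lead the runner-up by ε_1 and itself exceed ε_2, which is an assumption about the exact form of the rule), the decision step can be sketched as:

```python
def decide(fused_masses, eps1, eps2):
    """Decision rule of formulas (11)-(13): return the top class if it leads the
    runner-up by eps1 and its own fused mass exceeds eps2, otherwise reject."""
    classes = {c: v for c, v in fused_masses.items() if c != "U"}
    ranked = sorted(classes.items(), key=lambda kv: kv[1], reverse=True)
    (a1, v1), (_, v2) = ranked[0], ranked[1]
    if v1 - v2 > eps1 and v1 > eps2:
        return a1          # accepted identity A_1
    return None            # no confident decision
```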
To address the many kinds of noise in iris images collected under visible light and the poor noise robustness of single-feature recognition, the invention provides a multi-feature fusion iris recognition method based on a convolutional neural network combined with a Support Vector Machine (SVM) and DS (Dempster-Shafer) evidence theory. First, the eye region is located in the image; the located iris image is segmented, normalized and preprocessed; a seven-layer convolutional neural network is then designed and built; the preprocessed iris image is fed into the network for training, and the fourth convolutional layer and the fifth and sixth fully connected layers are extracted as 3 types of features of the iris image; finally, Basic Probability Assignments (BPAs) are constructed from the 3 single-feature SVM classification results and fed into DS evidence theory for fusion, and the final recognition result is given according to the fusion result and the classification decision threshold.
Beneficial effects: the invention uses deep learning and multi-feature fusion to recognize the iris under visible light. The method reduces the influence of iris image noise to a certain extent and further improves the iris recognition rate; it has a certain robustness to iris images acquired under different acquisition conditions; and recognition can be completed from a single picture taken with a mobile phone or camera, which is not only low-cost but also greatly broadens the application field of iris recognition.
Drawings
The above and other advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1a is a schematic diagram of iris image eye localization and normalization;
FIG. 1b is a schematic diagram of iris image eye region detection, localization and normalization;
FIG. 2 is a schematic diagram of normalization;
FIG. 3 is a schematic diagram of iris image pre-processing;
FIG. 4 is a schematic diagram of a constructed convolutional neural network;
FIG. 5 is a schematic view of feature fusion;
fig. 6 is a flow chart of the present invention.
Detailed Description
As shown in fig. 6, the invention discloses a visible light iris recognition method based on deep learning and DS evidence theory; the specific flow is shown in fig. 6. In this embodiment, images from the Warsaw-BioBase-Smartphone-Iris v1.0 (Warsaw-BioBase) and MICHE-I iris databases are selected; these images were taken with an iPhone 5s mobile phone camera. The embodiment is divided into three stages in total, and the experimental data of each stage are shown in table 1. Since images in the MICHE-I library contain parts of the face, hair and other features, the eye region is positioned by using a vision.
TABLE 1
[Table 1: experimental data for each stage (reproduced as an image in the original document)]
Fig. 1a and fig. 1b respectively show the iris image localization, segmentation and normalization processes for the two image libraries, which are not repeated here; the normalized iris images are used directly in the following examples.
As shown in fig. 2, the normalized iris image in the MICHE library contains noise such as eyelids, eyelashes and reflections, so the lower half of the normalized iris image is taken; the MSRCR algorithm (multi-scale retinex with colour restoration) is used to reduce the influence of illumination; the iris image is converted to greyscale; and histogram equalization is used to enhance image contrast.
As shown in fig. 3, this example builds a seven-layer convolutional neural network and extracts the fourth convolutional layer feature Conv4, the fifth fully connected layer feature fc5 and the sixth fully connected layer feature fc6. The feature dimensions of these layers are shown in table 2.
TABLE 2
[Table 2: feature dimensions of the extracted layers (reproduced as an image in the original document)]
As shown in fig. 4 and fig. 5, the three single features Conv4, fc5 and fc6 extracted from the training set are each fed into an SVM for model training. The kernel function of the SVM model is the radial basis function (RBF), and the penalty parameter c and the kernel parameter g (gamma) are determined by a grid-search parameter optimization algorithm as follows: first stage, c = 5.2780, g = 0.0039; second stage, c = 9.1896, g = 0.0039; third stage, c = 16, g = 0.0039. The BPA of each evidence is obtained with formula (9) from the posterior probability output by the SVM and the recognition accuracy on the test set, and the fused probability is obtained according to formula (10).
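A sketch of how such a grid search could be run with scikit-learn; the parameter grid and cross-validation setting are illustrative assumptions, not the exact procedure used in the embodiment.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def fit_rbf_svm(features, labels):
    """Grid search over the RBF-SVM penalty C and kernel width gamma (the grid
    itself is an illustrative assumption); probability=True enables the
    Platt-scaled posteriors used to build the BPAs."""
    grid = {"C": [2 ** p for p in range(-2, 6)],
            "gamma": [2 ** p for p in range(-10, 0)]}
    search = GridSearchCV(SVC(kernel="rbf", probability=True), grid, cv=5)
    search.fit(features, labels)
    return search.best_estimator_
```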
The final recognition result is obtained according to the decision rule in step 7. The decision thresholds in the decision rule are obtained from statistics over multiple experiments: first stage, ε_1 = 0.6, ε_2 = 0.4; second stage, ε_1 = 0.8, ε_2 = 0.9; third stage, ε_1 = 0.9, ε_2 = 0.3. Table 3 shows the single-feature recognition rates and the fused recognition rate. The recognition rate after fusion is basically above 90% and higher than the single-feature recognition rates. Therefore, the method can effectively recognize iris images acquired under visible light.
TABLE 3
[Table 3: single-feature and fused recognition rates (reproduced as an image in the original document)]
The present invention provides a visible light iris recognition method based on deep learning and DS evidence theory. There are many ways to implement this technical solution, and the above is only a preferred embodiment of the invention. It should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principle of the invention, and these improvements and modifications should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be implemented with the prior art.

Claims (1)

1. A visible light iris recognition method based on deep learning and DS evidence theory is characterized by comprising the following steps:
step 1, inputting a visible light iris image;
step 2, locating an eye area in the image;
step 3, preprocessing the positioned iris image;
step 4, designing and building a seven-layer convolutional neural network;
step 5, sending the preprocessed iris image into a neural network for training and extracting a fourth convolution layer, a fifth full-connection layer and a sixth full-connection layer as 3 types of characteristics of the iris image;
step 6, constructing a basic probability assignment BPA according to SVM classification results of the 3 types of characteristics, and sending the BPA to a DS evidence theory for fusion;
step 7, a final recognition result is given according to the fusion result and the classification judgment threshold in the step 6;
in step 1, the visible light iris image comprises an eye region;
in step 2, positioning an eye region in the image by adopting a vision.
The step 3 comprises the following steps:
step 3-1, performing iris segmentation on the located iris image, the segmentation using an integro-differential operator which varies three parameters, the circle radius r, the abscissa x_0 and the ordinate y_0 of the circle centre, to find the circular contour with the maximum intensity change, the specific formula being:

$$\max_{(r,\,x_0,\,y_0)} \left| G_\sigma(r) * \frac{\partial}{\partial r} \oint_{r,\,x_0,\,y_0} \frac{I(x,y)}{2\pi r}\, ds \right| \qquad (1)$$

wherein * denotes convolution; I(x, y) is the acquired iris image; G_σ(r) is a Gaussian smoothing function; r is the searched circle radius; ∂/∂r denotes the derivative with respect to the radius r; ∮ I(x, y)/(2πr) ds is the line integral of I(x, y) along the circle of radius r centred at (x_0, y_0); using formula (1), r, x_0 and y_0 are varied continuously to search for the outer and inner iris edges, and the identified inner and outer iris edges are marked on the iris image;
step 3-2, normalizing the iris image segmented in step 3-1 using a rubber sheet model: the annular iris region is cut open at one point and stretched into a rectangle by the polar-coordinate mapping of formula (2):

$$I\big(x_r(\theta),\, y_r(\theta)\big) \rightarrow I\big(x(r,\theta),\, y(r,\theta)\big) \qquad (2)$$

wherein r ∈ [0, 1] and θ ∈ [0, 2π]; x_r(θ) is the abscissa and y_r(θ) the ordinate before mapping; x(r, θ) is the abscissa and y(r, θ) the ordinate after mapping;
step 3-3, preprocessing the iris image normalized in step 3-2: taking the lower half of the normalized iris image, preprocessing the lower-half iris image with the MSRCR algorithm, converting the iris image to greyscale, and performing histogram equalization on the greyscale iris image;
step 4 comprises the following steps:
a seven-layer convolutional neural network is designed and built, and the preprocessed iris image is fed into the neural network to train the network model, the specific structure of the seven-layer convolutional neural network being: an input layer for inputting the iris image; a first layer with convolution kernels of size 6×3 and a 2×2 max-pooling layer; a second layer with convolution kernels of size 32×5 and a 2×2 max-pooling layer; a third layer with convolution kernels of size 64×5 and a 2×2 max-pooling layer; a fourth layer with convolution kernels of size 256×5×5; a fifth layer of size 1×1024 with ReLU activation; a sixth layer of size 1×1024 with ReLU activation; a seventh layer with Softmax activation; and an output layer for outputting the final classification result;
the step 5 comprises the following steps:
if the l-th layer of the seven-layer convolutional neural network is a convolutional layer, the j-th feature map of the l-th layer, $x_j^l$, is computed as:

$$x_j^l = f\left( \sum_{i=1}^{M_{l-1}} x_i^{l-1} * k_{ij}^{l} + b_j^{l} \right) \qquad (3)$$

wherein * denotes the convolution operation; $x_i^{l-1}$ is the i-th feature map of layer l-1; $k_{ij}^{l}$ is the convolution kernel connecting $x_i^{l-1}$ and $x_j^{l}$; $b_j^{l}$ is the bias of $x_j^{l}$; f(·) denotes the ReLU activation function; $M_{l-1}$ is the number of feature maps in layer l-1;
if the l-th layer of the seven-layer convolutional neural network is a fully connected layer, the j-th feature map of the l-th layer, $x_j^l$, is computed as:

$$x_j^l = f\left( y^{l-1} + b_j^{l} \right) \qquad (4)$$

wherein $y^{l-1}$ denotes the weighted result of all feature maps of layer l-1, and $b_j^{l}$ is the bias of $x_j^{l}$;
dividing all the sample images preprocessed in step 3-3 into a test set and a training set, wherein for each person the same number of left-eye and right-eye sample images, or the same number of indoor and outdoor sample images, are selected to form the test set, and the remaining sample images form the training set; inputting the training-set sample images into the convolutional neural network built in step 4 for training, saving the trained network model, and then extracting the iris feature vectors by the following steps:
step 5-1, inputting all sample images in the training set and the test set into a trained seven-layer convolutional neural network, and automatically learning the characteristics of all sample images;
step 5-2, obtaining all output feature maps of the fourth convolutional layer of the convolutional neural network through formula (3), and flattening them into a one-dimensional feature vector Conv4;
step 5-3, obtaining the feature vector fc5 of the fifth (fully connected) layer of the convolutional neural network through formula (4);
step 5-4, obtaining the feature vector fc6 of the sixth (fully connected) layer of the convolutional neural network through formula (4);
the step 6 comprises the following steps:
sending the 3 feature vectors Conv4, fc5 and fc6 obtained in step 5 each into an SVM classifier for soft-decision output, and using a sigmoid function as the link function to map the SVM output f(x) into the interval [0, 1], so that the SVM classifier produces a probabilistic output of the form:

$$P(y=1 \mid x) = \frac{1}{1 + \exp\big(A f(x) + B\big)} \qquad (5)$$

wherein f(x) is the standard SVM output, P(y = 1|x) is the probability that the classification is correct given the output value x, and A and B are parameters obtained by minimizing the negative log-likelihood F(z) of the training set:

$$\min F(z) = -\sum_{i=1}^{l} \big[ t_i \log(P_i) + (1 - t_i) \log(1 - P_i) \big] \qquad (6)$$

wherein

$$P_i = P(y_i = 1 \mid x_i) \qquad (7)$$

$$t_i = \begin{cases} \dfrac{N_+ + 1}{N_+ + 2}, & y_i = +1 \\[2mm] \dfrac{1}{N_- + 2}, & y_i = -1 \end{cases} \qquad (8)$$
in formula (8), N_+ and N_- are respectively the number of positive samples and the number of negative samples; the SVM classifier assigns the positive and negative class labels by a voting method, outputting 1 when the class condition is met and -1 when it is not; y_i denotes the class of the i-th sample, and l denotes the total number of samples; the kernel function of the SVM model is the radial basis function (RBF), and the penalty parameter c and the kernel parameter g (gamma) are determined by a grid-search parameter optimization algorithm as follows: first stage, c = 5.2780, g = 0.0039; second stage, c = 9.1896, g = 0.0039; third stage, c = 16, g = 0.0039;
after the SVM classifier learns the three features Conv4, fc5 and fc6 of the training set obtained in step 5, the optimal parameters A and B are obtained according to formula (6), and the posterior probability ρ_i is constructed according to formula (5);
after the SVM classifier tests the three features Conv4, fc5 and fc6 of the test set obtained in step 5, the recognition accuracy E_i on the test set is obtained, and the basic probability assignment (BPA) function m_i(A) is defined as:

$$m_i(A) = \rho_i E_i \qquad (9)$$
let m_1, m_2, ..., m_n be the basic probability assignments (BPAs) of the different evidences A_1, A_2, ..., A_n respectively, where m_n is the BPA of the n-th evidence A_n; the fused probability m(A) is obtained according to formula (10):

$$m(A) = \frac{1}{1-k} \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = A} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n), \quad A \neq \varnothing \qquad (10)$$

wherein k is the conflict (uncertainty) factor:

$$k = \sum_{A_1 \cap A_2 \cap \cdots \cap A_n = \varnothing} m_1(A_1)\, m_2(A_2) \cdots m_n(A_n)$$
the step 7 comprises the following steps:
supposing that A_1, A_2 ⊆ U exist, where U is the recognition frame and A_1, A_2 are two different individual samples, and m is the basic probability assignment function obtained after fusion, satisfying:

$$m(A_1) = \max\{\, m(A_i),\ A_i \in U \,\} \qquad (11)$$

$$m(A_2) = \max\{\, m(A_i),\ A_i \in U \text{ and } A_i \neq A_1 \,\} \qquad (12)$$
if

$$m(A_1) - m(A_2) > \varepsilon_1 \quad \text{and} \quad m(A_1) > \varepsilon_2 \qquad (13)$$

then A_1 is the decision result, where ε_1 and ε_2 are preset thresholds.
CN202010120383.1A 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory Active CN111401145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010120383.1A CN111401145B (en) 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010120383.1A CN111401145B (en) 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory

Publications (2)

Publication Number Publication Date
CN111401145A CN111401145A (en) 2020-07-10
CN111401145B true CN111401145B (en) 2022-05-03

Family

ID=71432134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120383.1A Active CN111401145B (en) 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory

Country Status (1)

Country Link
CN (1) CN111401145B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052012B (en) * 2021-03-08 2021-11-19 广东技术师范大学 Eye disease image identification method and system based on improved D-S evidence
CN113706470B (en) * 2021-07-29 2023-12-15 天津中科智能识别产业技术研究院有限公司 Iris image segmentation method and device, electronic equipment and storage medium
CN113706469B (en) * 2021-07-29 2024-04-05 天津中科智能识别产业技术研究院有限公司 Iris automatic segmentation method and system based on multi-model voting mechanism
CN114330454A (en) * 2022-01-05 2022-04-12 东北农业大学 Live pig cough sound identification method based on DS evidence theory fusion characteristics
CN115457351B (en) * 2022-07-22 2023-10-20 中国人民解放军战略支援部队航天工程大学 Multi-source information fusion uncertainty judging method
CN117351579B (en) * 2023-10-18 2024-04-16 北京建筑大学 Iris living body detection method and device based on multi-source information fusion

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN107220598A (en) * 2017-05-12 2017-09-29 中国科学院自动化研究所 Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN107330395A (en) * 2017-06-27 2017-11-07 中国矿业大学 A kind of iris image encryption method based on convolutional neural networks
CN107341447A (en) * 2017-06-13 2017-11-10 华南理工大学 A kind of face verification mechanism based on depth convolutional neural networks and evidence k nearest neighbor
CN108830296A (en) * 2018-05-18 2018-11-16 河海大学 A kind of improved high score Remote Image Classification based on deep learning
CN110059589A (en) * 2019-03-21 2019-07-26 昆山杜克大学 The dividing method of iris region in a kind of iris image based on Mask R-CNN neural network
CN110321844A (en) * 2019-07-04 2019-10-11 北京万里红科技股份有限公司 A kind of quick iris detection method based on convolutional neural networks
CN110688951A (en) * 2019-09-26 2020-01-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109409342A (en) * 2018-12-11 2019-03-01 北京万里红科技股份有限公司 A kind of living iris detection method based on light weight convolutional neural networks


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
An efficient approach towards iris recognition with modular neural network match score fusion; Arif Iqbal Mozumder et al.; 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC); 2017-05-08; 1-6 *
Deep Feature Fusion for Iris and Periocular Biometrics on Mobile Devices; Qi Zhang et al.; IEEE Transactions on Information Forensics and Security; 2018-11-30; Vol. 13, No. 11; 2897-2912 *
Multi-feature fusion weed recognition method based on SVM and D-S evidence theory; Li Xianfeng et al.; Transactions of the Chinese Society for Agricultural Machinery; 2011-11-30; Vol. 42, No. 11; abstract, sections 1-5 *
Research on iris recognition algorithms based on bionic pattern recognition; Hu Jianhui; China Master's Theses Full-text Database, Information Science and Technology; 2006-12-15; Vol. 2006, No. 12; abstract, chapters 2 and 4 *

Also Published As

Publication number Publication date
CN111401145A (en) 2020-07-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant