CN111401145A - Visible light iris recognition method based on deep learning and DS evidence theory - Google Patents

Visible light iris recognition method based on deep learning and DS evidence theory


Publication number
CN111401145A
Authority
CN
China
Prior art keywords
layer
iris
iris image
neural network
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010120383.1A
Other languages
Chinese (zh)
Other versions
CN111401145B (en)
Inventor
孙水发
陈俊杰
汪方毅
吴义熔
徐义春
刘世焯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Three Gorges University CTGU
Original Assignee
China Three Gorges University CTGU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Three Gorges University CTGU filed Critical China Three Gorges University CTGU
Priority to CN202010120383.1A priority Critical patent/CN111401145B/en
Publication of CN111401145A publication Critical patent/CN111401145A/en
Application granted granted Critical
Publication of CN111401145B publication Critical patent/CN111401145B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18: Eye characteristics, e.g. of the iris
    • G06V40/193: Preprocessing; Feature extraction

Abstract

The invention provides a visible light iris recognition method based on deep learning and DS (Dempster-Shafer) evidence theory. Aiming at problems such as the many kinds of noise in iris images collected under visible light and the poor noise robustness of single-feature recognition, which lead to a low recognition rate and low stability, a multi-feature fusion iris recognition method based on a convolutional neural network, a Support Vector Machine (SVM) and DS evidence theory is proposed. Firstly, the eye area is located in the image; the located iris image is preprocessed; a seven-layer convolutional neural network is constructed; the iris image is fed into the network for training, and the fourth convolutional layer, fifth fully-connected layer and sixth fully-connected layer are extracted as 3 types of features of the iris image; a Basic Probability Assignment (BPA) is constructed from the 3 single-feature SVM classification results and fed into DS evidence theory for fusion; and the final recognition result is given according to the fusion result and a classification decision threshold.

Description

Visible light iris recognition method based on deep learning and DS evidence theory
Technical Field
The invention relates to the field of biological feature recognition application, in particular to a visible light iris recognition method based on deep learning and DS evidence theory.
Background
In recent years, with the rapid development of mobile intelligent devices and the improvement of the corresponding image sensors, mobile devices can use their built-in cameras to acquire iris images in an unconstrained visible light (VL) environment; these iris images contain texture and other appearance information that can be used for identification.
Existing feature-extraction approaches include: extracting features from the segmented iris image with deep sparse filtering; down-sampling the normalized iris to different scale lengths and different angular directions to form an iris feature vector using local Radon transform multi-scale sparse representation; and extracting iris texture features with the Local Binary Pattern (LBP), among others.
In recognition, various classifiers are commonly used for classifying the visible light iris, such as Euclidean distance, Hamming distance, support vector machines and sparse representation classifiers. These methods all target single features and cannot perform multi-feature fusion recognition.
Disclosure of Invention
Purpose of the invention: the invention aims to address the technical problems of the prior art, and provides a visible light iris recognition method based on deep learning and DS (Dempster-Shafer) evidence theory, comprising the following steps:
step 1, inputting a visible light iris image;
step 2, locating an eye area in the image;
step 3, preprocessing the positioned iris image;
step 4, designing and building a seven-layer convolutional neural network;
step 5, sending the preprocessed iris image into a neural network for training and extracting a fourth convolution layer, a fifth full-connection layer and a sixth full-connection layer as 3 types of characteristics of the iris image;
step 6, constructing a basic probability assignment BPA (Basic Probability Assignment) from the SVM classification results of the 3 types of features, and feeding the BPA into DS evidence theory for fusion;
step 7, giving the final recognition result according to the fusion result of step 6 and the classification decision threshold.
In step 1, the visible light iris image comprises an eye region and contains human face and hair characteristics.
In step 2, the eye region in the image is located using the vision.
The step 3 comprises the following steps:
Step 3-1, the located iris image is selected for iris segmentation. The segmentation method uses an integro-differential operator, which varies three parameters, the radius r and the circle-centre abscissa x0 and ordinate y0, to identify the circular contour with the maximum intensity change. The specific formula is:

max_(r, x0, y0) | Gσ(r) * ∂/∂r ∮_(r, x0, y0) I(x, y) / (2πr) ds |    (1)

where * denotes convolution; I(x, y) is the acquired iris image; Gσ(r) is a Gaussian smoothing function; r is the searched circle radius; ∂/∂r denotes the derivative with respect to the radius r; and ∮_(r, x0, y0) I(x, y)/(2πr) ds is the curve integral of I(x, y) along the circle of radius r centred at (x0, y0). In an iris image, the gray values at the outer iris edge (iris-sclera boundary) and the inner iris edge (iris-pupil boundary) both have the largest gradient change, i.e. they form the circular contours with the largest intensity change. Using formula (1), r, x0 and y0 are varied continuously to search for the outer and inner iris edges, and the identified inner and outer edges are marked on the iris image;
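For illustration, the radius search of formula (1) at one fixed candidate centre can be sketched in Python with NumPy. This is a simplified, hypothetical sketch, not code from the patent: a full implementation would also search over the centre coordinates x0, y0 and exclude eyelid-occluded arcs.

```python
import numpy as np

def circular_integral(img, x0, y0, r, n=360):
    """Mean intensity of img sampled along the circle of radius r centred at
    (x0, y0), i.e. the normalised curve integral in formula (1)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(x0 + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(y0 + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def daugman_boundary(img, x0, y0, r_min, r_max, sigma=2.0):
    """Fixed-centre integro-differential search: smooth the radial derivative of
    the circular integral with a Gaussian window and return the radius of
    maximum intensity change."""
    radii = np.arange(r_min, r_max)
    means = np.array([circular_integral(img, x0, y0, r) for r in radii])
    deriv = np.abs(np.diff(means))                     # d/dr of the curve integral
    kernel = np.exp(-np.arange(-4, 5) ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                             # normalised Gaussian G_sigma
    smoothed = np.convolve(deriv, kernel, mode="same")
    return radii[int(np.argmax(smoothed)) + 1]         # diff shifts indices by one
```

On a synthetic image containing a dark disk on a bright background, the returned radius lands on the disk boundary, mimicking how the operator locks onto the pupil or limbus edge.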
step 3-2, the iris image segmented in step 3-1 is normalized using the Rubber Sheet Model: the circular iris region is cut at one point and stretched into a rectangle by the polar-coordinate mapping of formula (2):

I(x(r, θ), y(r, θ)) → I(r, θ)    (2)

where r ∈ [0, 1], θ ∈ [0, 2π]; x_r(θ) and y_r(θ) are the abscissa and ordinate before mapping; x(r, θ) and y(r, θ) are the abscissa and ordinate after mapping.
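The rubber-sheet mapping of formula (2) can be illustrated with a small NumPy sketch. This is a hypothetical illustration: the output strip size h × w and the nearest-neighbour sampling are assumptions, not values taken from the patent.

```python
import numpy as np

def rubber_sheet(img, cx, cy, r_pupil, r_iris, h=32, w=256):
    """Rubber-sheet normalisation: sample the annulus between the pupil radius
    and the iris radius on a polar (r, theta) grid, unrolling the circular iris
    region into an h x w rectangular strip."""
    rs = np.linspace(0.0, 1.0, h)                              # r in [0, 1]
    thetas = np.linspace(0.0, 2.0 * np.pi, w, endpoint=False)  # theta in [0, 2*pi)
    out = np.zeros((h, w), dtype=img.dtype)
    for i, r in enumerate(rs):
        radius = r_pupil + r * (r_iris - r_pupil)   # interpolate pupil -> iris edge
        xs = np.clip(np.round(cx + radius * np.cos(thetas)).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + radius * np.sin(thetas)).astype(int), 0, img.shape[0] - 1)
        out[i] = img[ys, xs]                        # nearest-neighbour sampling
    return out
```

Each output row corresponds to one normalised radius, so the top row traces the pupil boundary and the bottom row the iris-sclera boundary.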
Step 3-3, the iris image normalized in step 3-2 is selected for preprocessing. To eliminate the influence of noise such as eyelids, eyelashes and reflections, the lower half of the normalized iris image is taken, following the existing visible light iris image preprocessing method (Liu-Hao, Bai-Yu, Yi-Si-Lu. Visible light iris recognition method based on a convolution-like neural network [J]. 2017(11): 2651-2658). The lower-half iris image is then preprocessed with the MSRCR algorithm (multi-scale retinex with color restoration) to weaken the influence of illumination and improve image clarity; the iris image is converted to grayscale; and histogram equalization is applied to the grayscale iris image to improve image contrast.
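Of the preprocessing chain in step 3-3, the graying and histogram-equalization stages are standard and can be sketched as follows. This is a minimal sketch: the MSRCR stage is omitted because its parameters are not given in the text, and the BT.601 graying weights are an assumption.

```python
import numpy as np

def to_gray(rgb):
    """Luminance grayscale conversion using the common ITU-R BT.601 weights
    (the patent does not specify which graying formula is used)."""
    return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]).astype(np.uint8)

def hist_equalize(gray):
    """Histogram equalisation: remap intensities through the normalised CDF so
    the output spreads over the full 0-255 range, raising contrast.
    Assumes a non-constant uint8 image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first non-empty bin of the CDF
    lut = np.round((cdf - cdf_min) / float(cdf[-1] - cdf_min) * 255.0).astype(np.uint8)
    return lut[gray]                           # apply the lookup table per pixel
```

The lookup-table formulation makes equalization a single vectorised indexing operation, which matters when every normalized iris strip in a large training set is preprocessed.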
Step 4 comprises the following steps:
the method comprises the steps of designing and building a seven-layer convolutional neural network, sending a preprocessed iris image into the neural network for training a network model, wherein the seven-layer neural network has the specific structure that an input layer is used for inputting the iris image, the first layer is provided with a convolutional kernel of 6 × 3 × 3 and a maximum pooling layer of 2 × 02, the second layer is provided with a convolutional kernel of 32 × 15 × 35 and a maximum pooling layer of 2 × 2, the third layer is provided with a convolutional kernel of 64 × 5 × 5 and a maximum pooling layer of 2 × 2, the fourth layer is provided with a convolutional kernel of 256 × 5 × 5, the fifth layer is provided with a size of 1 × 1024 and an activation function of Re × 2U, the sixth layer is provided with a size of 1 × 1024 and an activation function of Re × 4U, the seventh layer is provided with an activation function of Softmax, and the output layer outputs a final classification result.
The step 5 comprises the following steps:
the middle layer of the convolutional neural network can be visualized and extracted, the characteristics of each layer of expression are different, and the layers can play a good complementary role. In practical application, the convolutional neural network mainly comprises a convolutional layer, a pooling layer and a full-link layer.
Convolutional layer: extracts feature maps of the image using convolution kernels.

If layer l of the seven-layer convolutional neural network is a convolutional layer, the j-th feature map x_j^l of layer l is computed as:

x_j^l = f( Σ_{i=1}^{M_{l−1}} x_i^{l−1} * k_{ij}^l + b_j^l )    (3)

where * denotes the convolution operation; x_i^{l−1} is the i-th feature map of layer l−1; k_{ij}^l is the convolution kernel connecting x_i^{l−1} and x_j^l; b_j^l is the bias of x_j^l; f(·) is the linear rectification activation function ReLU; and M_{l−1} is the number of feature maps in layer l−1;
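Formula (3) for a single input map and a single output map reduces to a plain 2-D convolution followed by ReLU, which can be sketched as follows (an illustrative sketch, not the patent's code; real frameworks use vectorised implementations):

```python
import numpy as np

def conv2d_relu(x, k, b=0.0):
    """Formula (3) for one input and one output map: valid 2-D convolution of
    feature map x with kernel k, plus bias b, passed through ReLU."""
    kh, kw = k.shape
    H = x.shape[0] - kh + 1
    W = x.shape[1] - kw + 1
    out = np.empty((H, W))
    kf = k[::-1, ::-1]                      # flip: true convolution, not correlation
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kf) + b
    return np.maximum(out, 0.0)             # ReLU: f(z) = max(z, 0)
```

The full layer of formula (3) sums such convolutions over all M_{l−1} input maps before the bias and ReLU are applied.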
Fully-connected layer: nonlinearly combines the features extracted by the previous layer, or computes the score of each class.

If layer l of the seven-layer convolutional neural network is a fully-connected layer, the j-th output x_j^l of layer l is computed as:

x_j^l = f( w_j^l · y^{l−1} + b_j^l )    (4)

where y^{l−1} is the weighted result of all feature maps of layer l−1; w_j^l is the corresponding weight vector; and b_j^l is the bias of x_j^l;
and (4) dividing all the sample images preprocessed in the step 3-3 into a test set and a training set. Respectively selecting the same number of sample images of the left eye and the right eye of each person or the same number of sample images of the indoor and outdoor of each person to form a test set, and forming a training set by the residual sample images. Inputting the training set sample image into the convolutional neural network built in the step 4 for training, storing the trained network model, and then extracting iris features by using the following steps:
step 5-1, inputting all sample images in the training set and the test set into a trained seven-layer convolutional neural network, and automatically learning the characteristics of all sample images;
step 5-2, obtaining all output characteristic graphs of the convolution neural network fourth layer convolution layer through a formula (3), and forming a one-dimensional characteristic vector Conv4 through the processing of a Flatten function;
step 5-3, obtaining a fifth layer fully-connected layer feature vector fc5 of the convolutional neural network through a formula (4);
and 5-4, obtaining a sixth layer fully-connected layer feature vector fc6 of the convolutional neural network by using the formula (4).
The step 6 comprises the following steps:
The 3 feature vectors Conv4, fc5 and fc6 obtained in step 5 are each fed into an SVM classifier for soft-decision output, and a sigmoid function is used as the link function to map the SVM output f(x) into the interval [0, 1], realizing a probabilistic output of the SVM classifier in the form:

P(y = 1 | x) = 1 / (1 + exp(A·f(x) + B))    (5)

where f(x) is the standard SVM output, P(y = 1 | x) represents the probability that the classification is correct given the output value x, and A and B are parameter values found by minimizing the negative log-likelihood F(z) of the parameter set:

F(z) = − Σ_{i=1}^{l} [ t_i·log(P_i) + (1 − t_i)·log(1 − P_i) ]    (6)

where

P_i = p(y_i = 1 | x_i)    (7)

t_i = (N_+ + 1) / (N_+ + 2) if y_i = 1;  t_i = 1 / (N_− + 2) if y_i = −1    (8)

In formula (8), N_+ and N_− are the number of positive samples (samples with y_i = 1) and the number of negative samples (samples with y_i = −1), respectively; the class labels of positive and negative samples are output by the SVM classifier using a voting method, with output 1 (positive sample) when the class condition is met and output −1 (negative sample) otherwise; y_i denotes the class of the i-th sample, and l denotes the total number of samples. After the SVM classifier learns the three feature types Conv4, fc5 and fc6 of the training set obtained in step 5, the optimal parameters A and B are obtained according to formula (6), and the posterior probability ρ_i is constructed according to formula (5).
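Formulas (5)-(8) describe Platt's sigmoid calibration of SVM scores. A minimal sketch follows; it uses plain gradient descent instead of the Newton-style solver usually used for this fit, and the learning rate and step count are arbitrary assumptions.

```python
import math

def platt_targets(ys):
    """Regularised targets t_i of formula (8), built from the counts N+ and N-."""
    npos = sum(1 for y in ys if y == 1)
    nneg = len(ys) - npos
    tpos = (npos + 1.0) / (npos + 2.0)
    tneg = 1.0 / (nneg + 2.0)
    return [tpos if y == 1 else tneg for y in ys]

def sigmoid_prob(f, a, b):
    """Formula (5): P(y = 1 | x) = 1 / (1 + exp(A f(x) + B))."""
    return 1.0 / (1.0 + math.exp(a * f + b))

def fit_platt(fs, ys, lr=0.1, steps=2000):
    """Minimise the negative log-likelihood of formula (6) over A and B by
    gradient descent; with z = -(A f + B), dNLL/dz = p - t."""
    ts = platt_targets(ys)
    a, b = 0.0, 0.0
    for _ in range(steps):
        ga = gb = 0.0
        for f, t in zip(fs, ts):
            d = sigmoid_prob(f, a, b) - t
            ga -= d * f          # chain rule: dz/dA = -f
            gb -= d              # chain rule: dz/dB = -1
        a -= lr * ga
        b -= lr * gb
    return a, b
```

On scores that separate the classes, the fitted A comes out negative so that larger SVM scores map to probabilities closer to 1.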
After the SVM classifier tests the three feature types Conv4, fc5 and fc6 of the test set obtained in step 5, the test-set recognition accuracy E_i is obtained, and the basic probability assignment BPA function m_i(A) is defined as:

m_i(A) = ρ_i·E_i    (9)

Let m_1, m_2, …, m_n be the basic probability assignments BPA of the different evidences A_1, A_2, …, A_n respectively, with m_n the BPA of the n-th evidence A_n. The fused probability m(A), which reflects the exact degree of belief in evidence A, is obtained according to formula (10):

m(A) = (1 / (1 − k)) · Σ_{A_1 ∩ A_2 ∩ … ∩ A_n = A} Π_{i=1}^{n} m_i(A_i)    (10)

where k is the conflict (uncertainty) factor:

k = Σ_{A_1 ∩ A_2 ∩ … ∩ A_n = ∅} Π_{i=1}^{n} m_i(A_i)
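When each BPA assigns mass to singleton classes only, as the m_i(A) = ρ_i·E_i construction here effectively does, Dempster's rule of formula (10) can be sketched directly. This is an illustrative sketch; it assumes each evidence's masses have been normalised to sum to 1.

```python
from itertools import product

def ds_combine(bpas):
    """Dempster's rule of combination (formula (10)) for BPAs over singleton
    hypotheses: agreeing mass products reinforce a class, disagreeing products
    accumulate into the conflict factor k, and the result is renormalised."""
    frame = set().union(*bpas)               # union of all hypothesis keys
    fused = {h: 0.0 for h in frame}
    k = 0.0
    for combo in product(*[list(b.items()) for b in bpas]):
        hyps = {h for h, _ in combo}
        mass = 1.0
        for _, m in combo:
            mass *= m
        if len(hyps) == 1:                   # all evidences name the same class
            fused[next(iter(hyps))] += mass
        else:                                # empty intersection: conflicting mass
            k += mass
    return {h: m / (1.0 - k) for h, m in fused.items()}, k
```

Combining {'A': 0.9, 'B': 0.1} with {'A': 0.8, 'B': 0.2} gives k = 0.26 and m(A) = 0.72/0.74 ≈ 0.973, showing how agreement between evidences sharpens the fused belief beyond either single classifier.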
the step 7 comprises the following steps:
setting up
Figure BDA0002392783740000053
U isIdentification frame, i.e. set of classes of all sample images, A1,A2As two different individual samples; and m is a basic probability assignment function obtained after fusion, and satisfies the following conditions:
m(A1)=max{m(Ai),Ai∈U} (11)
m(A2)=max{m(Ai),Ai∈ U and Ai≠A1} (12)
If so:
Figure BDA0002392783740000054
then A is1Is the decision result, wherein1And2is a preset threshold.
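Formula (13) appears only as an image in the source; one plausible reading consistent with the two thresholds is "the top mass beats the runner-up by ε1 and the uncertain mass stays below ε2". A hypothetical sketch under that assumed reading:

```python
def decide(m, eps1, eps2, unknown_key="U"):
    """Threshold decision rule (an assumed reading of formula (13), which is an
    image in the source): return the top hypothesis A1 when m(A1) - m(A2) > eps1
    and the mass left on the whole frame U (the uncertain mass) is below eps2;
    otherwise refuse to decide."""
    ranked = sorted(((v, h) for h, v in m.items() if h != unknown_key), reverse=True)
    (m1, a1), (m2, _a2) = ranked[0], ranked[1]
    if m1 - m2 > eps1 and m.get(unknown_key, 0.0) < eps2:
        return a1
    return None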
Aiming at problems such as the many kinds of noise in iris images collected under visible light and the poor noise robustness of single-feature recognition, the invention provides a multi-feature fusion iris recognition method based on a convolutional neural network combined with a Support Vector Machine (SVM) and DS (Dempster-Shafer) evidence theory. First, the eye region is located in the image; the located iris image is segmented, normalized and preprocessed; then a seven-layer convolutional neural network is designed and built; the preprocessed iris image is fed into the network for training, and the fourth convolutional layer, fifth fully-connected layer and sixth fully-connected layer are extracted as 3 types of features of the iris image; finally, a Basic Probability Assignment (BPA) is constructed from the 3 single-feature SVM classification results and fed into DS evidence theory for fusion; and the final recognition result is given according to the fusion result and the classification decision threshold.
Beneficial effects: the invention realizes iris recognition under visible light using deep learning and multi-feature fusion. The method reduces the influence of iris image noise to a certain extent and further improves the iris recognition rate; it is robust to iris images acquired under different acquisition conditions; and recognition can be completed from a single picture taken with a mobile phone or camera, which is not only low-cost but also greatly expands the application field of iris recognition.
Drawings
The above and other advantages of the present invention will become more apparent from the following detailed description of the invention when taken in conjunction with the accompanying drawings.
FIG. 1a is a schematic view of an iris image eye location, normalization;
FIG. 1b is a schematic diagram of iris image eye region detection, localization and normalization;
FIG. 2 is a schematic diagram of normalization;
FIG. 3 is a schematic diagram of iris image pre-processing;
FIG. 4 is a schematic diagram of a constructed convolutional neural network;
FIG. 5 is a schematic view of feature fusion;
fig. 6 is a flow chart of the present invention.
Detailed Description
As shown in fig. 6, the invention discloses a visible light iris recognition method based on deep learning and DS evidence theory. In this embodiment, images from the Warsaw-BioBase-Smartphone-Iris v1.0 (Warsaw-BioBase) and MICHE-I iris databases are selected; these images were taken with an iPhone 5s camera. The embodiment is divided into three stages in total, and the experimental data for each stage are shown in Table 1. Since images in the MICHE-I library contain parts of the face, hair and other features, the eye region is located first using the vision. detector.
TABLE 1
[Table 1: experimental data for the three stages; reproduced as an image in the original]
Fig. 1a and fig. 1b show the iris image positioning, segmentation and normalization processes for the two image libraries, respectively, which are not repeated here; the normalized iris images are used directly in the following examples.
As shown in fig. 2, the normalized iris images in the MICHE library contain noise such as eyelids, eyelashes and reflections, so the lower half of the normalized iris image is taken; the MSRCR algorithm (multi-scale retinex with color restoration) is used to reduce the influence of illumination; the iris image is converted to grayscale; and histogram equalization is used to enhance image contrast.
As shown in fig. 3, this example constructs a seven-layer convolutional neural network and extracts the fourth convolutional layer feature Conv4, the fifth fully-connected layer feature fc5 and the sixth fully-connected layer feature fc6, respectively. The feature dimensions of these layers are shown in Table 2.
TABLE 2
[Table 2: feature dimensions of the extracted layers; reproduced as an image in the original]
As shown in fig. 4 and fig. 5, the three extracted single features Conv4, fc5 and fc6 of the training set are each fed into an SVM for model training. The kernel function of the SVM model is the radial basis function (RBF), and the penalty parameter c and kernel parameter g are determined with a grid-search parameter optimization algorithm as: first stage, c = 5.2780, g = 0.0039; second stage, c = 9.1896, g = 0.0039; third stage, c = 16, g = 0.0039. The BPA of each evidence is obtained with formula (9) from the posterior probability output by the SVM and the test-set recognition accuracy, and the fused probability is obtained with formula (10).
The final recognition result is obtained according to the decision rule in step 7. The decision thresholds in the rule are obtained from statistics over multiple experiments: first stage, ε1 = 0.6, ε2 = 0.4; second stage, ε1 = 0.8, ε2 = 0.9; third stage, ε1 = 0.9, ε2 = 0.3. Table 3 shows the single-feature recognition rates and the fused recognition rate. The recognition rate after fusion is basically above 90% and higher than the single-feature recognition rates, so the method can effectively recognize iris images acquired under visible light.
TABLE 3
[Table 3: single-feature and fused recognition rates; reproduced as images in the original]
The present invention provides a visible light iris recognition method based on deep learning and DS evidence theory. There are many methods and ways to implement this technical solution, and the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and these should also be regarded as within the protection scope of the invention. All components not specified in this embodiment can be realized with the prior art.

Claims (8)

1. A visible light iris recognition method based on deep learning and DS evidence theory is characterized by comprising the following steps:
step 1, inputting a visible light iris image;
step 2, locating an eye area in the image;
step 3, preprocessing the positioned iris image;
step 4, designing and building a seven-layer convolutional neural network;
step 5, sending the preprocessed iris image into a neural network for training and extracting a fourth convolution layer, a fifth full-connection layer and a sixth full-connection layer as 3 types of characteristics of the iris image;
step 6, constructing a basic probability assignment BPA according to SVM classification results of the 3 types of characteristics, and sending the BPA to a DS evidence theory for fusion;
step 7, giving the final recognition result according to the fusion result of step 6 and the classification decision threshold.
2. The method of claim 1, wherein in step 1, the visible light iris image comprises an eye region.
3. The method of claim 2, wherein in step 2, a vision. detector is used to locate the eye region in the image.
4. The method of claim 3, wherein step 3 comprises the steps of:
step 3-1, the located iris image is selected for iris segmentation, the segmentation method using an integro-differential operator which varies three parameters, the radius r and the circle-centre abscissa x0 and ordinate y0, to identify the circular contour with the maximum intensity change, the specific formula being:

max_(r, x0, y0) | Gσ(r) * ∂/∂r ∮_(r, x0, y0) I(x, y) / (2πr) ds |    (1)

where * denotes convolution; I(x, y) is the acquired iris image; Gσ(r) is a Gaussian smoothing function; r is the searched circle radius; ∂/∂r denotes the derivative with respect to the radius r; and ∮_(r, x0, y0) I(x, y)/(2πr) ds is the curve integral of I(x, y) along the circle of radius r centred at (x0, y0); using formula (1), r, x0 and y0 are varied continuously to search for the outer and inner iris edges, and the identified inner and outer edges are marked on the iris image;
step 3-2, the iris image segmented in step 3-1 is normalized using the rubber sheet model: the circular iris region is cut at one point and stretched into a rectangle by the polar-coordinate mapping of formula (2):

I(x(r, θ), y(r, θ)) → I(r, θ)    (2)

where r ∈ [0, 1], θ ∈ [0, 2π]; x_r(θ) and y_r(θ) are the abscissa and ordinate before mapping; x(r, θ) and y(r, θ) are the abscissa and ordinate after mapping;
step 3-3, the iris image normalized in step 3-2 is selected for preprocessing: the lower half of the normalized iris image is taken, the lower-half iris image is preprocessed with the MSRCR algorithm and converted to grayscale, and histogram equalization is applied to the grayscale iris image.
5. The method of claim 4, wherein step 4 comprises:
the method comprises the steps of designing and building a seven-layer convolutional neural network, sending a preprocessed iris image into the neural network for training a network model, wherein the seven-layer neural network has the specific structure that an input layer is used for inputting the iris image, the first layer is provided with a convolutional kernel of 6 × 3 × 3 and a maximum pooling layer of 2 × 02, the second layer is provided with a convolutional kernel of 32 × 15 × 35 and a maximum pooling layer of 2 × 2, the third layer is provided with a convolutional kernel of 64 × 5 × 5 and a maximum pooling layer of 2 × 2, the fourth layer is provided with a convolutional kernel of 256 × 5 × 5, the fifth layer is provided with a size of 1 × 1024 and an activation function of Re × 2U, the sixth layer is provided with a size of 1 × 1024 and an activation function of Re × 4U, the seventh layer is provided with an activation function of Softmax, and the output layer outputs a final classification result.
6. The method of claim 5, wherein step 5 comprises:
if layer l of the seven-layer convolutional neural network is a convolutional layer, the j-th feature map x_j^l of layer l is computed as:

x_j^l = f( Σ_{i=1}^{M_{l−1}} x_i^{l−1} * k_{ij}^l + b_j^l )    (3)

where * denotes the convolution operation; x_i^{l−1} is the i-th feature map of layer l−1; k_{ij}^l is the convolution kernel connecting x_i^{l−1} and x_j^l; b_j^l is the bias of x_j^l; f(·) is the linear rectification activation function ReLU; and M_{l−1} is the number of feature maps in layer l−1;
if layer l of the seven-layer convolutional neural network is a fully-connected layer, the j-th output x_j^l of layer l is computed as:

x_j^l = f( w_j^l · y^{l−1} + b_j^l )    (4)

where y^{l−1} is the weighted result of all feature maps of layer l−1; w_j^l is the corresponding weight vector; and b_j^l is the bias of x_j^l;
dividing all the sample images preprocessed in step 3-3 into a test set and a training set: the same number of left-eye and right-eye sample images of each person, or the same number of indoor and outdoor sample images of each person, are selected to form the test set, and the remaining sample images form the training set; the training-set sample images are input into the convolutional neural network built in step 4 for training, the trained network model is saved, and iris feature vectors are then extracted using the following steps:
step 5-1, inputting all sample images in the training set and the test set into a trained seven-layer convolutional neural network, and automatically learning the characteristics of all sample images;
step 5-2, obtaining all output characteristic graphs of the convolution neural network fourth layer convolution layer through a formula (3), and forming a one-dimensional characteristic vector Conv4 through the processing of a Flatten function;
step 5-3, obtaining a fifth layer fully-connected layer feature vector fc5 of the convolutional neural network through a formula (4);
and 5-4, obtaining a sixth layer fully-connected layer feature vector fc6 of the convolutional neural network by using the formula (4).
7. The method of claim 6, wherein step 6 comprises:
the 3 feature vectors Conv4, fc5 and fc6 obtained in step 5 are each fed into an SVM classifier for soft-decision output, and a sigmoid function is used as the link function to map the SVM output f(x) into the interval [0, 1], realizing a probabilistic output of the SVM classifier in the form:

P(y = 1 | x) = 1 / (1 + exp(A·f(x) + B))    (5)

where f(x) is the standard SVM output, P(y = 1 | x) represents the probability that the classification is correct given the output value x, and A and B are parameter values obtained by minimizing the negative log-likelihood F(z) of the parameter set:

F(z) = − Σ_{i=1}^{l} [ t_i·log(P_i) + (1 − t_i)·log(1 − P_i) ]    (6)

where

P_i = p(y_i = 1 | x_i)    (7)

t_i = (N_+ + 1) / (N_+ + 2) if y_i = 1;  t_i = 1 / (N_− + 2) if y_i = −1    (8)

in formula (8), N_+ and N_− are the numbers of positive and negative samples, respectively; the class labels of positive and negative samples are output by the SVM classifier using a voting method, with output 1 when the class condition is met and output −1 otherwise; y_i denotes the class of the i-th sample, and l denotes the total number of samples; after the SVM classifier learns the three feature types Conv4, fc5 and fc6 of the training set obtained in step 5, the optimal parameters A and B are obtained according to formula (6), and the posterior probability ρ_i is constructed according to formula (5);
After the SVM classifier tests the three types of features Conv4, fc5 and fc6 of the test set obtained in the step 5, the identification accuracy E of the test set is obtainediDefining a basic probability assignment BPA function mi(A) Comprises the following steps:
mi(A)=ρiEi(9)
let m1,m2,…,mnRespectively different evidences A1,A2,…,AnBase probability assignment of BPA, mnRepresents the nth evidence AnThe basic probability of (2) is assigned to BPA, and the fused probability m (A) is obtained according to the formula (10);
m(A) = 1/(1 − k) · Σ_{A_1 ∩ A_2 ∩ … ∩ A_n = A} m_1(A_1)·m_2(A_2)·…·m_n(A_n) (10)
where k is the conflict factor, measuring the degree of contradiction among the evidences:
k = Σ_{A_1 ∩ A_2 ∩ … ∩ A_n = ∅} m_1(A_1)·m_2(A_2)·…·m_n(A_n)
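Equation (10) is Dempster's combination rule. A minimal sketch follows, under the assumption that each BPA is represented as a dict mapping a frozenset of hypotheses to its mass (the representation and function names are illustrative, not from the patent); since the rule is commutative and associative, n BPAs can be fused pairwise:

```python
from functools import reduce

def dempster_combine(m1, m2):
    """Combine two basic probability assignments over the same frame of
    discernment with Dempster's rule, equation (10). Each BPA is a dict
    mapping frozenset(hypotheses) -> mass."""
    combined = {}
    k = 0.0  # conflict factor: mass falling on the empty intersection
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                k += v1 * v2
    if k >= 1.0:
        raise ValueError("total conflict: evidences cannot be combined")
    # Normalize by 1 - k as in equation (10)
    return {s: v / (1.0 - k) for s, v in combined.items()}

def combine_all(bpas):
    """Fuse the BPAs m_1, ..., m_n of the n feature channels."""
    return reduce(dempster_combine, bpas)
```

In this method the three evidences would be the BPAs built from the Conv4, fc5 and fc6 channels via equation (9).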
8. The method of claim 7, wherein step 7 comprises:
letting A_1, A_2 ⊆ U, where U is the frame of discernment and A_1, A_2 are two different individual samples, and letting m be the basic probability assignment function obtained after fusion, satisfying:
m(A_1) = max{ m(A_i), A_i ⊆ U } (11)
m(A_2) = max{ m(A_i), A_i ⊆ U and A_i ≠ A_1 } (12)
if the fused assignment satisfies:
m(A_1) − m(A_2) > ε_1
m(U) < ε_2
m(A_1) > m(U) (13)
then A_1 is the decision result, where ε_1 and ε_2 are preset thresholds.
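The decision step of equations (11)–(13) can be sketched as follows, assuming the fused BPA m is a dict mapping frozensets of hypotheses to masses; the function name and the threshold values in the test are illustrative, since the patent leaves ε_1 and ε_2 as preset parameters:

```python
def ds_decision(m, frame, eps1, eps2):
    """Apply the threshold decision of equations (11)-(13): accept the
    top hypothesis A_1 only if it beats the runner-up A_2 by more than
    eps1, the uncertainty m(U) stays below eps2, and m(A_1) > m(U).
    m: fused BPA (frozenset -> mass); frame: the frame of discernment U."""
    m_u = m.get(frame, 0.0)  # mass on the whole frame = uncertainty
    candidates = sorted(
        ((s, v) for s, v in m.items() if s != frame),
        key=lambda sv: sv[1], reverse=True)
    if not candidates:
        return None
    if len(candidates) == 1:
        a1, v1 = candidates[0]
        return a1 if v1 > m_u and m_u < eps2 else None
    (a1, v1), (_, v2) = candidates[0], candidates[1]
    if v1 - v2 > eps1 and m_u < eps2 and v1 > m_u:
        return a1  # confident identification of subject A_1
    return None   # otherwise: no decision (reject / unknown)
```

Returning `None` when the conditions fail corresponds to refusing a match rather than forcing one, which is the usual behavior for a recognition system with preset confidence thresholds.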
CN202010120383.1A 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory Active CN111401145B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010120383.1A CN111401145B (en) 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory


Publications (2)

Publication Number Publication Date
CN111401145A true CN111401145A (en) 2020-07-10
CN111401145B CN111401145B (en) 2022-05-03

Family

ID=71432134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010120383.1A Active CN111401145B (en) 2020-02-26 2020-02-26 Visible light iris recognition method based on deep learning and DS evidence theory

Country Status (1)

Country Link
CN (1) CN111401145B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106326874A (en) * 2016-08-30 2017-01-11 天津中科智能识别产业技术研究院有限公司 Method and device for recognizing iris in human eye images
CN107220598A (en) * 2017-05-12 2017-09-29 中国科学院自动化研究所 Iris Texture Classification based on deep learning feature and Fisher Vector encoding models
CN107330395A (en) * 2017-06-27 2017-11-07 中国矿业大学 A kind of iris image encryption method based on convolutional neural networks
CN107341447A (en) * 2017-06-13 2017-11-10 华南理工大学 A kind of face verification mechanism based on depth convolutional neural networks and evidence k nearest neighbor
CN108830296A (en) * 2018-05-18 2018-11-16 河海大学 A kind of improved high score Remote Image Classification based on deep learning
CN109409342A (en) * 2018-12-11 2019-03-01 北京万里红科技股份有限公司 A kind of living iris detection method based on light weight convolutional neural networks
CN110059589A (en) * 2019-03-21 2019-07-26 昆山杜克大学 The dividing method of iris region in a kind of iris image based on Mask R-CNN neural network
CN110321844A (en) * 2019-07-04 2019-10-11 北京万里红科技股份有限公司 A kind of quick iris detection method based on convolutional neural networks
CN110688951A (en) * 2019-09-26 2020-01-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ARIF IQBAL MOZUMDER et al.: "An efficient approach towards iris recognition with modular neural network match score fusion", 2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC) *
QI ZHANG et al.: "Deep Feature Fusion for Iris and Periocular Biometrics on Mobile Devices", IEEE Transactions on Information Forensics and Security *
LI Xianfeng et al.: "Multi-feature fusion weed recognition method based on SVM and D-S evidence theory", Transactions of the Chinese Society for Agricultural Machinery *
HU Jianhui: "Research on iris recognition algorithms based on bionic pattern recognition", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113052012A (en) * 2021-03-08 2021-06-29 广东技术师范大学 Eye disease image identification method and system based on improved D-S evidence
CN113052012B (en) * 2021-03-08 2021-11-19 广东技术师范大学 Eye disease image identification method and system based on improved D-S evidence
CN113706469A (en) * 2021-07-29 2021-11-26 天津中科智能识别产业技术研究院有限公司 Iris automatic segmentation method and system based on multi-model voting mechanism
CN113706470A (en) * 2021-07-29 2021-11-26 天津中科智能识别产业技术研究院有限公司 Iris image segmentation method and device, electronic equipment and storage medium
CN113706470B (en) * 2021-07-29 2023-12-15 天津中科智能识别产业技术研究院有限公司 Iris image segmentation method and device, electronic equipment and storage medium
CN113706469B (en) * 2021-07-29 2024-04-05 天津中科智能识别产业技术研究院有限公司 Iris automatic segmentation method and system based on multi-model voting mechanism
CN114330454A (en) * 2022-01-05 2022-04-12 东北农业大学 Live pig cough sound identification method based on DS evidence theory fusion characteristics
CN115457351A (en) * 2022-07-22 2022-12-09 中国人民解放军战略支援部队航天工程大学 Multi-source information fusion uncertainty judgment method
CN115457351B (en) * 2022-07-22 2023-10-20 中国人民解放军战略支援部队航天工程大学 Multi-source information fusion uncertainty judging method
CN117351579A (en) * 2023-10-18 2024-01-05 北京建筑大学 Iris living body detection method and device based on multi-source information fusion
CN117351579B (en) * 2023-10-18 2024-04-16 北京建筑大学 Iris living body detection method and device based on multi-source information fusion

Also Published As

Publication number Publication date
CN111401145B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN111401145B (en) Visible light iris recognition method based on deep learning and DS evidence theory
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN106096538B (en) Face identification method and device based on sequencing neural network model
CN106845510B (en) Chinese traditional visual culture symbol recognition method based on depth level feature fusion
CN108520226B (en) Pedestrian re-identification method based on body decomposition and significance detection
CN110321967B (en) Image classification improvement method based on convolutional neural network
WO2019080203A1 (en) Gesture recognition method and system for robot, and robot
CN104504366A (en) System and method for smiling face recognition based on optical flow features
CN110059586B (en) Iris positioning and segmenting system based on cavity residual error attention structure
CN110175615B (en) Model training method, domain-adaptive visual position identification method and device
US20230047131A1 (en) Contour shape recognition method
CN111179216B (en) Crop disease identification method based on image processing and convolutional neural network
CN106980852A (en) Based on Corner Detection and the medicine identifying system matched and its recognition methods
CN111191583A (en) Space target identification system and method based on convolutional neural network
CN112150493A (en) Semantic guidance-based screen area detection method in natural scene
CN105718889A (en) Human face identity recognition method based on GB(2D)2PCANet depth convolution model
CN109360179B (en) Image fusion method and device and readable storage medium
CN105303150A (en) Method and system for implementing image processing
CN112801146A (en) Target detection method and system
CN109344856B (en) Offline signature identification method based on multilayer discriminant feature learning
CN109815923B (en) Needle mushroom head sorting and identifying method based on LBP (local binary pattern) features and deep learning
CN107818299A (en) Face recognition algorithms based on fusion HOG features and depth belief network
CN113221956B (en) Target identification method and device based on improved multi-scale depth model
CN110046544A (en) Digital gesture identification method based on convolutional neural networks
CN105893941B (en) A kind of facial expression recognizing method based on area image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant