CN110175509A - All-weather periocular recognition method based on cascaded super-resolution - Google Patents

All-weather periocular recognition method based on cascaded super-resolution Download PDF

Info

Publication number
CN110175509A
Authority
CN
China
Prior art keywords
periocular
image
super-resolution
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910281741.4A
Other languages
Chinese (zh)
Other versions
CN110175509B (en)
Inventor
曹志诚
庞辽军
赵恒
赵远明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Institute Of Integrated Circuit Innovation Xi'an University Of Electronic Science And Technology
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910281741.4A priority Critical patent/CN110175509B/en
Publication of CN110175509A publication Critical patent/CN110175509A/en
Application granted granted Critical
Publication of CN110175509B publication Critical patent/CN110175509B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention belongs to the fields of pattern recognition and digital image processing, and in particular relates to an all-weather periocular recognition method based on cascaded super-resolution. Periocular images are acquired with a multispectral camera, and the limited periocular data set is preprocessed and augmented with additional samples. A deep-learning-based convolutional neural network performs image super-resolution on the periocular data set to enlarge the periocular images; the enlarged data set is then restored with a deep-learning-based image deconvolution technique; the restored periocular data are further enhanced by Laplacian sharpening. On the resulting periocular data set, a novel stacked neural network model is constructed using deep learning theory; the data set is divided into a training set and a test set, and features of the test set are computed with the trained convolutional neural network model. The present invention exhibits strong robustness and good generalization capability.

Description

All-weather periocular recognition method based on cascaded super-resolution
Technical field
The invention belongs to the fields of pattern recognition and digital image processing, and in particular to an all-weather periocular recognition method based on cascaded super-resolution.
Background technique
Periocular recognition is a new biometric modality that has emerged in recent years and offers specific advantages. For example, it can serve as a complement to face recognition, and the periocular region has been shown to be the most informative facial part when the face is occluded. In addition, periocular recognition can be regarded as an alternative when iris recognition fails, because iris recognition requires iris images of very high quality, a requirement that usually cannot be met at long distances.
So far, however, researchers have focused only on periocular recognition under visible light, which usually performs poorly in harsh weather and environments, for example unconstrained acquisition, uneven illumination, night, and rain or snow. With the emergence of applications in various complex real-world environments, visible-light periocular recognition is increasingly unable to meet the requirements; real-world surveillance tasks, for instance, frequently take place at night or in rain and snow, where obtaining high-definition face images under visible light is a nearly impossible task. New algorithms and systems are therefore needed to improve the universality and robustness of periocular recognition. On this basis, this patent proposes a multispectral periocular recognition technique that combines visible and infrared light. The technique has the advantage of all-weather operation and is suitable for a variety of environments: daytime and night, sunny days, rain and snow, and so on.
On the other hand, since the periocular image is usually cropped from a face image, the corresponding region is small and the image size (i.e., the resolution) is low. Existing periocular recognition techniques often extract features and perform recognition directly on the original-size periocular image, without considering the influence of the periocular image size on the final recognition performance. Since high-resolution, high-quality images are of significant value for improving recognition performance, this patent proposes a cascaded super-resolution periocular recognition technique: by introducing super-resolution, the small periocular image is successfully enlarged several times, increasing the effective area available for feature extraction during periocular recognition. However, although plain super-resolution enlarges the periocular area, it brings the side effect of degraded image quality. To solve this problem, the periocular super-resolution method is further refined: a deep-learning-based periocular image restoration technique is introduced after super-resolution and combined with an image enhancement step based on Laplacian sharpening. The complete procedure is called cascaded periocular super-resolution. Finally, the cascaded periocular super-resolution technique is combined with the aforementioned multispectral all-weather characteristic, and the whole is named the all-weather periocular recognition method based on cascaded super-resolution.
Finally, up to now the feature extraction algorithms used for periocular recognition have all been traditional ones, such as PCA, LBP, WLD and SIFT. These traditional algorithms are hand-crafted, their design is usually very complicated, they perform well only under particular conditions, their robustness is poor, and they handle problems such as illumination variation badly. As deep learning matures, this problem needs to be addressed: automatic feature extraction methods such as convolutional neural networks are simpler and more robust. For this problem, the present invention therefore proposes StackConvNet, a highly robust convolutional neural network for periocular feature extraction.
In summary, the complete all-weather periocular recognition technique proposed by the present invention can overcome many defects of traditional periocular recognition techniques, such as a narrow application range, low recognition performance, and poor robustness of feature extraction. The present invention provides new theoretical and algorithmic support for the practical application of periocular recognition, making it more practical, reliable, and universal. The present invention can be widely applied outdoors, at night, in rain and snow, and in other complex environments to attendance, civilian surveillance, public-security law enforcement, access control, residential entry, and similar applications.
Summary of the invention
In view of the problems existing in the prior art, the present invention provides an all-weather periocular recognition method based on cascaded super-resolution.
The invention is realized as follows. An all-weather periocular recognition method based on cascaded super-resolution comprises:
Step 1: acquire periocular images of an individual simultaneously with a multispectral camera, apply the necessary preprocessing to the limited periocular data set, and augment the samples;
Step 2: enlarge the periocular data set processed in step 1 with a deep-learning-based super-resolution technique;
Step 3: restore the periocular data set processed in step 2 with a deep-learning-based image deconvolution technique;
Step 4: enhance the periocular data set restored in step 3 with Laplacian sharpening;
Step 5: construct a novel stacked neural network model using deep learning theory, based on the periocular data set obtained in step 4;
Step 6: divide the periocular data set into a training set and a test set; for the training set, train the convolutional neural network with the triplet loss function and the back-propagation algorithm, obtain the model, and save it;
Step 7: for the test set obtained from the division in step 6, compute features with the convolutional neural network model trained in step 6, compute the matching score matrix with the Euclidean distance, and compute the GAR and FAR values from this matching score matrix.
Further, the image preprocessing and sample augmentation of step 1 comprise:
(1) acquire periocular images with a multispectral camera; the specific electromagnetic bands of the multispectral camera are the visible band and the infrared band, the infrared band in turn consisting of two sub-bands, a 980 nm near-infrared band (NIR) and a 1550 nm short-wave infrared band (SWIR);
(2) convert the visible-light periocular image to a grayscale image with the following formula:
Igray = 0.2989 × R + 0.5870 × G + 0.1140 × B;
then normalize it into [0, 255] with the following formula:
In = 255 × (I − Imin) / (Imax − Imin);
where I is the periocular image, Imax and Imin are respectively the maximum and minimum gray values in the periocular image I, and In is the normalized output;
(3) enhance the infrared periocular image with the log operator, whose formula is:
I = log(1 + X);
then normalize it into [0, 255] again with the same method as in step (2);
(4) expand the data of the periocular images of step (2) to 6 times the original amount by representing them in different color spaces (RGB, HSV) and by rotation operations.
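For illustration only, a minimal Python sketch of the six-fold sample expansion of item (4) is given below. The text does not state which color-space representations and rotation angles make up the factor of 6, so the combination used here (RGB, HSV, grayscale, and three small rotations) is an assumption, not the patented configuration.

import cv2
import numpy as np

def expand_samples(rgb):
    # Produce 6 variants of one periocular crop: RGB, HSV, grayscale, and three rotations.
    variants = [rgb,
                cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV),
                cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)]
    h, w = rgb.shape[:2]
    for angle in (-10, 5, 10):  # illustrative rotation angles
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        variants.append(cv2.warpAffine(rgb, m, (w, h)))
    return variants

crop = np.random.randint(0, 256, (64, 128, 3), dtype=np.uint8)  # placeholder RGB periocular crop
augmented = expand_samples(crop)  # 6 samples derived from one original image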
Further, in step 2 the small periocular images are reconstructed with the deep-learning-based super-resolution method; taking the relationship between the deep-learning method and the traditional sparse-coding-based method as its basis, the method divides the three-layer convolutional neural network into three parts: patch extraction, non-linear mapping, and reconstruction.
Further, in step 3 an image deconvolution technique based on a convolutional neural network is used to deblur and restore the periocular data set reconstructed by super-resolution; the network consists of three modules in total: a deconvolution module, an artifact removal module, and a reconstruction module.
Further, in step 4 the periocular images restored by image deconvolution are enhanced with the Laplacian sharpening technique; the specific steps are as follows:
(1) compute the second derivative with the Laplacian operator:
∇²I = ∂²I/∂x² + ∂²I/∂y²;
where ∂²I/∂x² and ∂²I/∂y² are respectively:
∂²I/∂x² = I(x+1, y) + I(x−1, y) − 2I(x, y);
∂²I/∂y² = I(x, y+1) + I(x, y−1) − 2I(x, y);
where I(x, y) is the periocular image restored by the image deconvolution technique, and ∂²I/∂x² and ∂²I/∂y² are the second derivatives along the x-axis and y-axis respectively;
(2) add the original periocular image to the result of the Laplacian operator:
Ish(x, y) = I(x, y) + c · ∇²I(x, y);
where Ish(x, y) is the sharpened periocular image and c is a weight that adjusts the degree of sharpening.
Further, in step 5 the novel stacked neural network model StackConvNet is constructed using deep learning theory; the network architecture comprises 10 convolutional layers, 6 max-pooling layers, and 2 fully connected layers in total, with Dropout layers introduced in the 2 fully connected layers.
Further, in step 6 the data set is divided into a training set and a test set and the model is trained with the triplet loss function; the specific steps are as follows:
(1) divide the data set into a training set and a test set at a ratio of 7:3;
(2) for the training set, train the convolutional neural network with the triplet loss function and the back-propagation algorithm, obtain the model, and save it; the triplet loss function used is:
L = Σ_i max( ||f(x_i^a) − f(x_i^p)||² − ||f(x_i^a) − f(x_i^n)||² + α, 0 );
where x_i^a, x_i^p and x_i^n denote the three image inputs of the network model, x_i^a and x_i^p being two images from the same class and x_i^a and x_i^n two images from different classes, and α is the margin parameter; f(x_i^a), f(x_i^p) and f(x_i^n) are the output features of the network model and form the triplet.
Another object of the present invention is to provide a digital image processing system applying the described periocular recognition method based on the cascaded super-resolution technique.
Another object of the present invention is to provide an image steganalysis system applying the described periocular recognition method based on the cascaded super-resolution technique.
In conclusion advantages of the present invention and good effect are as follows: tandem type eye circumference super-resolution technique of the invention is drawn Enter, final recognition accuracy is stepped up by the resolution technique of tandem type.Specifically: firstly, carrying out eye circumference knowledge Super-resolution amplification first is carried out to small size eye circumference image before not, to increase eye circumference image effective area, to be conducive to mention Rise eye circumference recognition performance;In addition, the eye circumference image enhancement technique based on image restoration is added, after super-resolution to alleviate eye All super-resolution bring image quality decrease problems;It is further to be eventually adding the image sharpening step based on Laplace operator Promote picture quality.
Compared with the prior art, the present invention has the following advantages:
(1) For the periocular recognition problem, the present invention proposes a cascaded periocular super-resolution technique that progressively improves periocular recognition accuracy by cascaded enlargement of the periocular area. Before periocular recognition, super-resolution enlargement of the periocular image is performed first, followed by image restoration and sharpening to enhance periocular quality. Experiments show that this method outperforms traditional enhancement algorithms.
(2) For the feature extraction problem in periocular recognition, the present invention designs a novel stacked convolutional neural network, StackConvNet. Experiments show that, compared with traditional feature extraction operators, the proposed network architecture extracts more robust periocular features and achieves higher recognition performance.
(3) For the difficulty that the effective area of the periocular image is small, the present invention applies the deep-learning-based super-resolution reconstruction method to periocular image enlargement; experiments show that this method can successfully enlarge the periocular area.
(4) The present invention organically combines periocular image deblurring, Laplacian sharpening, and the super-resolution technique, and proposes a cascaded super-resolution framework that successfully solves the problem of low image quality after super-resolution; experiments show that its use significantly improves the final recognition rate.
Description of the drawings
Fig. 1 is a flow chart of the all-weather periocular recognition method based on cascaded super-resolution provided by an embodiment of the present invention.
Fig. 2 is the module architecture diagram of the periocular recognition system provided by an embodiment of the present invention.
Fig. 3 shows examples of the periocular data set used in an embodiment of the present invention, i.e., images obtained by shooting in different scenes.
Fig. 4 shows example periocular images after super-resolution enlargement performed on the basis of Fig. 3.
Fig. 5 is a schematic diagram of the SRCNN super-resolution reconstruction network used in an embodiment of the present invention.
Fig. 6 compares the mean sharpness obtained by super-resolution enlargement with that obtained by interpolation enlargement in an embodiment of the present invention.
Fig. 7 is a schematic diagram of the deep-learning-based deconvolution network used in an embodiment of the present invention.
Fig. 8 shows the specific parameters of the deep-learning-based deconvolution network framework used in an embodiment of the present invention.
Fig. 9 is the framework diagram of the constructed novel stacked neural network (StackConvNet) provided by an embodiment of the present invention.
Fig. 10 is a schematic comparison of results between an embodiment of the present invention and some conventional methods.
Fig. 11 compares the GAR and EER of the proposed method and other methods, including results before and after using the super-resolution technique.
Specific embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the embodiments. It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
The application principle of the present invention is explained in detail below with reference to the accompanying drawings.
As shown in Fig. 1, the all-weather periocular recognition method based on cascaded super-resolution provided by the embodiment of the present invention comprises the following steps:
S101: acquire periocular images with a multispectral camera, apply the necessary preprocessing to the limited periocular data set, and augment the samples;
S102: enlarge the processed periocular data set with a deep-learning-based super-resolution technique;
S103: restore the processed periocular data set with a deep-learning-based image deconvolution technique;
S104: enhance the restored periocular data set with Laplacian sharpening;
S105: construct a novel stacked neural network model (named StackConvNet) using deep learning theory, based on the resulting periocular data set;
S106: divide the periocular data set into a training set and a test set; for the training set, train the convolutional neural network with the triplet loss function and the back-propagation algorithm, obtain the model, and save it;
S107: for the test set divided in step S106, compute features with the convolutional neural network model trained in step S106, compute the matching score matrix with the Euclidean distance, and compute the GAR and FAR values from this matching score matrix.
The application principle of the present invention is further described below with reference to the accompanying drawings.
The processing flow of the periocular recognition method based on the cascaded super-resolution technique provided by the embodiment of the present invention is shown in Fig. 2; it comprises image preprocessing, super-resolution enlargement, deconvolution image restoration, Laplacian sharpening, StackConvNet feature extraction, and the final training and testing part. For clarity, each part is explained separately below:
(1) Image preprocessing. The image preprocessing of the present invention comprises the following steps:
(I) convert the visible-light periocular image to a grayscale image with the following formula:
Igray = 0.2989 × R + 0.5870 × G + 0.1140 × B;
then normalize it into [0, 255] with the following formula:
In = 255 × (I − Imin) / (Imax − Imin);
where I is the periocular image, Imax and Imin are respectively the maximum and minimum gray values in the periocular image I, and In is the normalized output.
(II) enhance the infrared periocular image with the log operator, whose formula is:
I = log(1 + X);
then normalize it into [0, 255] again with the same method as in step (I).
(III) expand the data of the periocular images of step (II) to 6 times the original amount by representing them in different color spaces (RGB, HSV) and by rotation operations.
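For illustration only, a minimal Python sketch of steps (I) and (II) above (grayscale conversion, min-max normalization to [0, 255], and log-operator enhancement of the infrared images) is given below. The function names and the placeholder arrays are illustrative and are not taken from the patent.

import numpy as np

def to_gray(rgb):
    # Convert an RGB periocular crop to grayscale with the weights given above.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def normalize_0_255(img):
    # Min-max normalize an image into [0, 255].
    i_min, i_max = img.min(), img.max()
    return 255.0 * (img - i_min) / (i_max - i_min + 1e-8)

def log_enhance(ir_img):
    # Log-operator enhancement for infrared periocular images, then renormalization.
    return normalize_0_255(np.log1p(ir_img.astype(np.float64)))

# Example on placeholder crops (a real pipeline would load the captured images).
visible = np.random.randint(0, 256, (64, 128, 3)).astype(np.float64)
nir = np.random.randint(0, 256, (64, 128)).astype(np.float64)
gray = normalize_0_255(to_gray(visible))
nir_enhanced = log_enhance(nir)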
The periocular data set used in the present invention is shown in Fig. 3: panel (a) is a periocular image obtained by near-infrared shooting at 1.5 m, panel (b) is a periocular image obtained under visible light at 1.5 m, panel (c) is a near-infrared periocular image at 50 m, and panels (d) and (e) are a near-infrared periocular image and a short-wave infrared periocular image acquired at 106 m and 50 m respectively.
(2) Super-resolution enlargement. After the preprocessing of step (1), the periocular images are enlarged with the deep-learning-based super-resolution method. A schematic of the deep-learning super-resolution network is shown in Fig. 5; taking the relationship between deep learning and traditional sparse coding as its basis, this network framework divides the three-layer network into patch extraction and representation, non-linear mapping, and final reconstruction.
The periocular images of the data set of Fig. 3 after super-resolution enlargement are shown in Fig. 4. Fig. 3 is enlarged with the super-resolution technique and with a traditional interpolation algorithm respectively, and the resulting mean sharpness values are shown in Fig. 6; the super-resolution method provided by the present invention is better than the traditional interpolation method.
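For illustration only, a tf.keras sketch of a three-layer SRCNN-style network with the structure described above (patch extraction, non-linear mapping, reconstruction) is given below. The 9x9 / 1x1 / 5x5 kernel sizes and the 64/32 filter counts are the standard SRCNN choices and are assumptions here, since the text only fixes the three-stage structure.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_srcnn(channels=1):
    # The low-resolution periocular image is assumed to be bicubically
    # pre-upscaled to the target size before entering the network.
    inp = layers.Input(shape=(None, None, channels))
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)  # patch extraction and representation
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)    # non-linear mapping
    out = layers.Conv2D(channels, 5, padding="same")(x)               # reconstruction
    return models.Model(inp, out)

srcnn = build_srcnn()
srcnn.compile(optimizer="adam", loss="mse")  # trained on (upscaled LR, HR) periocular pairs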
(3) Deconvolution image restoration. After the enlargement by the super-resolution reconstruction technique of step (2), the periocular data set is deblurred and restored with an image deconvolution technique based on a convolutional neural network. The network consists of three modules in total: a deconvolution module, an artifact removal module, and a reconstruction module. The specific parameters are shown in Fig. 8: there are 11 layers in total, of which the first 3 layers form the deconvolution module, layers 4 to 8 form the artifact removal module, and the last 3 layers form the image reconstruction module. In the deconvolution module, the first layer uses a convolution kernel of size 1×45, the second layer a kernel of size 41×1, and the third layer a kernel of size 15×15. The artifact removal part uses 1×1 convolution kernels. The reconstruction part first uses a 2×2 deconvolution structure and then 2×2 convolution kernels.
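For illustration only, a tf.keras sketch of an 11-layer network laid out as described above (a 3-layer deconvolution module with 1x45, 41x1 and 15x15 kernels, a 5-layer artifact removal module with 1x1 kernels, and a 3-layer reconstruction module) is given below. The filter counts, the activations, and the exact composition of the reconstruction module are not specified in the text and are assumptions here.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_deblur_net(channels=1, width=38):
    inp = layers.Input(shape=(None, None, channels))
    # Deconvolution module (layers 1-3): large separable 1D kernels, then a 2D kernel.
    x = layers.Conv2D(width, (1, 45), padding="same", activation="relu")(inp)
    x = layers.Conv2D(width, (41, 1), padding="same", activation="relu")(x)
    x = layers.Conv2D(width, (15, 15), padding="same", activation="relu")(x)
    # Artifact removal module (layers 4-8): 1x1 convolutions.
    for _ in range(5):
        x = layers.Conv2D(width, (1, 1), padding="same", activation="relu")(x)
    # Reconstruction module (layers 9-11): 2x2 transposed convolution, then 2x2 convolutions.
    x = layers.Conv2DTranspose(width, (2, 2), padding="same", activation="relu")(x)
    x = layers.Conv2D(width, (2, 2), padding="same", activation="relu")(x)
    out = layers.Conv2D(channels, (2, 2), padding="same")(x)
    return models.Model(inp, out)

deblur_net = build_deblur_net()
deblur_net.compile(optimizer="adam", loss="mse")  # trained on (blurred, sharp) periocular pairs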
(4) Laplacian sharpening. After the enhancement by the deconvolution technique of step (3), the periocular images restored by the image deconvolution technique are further enhanced with the Laplacian sharpening technique; the specific operations are as follows:
(I) first compute the second derivative with the Laplacian operator:
∇²I = ∂²I/∂x² + ∂²I/∂y²;
where ∂²I/∂x² and ∂²I/∂y² are respectively:
∂²I/∂x² = I(x+1, y) + I(x−1, y) − 2I(x, y);
∂²I/∂y² = I(x, y+1) + I(x, y−1) − 2I(x, y);
where I(x, y) is the periocular image restored by the image deconvolution technique, and ∂²I/∂x² and ∂²I/∂y² are the second derivatives along the x-axis and y-axis respectively.
(II) to obtain the sharpened periocular image, add the original periocular image to the result of the Laplacian operator:
Ish(x, y) = I(x, y) + c · ∇²I(x, y);
where Ish(x, y) is the sharpened periocular image and c is a weight that adjusts the degree of sharpening.
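For illustration only, a NumPy sketch of this sharpening step is given below, following the discrete second-derivative and weighted-addition formulas above. The border handling and the example value c = -1 (appropriate for the centre-negative Laplacian written here) are illustrative choices, not values fixed by the patent.

import numpy as np

def laplacian_sharpen(img, c=-1.0):
    # Ish = I + c * Laplacian(I); with the centre-negative Laplacian below,
    # a negative c (e.g. -1) sharpens the image.
    img = img.astype(np.float64)
    p = np.pad(img, 1, mode="edge")
    d2x = p[1:-1, 2:] + p[1:-1, :-2] - 2.0 * img   # I(x+1,y) + I(x-1,y) - 2I(x,y)
    d2y = p[2:, 1:-1] + p[:-2, 1:-1] - 2.0 * img   # I(x,y+1) + I(x,y-1) - 2I(x,y)
    sharpened = img + c * (d2x + d2y)
    return np.clip(sharpened, 0, 255)

# Example on a restored grayscale periocular crop.
restored = np.random.randint(0, 256, (64, 128)).astype(np.float64)
sharp = laplacian_sharpen(restored, c=-1.0)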
(5) StackConvNet feature extraction. After the Laplacian sharpening enhancement of step (4), features are extracted from the periocular images with the novel stacked neural network model StackConvNet constructed by the present invention on the basis of deep learning theory. The StackConvNet framework is shown in Fig. 9; as shown, the StackConvNet network comprises 10 convolutional layers, 6 max-pooling layers, and 2 fully connected layers in total, and Dropout layers are introduced in the 2 fully connected layers to prevent overfitting.
As shown in Fig. 9, variables starting with C denote convolutional layers, variables starting with MP denote max-pooling operations, and variables starting with FC denote fully connected layers. All convolutional layers use 3×3 kernels with stride 1 and zero padding during convolution; all pooling layers are max-pooling layers of size 2×2 with stride 2 and zero padding during pooling. The numbers of convolution kernels per layer are, in order, 16, 16, 32, 32, 32, 64, 64, 64, 64, 64, and the numbers of neurons in the last two fully connected layers are 512 and 128 respectively.
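For illustration only, a tf.keras sketch of a StackConvNet-style feature extractor consistent with the counts above (ten 3x3 convolutions with 16/16/32/32/32/64/64/64/64/64 filters, six 2x2 max-pooling layers, and fully connected layers of 512 and 128 units with Dropout) is given below. The grouping of convolutions between the pooling layers, the input size, the Dropout rate, and the exact Dropout placement are not given in the text and are assumptions here.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_stackconvnet(input_shape=(128, 128, 1), embedding_dim=128):
    # Ten 3x3 convolutions grouped into six blocks, each block followed by 2x2 max pooling.
    blocks = [[16, 16], [32, 32], [32], [64, 64], [64, 64], [64]]
    inp = layers.Input(shape=input_shape)
    x = inp
    for block in blocks:
        for filters in block:
            x = layers.Conv2D(filters, 3, strides=1, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(512, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(embedding_dim)(x)  # 128-dimensional periocular feature
    return models.Model(inp, x, name="StackConvNet")

stackconvnet = build_stackconvnet()
stackconvnet.summary()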
(6) Training and testing. The periocular data set is first divided into a training set and a test set at a ratio of 7:3. During training, the model is trained with the triplet loss function; the triplet loss function used is:
L = Σ_i max( ||f(x_i^a) − f(x_i^p)||² − ||f(x_i^a) − f(x_i^n)||² + α, 0 );
where x_i^a, x_i^p and x_i^n denote the three image inputs of the network model, x_i^a and x_i^p being two images from the same class and x_i^a and x_i^n two images from different classes, and α is the margin parameter; correspondingly, f(x_i^a), f(x_i^p) and f(x_i^n) are the output features of the network model and form the triplet.
The triplet loss function therefore takes the features of three images and the corresponding labels as input; its purpose is to make the intra-class distance smaller than the inter-class distance through training on a large number of triplets. The optimizer selected during training is Adam with a learning rate of 0.001 and a batch size of 128; the curves of the loss value against the number of iterations are monitored with the TensorBoard plug-in, and a suitable threshold is set so that training is stopped and the model is saved.
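For illustration only, a tf.keras sketch of one training step with the triplet loss above is given below, using the stated Adam settings (learning rate 0.001, batch size 128). The margin value and the make_triplet_batches generator that assembles (anchor, positive, negative) batches are hypothetical placeholders, not specified in the text.

import tensorflow as tf

ALPHA = 0.2  # margin alpha; its value is not stated in the text

def triplet_loss(anchor, positive, negative, alpha=ALPHA):
    # max(||f(a)-f(p)||^2 - ||f(a)-f(n)||^2 + alpha, 0), averaged over the batch.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + alpha, 0.0))

optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

@tf.function
def train_step(model, a_imgs, p_imgs, n_imgs):
    with tf.GradientTape() as tape:
        loss = triplet_loss(model(a_imgs, training=True),
                            model(p_imgs, training=True),
                            model(n_imgs, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# for a_imgs, p_imgs, n_imgs in make_triplet_batches(train_set, batch_size=128):
#     loss = train_step(stackconvnet, a_imgs, p_imgs, n_imgs)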
The application effect of the present invention is described in detail below with reference to the tests.
During testing, the present invention uses the Euclidean distance as the metric function. After the features of the test set are extracted with the trained model, the test-set features and their labels are fed into the matching function to obtain the matching score matrix, from which GAR and FAR are computed; the specific steps are as follows:
(I) from the minimum matching score Smin and the maximum matching score Smax, a series of thresholds Ti is obtained over the interval [Smin, Smax] with a fixed step, where Smin ≤ Ti ≤ Smax; a score greater than the threshold Ti is a genuine match, and a score less than the threshold Ti is a false match.
(II) from step (I), a series of genuine acceptance rates GARi and false acceptance rates FARi is obtained for the different thresholds Ti.
(III) the receiver operating characteristic (ROC) curve is plotted from the genuine acceptance rates GARi and false acceptance rates FARi; the result is shown in Fig. 10.
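For illustration only, a NumPy sketch of the matching and GAR/FAR computation described above (a pairwise Euclidean score matrix and a sweep of thresholds between the minimum and maximum scores) is given below. Because smaller Euclidean distances indicate better matches, the sketch accepts a pair when its distance is below the threshold, which mirrors the "greater than threshold" rule stated for similarity scores; the number of threshold steps is an arbitrary choice.

import numpy as np

def match_scores(features):
    # Pairwise Euclidean distance matrix between the test-set feature vectors.
    diff = features[:, None, :] - features[None, :, :]
    return np.sqrt(np.sum(diff ** 2, axis=-1))

def gar_far_curve(scores, labels, num_thresholds=100):
    # Sweep thresholds over [min, max] of the scores and return the (GAR_i, FAR_i) series.
    labels = np.asarray(labels)
    iu = np.triu_indices(len(labels), k=1)            # each pair counted once, no self-matches
    dist = scores[iu]
    genuine = (labels[:, None] == labels[None, :])[iu]
    gars, fars = [], []
    for t in np.linspace(dist.min(), dist.max(), num_thresholds):
        accepted = dist <= t                          # accept when the distance is below the threshold
        gars.append(np.mean(accepted[genuine]))       # genuine acceptance rate GAR_i
        fars.append(np.mean(accepted[~genuine]))      # false acceptance rate FAR_i
    return np.array(gars), np.array(fars)

# Example: features = stackconvnet.predict(test_images); labels = test_labels
# gar, far = gar_far_curve(match_scores(features), labels)  # plot far against gar for the ROC curve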
The application effect of the present invention is described in detail below with reference to the comparisons.
To demonstrate the superiority of the cascaded super-resolution network and the StackConvNet network introduced by the present invention, experiments are designed and compared in the following three aspects. First, to prove that periocular recognition after restoration with the cascaded super-resolution technique outperforms recognition without it, periocular recognition experiments are carried out with and without the cascaded super-resolution, and the recognition results are presented as ROC curves and as genuine acceptance rate (GAR) and equal error rate (EER) values, as shown in Fig. 10 and Fig. 11. From the first two ROC curves of Fig. 10 and the last two rows of Fig. 11, it can be concluded that the recognition results with the cascaded super-resolution technique are better than those without it. Second, to illustrate the advantage of the deep-learning-based super-resolution used, it is compared with the well-known traditional interpolation methods, bilinear and bicubic interpolation, and the sharpness value is computed to quantitatively measure the contrast; the results are shown in Fig. 6. From the figure it can be seen that the mean sharpness obtained by the deep-learning-based super-resolution technique is higher than that of traditional interpolation, demonstrating that the deep-learning super-resolution method is better than the conventional methods. Finally, to demonstrate the superiority of the StackConvNet network proposed by the present invention for feature extraction, it is compared with the typical traditional face recognition methods local binary patterns (LBP) and principal component analysis (PCA), as shown in Fig. 10 and Fig. 11. From the ROC curves of Fig. 10 it can be seen that the two ROC curves of the proposed StackConvNet network are clearly above those of PCA and LBP, and from the EER column of Fig. 11 the EER value of the proposed StackConvNet network is significantly lower than those of LBP and PCA. In summary, the periocular recognition method based on deep-learning cascaded super-resolution proposed by the present invention outperforms the other methods and exhibits good robustness.
The above is merely a preferred embodiment of the present invention and is not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall all be included in the protection scope of the present invention.

Claims (9)

1. An all-weather periocular recognition method based on cascaded super-resolution, characterized in that the all-weather periocular recognition method based on cascaded super-resolution comprises:
Step 1: acquire periocular images of an individual simultaneously with a multispectral camera, apply the necessary preprocessing to the limited periocular data set, and augment the samples;
Step 2: enlarge the periocular data set processed in step 1 with a deep-learning-based super-resolution technique;
Step 3: restore the periocular data set processed in step 2 with a deep-learning-based image deconvolution technique;
Step 4: enhance the periocular data set restored in step 3 with Laplacian sharpening;
Step 5: construct a novel stacked neural network model using deep learning theory, based on the periocular data set obtained in step 4;
Step 6: divide the periocular data set into a training set and a test set; for the training set, train the convolutional neural network with the triplet loss function and the back-propagation algorithm, obtain the model, and save it;
Step 7: for the test set obtained from the division in step 6, compute features with the convolutional neural network model trained in step 6, compute the matching score matrix with the Euclidean distance, and compute the GAR and FAR values from this matching score matrix.
2. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that the image preprocessing and sample augmentation of step 1 comprise:
(1) acquire periocular images with a multispectral camera; the specific electromagnetic bands of the multispectral camera are the visible band and the infrared band, the infrared band in turn consisting of two sub-bands, a 980 nm near-infrared band (NIR) and a 1550 nm short-wave infrared band (SWIR);
(2) convert the visible-light periocular image to a grayscale image with the following formula:
Igray = 0.2989 × R + 0.5870 × G + 0.1140 × B;
then normalize it into [0, 255] with the following formula:
In = 255 × (I − Imin) / (Imax − Imin);
where I is the periocular image, Imax and Imin are respectively the maximum and minimum gray values in the periocular image I, and In is the normalized output;
(3) enhance the infrared periocular image with the log operator, whose formula is:
I = log(1 + X);
then normalize it into [0, 255] again with the same method as in step (2);
(4) expand the data of the periocular images of step (2) to 6 times the original amount by representing them in different color spaces (RGB, HSV) and by rotation operations.
3. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that in step 2 the small periocular images are reconstructed with the deep-learning-based super-resolution method; taking the relationship between the deep-learning method and the traditional sparse-coding-based method as its basis, the method divides the three-layer convolutional neural network into three parts: patch extraction, non-linear mapping, and reconstruction.
4. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that in step 3 an image deconvolution technique based on a convolutional neural network is used to deblur and restore the periocular data set reconstructed by super-resolution; the network consists of three modules in total: a deconvolution module, an artifact removal module, and a reconstruction module.
5. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that in step 4 the periocular images restored by the image deconvolution technique are enhanced with the Laplacian sharpening technique, the specific steps being as follows:
(1) compute the second derivative with the Laplacian operator:
∇²I = ∂²I/∂x² + ∂²I/∂y²;
where ∂²I/∂x² and ∂²I/∂y² are respectively:
∂²I/∂x² = I(x+1, y) + I(x−1, y) − 2I(x, y);
∂²I/∂y² = I(x, y+1) + I(x, y−1) − 2I(x, y);
where I(x, y) is the periocular image restored by the image deconvolution technique, and ∂²I/∂x² and ∂²I/∂y² are the second derivatives along the x-axis and y-axis respectively;
(2) add the original periocular image to the result of the Laplacian operator:
Ish(x, y) = I(x, y) + c · ∇²I(x, y);
where Ish(x, y) is the sharpened periocular image and c is a weight that adjusts the degree of sharpening.
6. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that in step 5 the novel stacked neural network model StackConvNet is constructed using deep learning theory; the network architecture comprises 10 convolutional layers, 6 max-pooling layers, and 2 fully connected layers in total, with Dropout layers introduced in the 2 fully connected layers.
7. The all-weather periocular recognition method based on cascaded super-resolution of claim 1, characterized in that in step 6 the data set is divided into a training set and a test set and the model is trained with the triplet loss function; the specific steps are as follows:
(1) divide the data set into a training set and a test set at a ratio of 7:3;
(2) for the training set, train the convolutional neural network with the triplet loss function and the back-propagation algorithm, obtain the model, and save it; the triplet loss function used is:
L = Σ_i max( ||f(x_i^a) − f(x_i^p)||² − ||f(x_i^a) − f(x_i^n)||² + α, 0 );
where x_i^a, x_i^p and x_i^n denote the three image inputs of the network model, x_i^a and x_i^p being two images from the same class and x_i^a and x_i^n two images from different classes, and α is the margin parameter; f(x_i^a), f(x_i^p) and f(x_i^n) are the output features of the network model and form the triplet.
8. A digital image processing system applying the all-weather periocular recognition method based on cascaded super-resolution of any one of claims 1 to 7.
9. An image steganalysis system applying the all-weather periocular recognition method based on cascaded super-resolution of any one of claims 1 to 7.
CN201910281741.4A 2019-04-09 2019-04-09 All-weather eye circumference identification method based on cascade super-resolution Active CN110175509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910281741.4A CN110175509B (en) 2019-04-09 2019-04-09 All-weather eye circumference identification method based on cascade super-resolution

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910281741.4A CN110175509B (en) 2019-04-09 2019-04-09 All-weather eye circumference identification method based on cascade super-resolution

Publications (2)

Publication Number Publication Date
CN110175509A true CN110175509A (en) 2019-08-27
CN110175509B CN110175509B (en) 2022-07-12

Family

ID=67689669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910281741.4A Active CN110175509B (en) 2019-04-09 2019-04-09 All-weather eye circumference identification method based on cascade super-resolution

Country Status (1)

Country Link
CN (1) CN110175509B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079624A (en) * 2019-12-11 2020-04-28 北京金山云网络技术有限公司 Method, device, electronic equipment and medium for collecting sample information
CN114998976A (en) * 2022-07-27 2022-09-02 江西农业大学 Face key attribute identification method, system, storage medium and computer equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944379A (en) * 2017-11-20 2018-04-20 中国科学院自动化研究所 White of the eye image super-resolution rebuilding and image enchancing method based on deep learning
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944379A (en) * 2017-11-20 2018-04-20 中国科学院自动化研究所 White of the eye image super-resolution rebuilding and image enchancing method based on deep learning
CN109345449A (en) * 2018-07-17 2019-02-15 西安交通大学 A kind of image super-resolution based on converged network and remove non-homogeneous blur method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
ANJALI SHARMA et al.: "On cross spectral periocular recognition", 2014 IEEE International Conference on Image Processing *
CAO ZHICHENG et al.: "Fusion of operators for heterogeneous periocular recognition at varying ranges", Pattern Recognition Letters *
DONG CHAO et al.: "Image Super-Resolution Using Deep Convolutional Networks", IEEE Transactions on Pattern Analysis and Machine Intelligence *
ZHAO ZIJING et al.: "Accurate Periocular Recognition Under Less Constrained Environment Using Semantics-Assisted Convolutional Neural Network", IEEE Transactions on Information Forensics and Security *
ZHAO ZIJING et al.: "Improving Periocular Recognition by Explicit Attention to Critical Regions in Deep Neural Network", IEEE Transactions on Information Forensics and Security *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111079624A (en) * 2019-12-11 2020-04-28 北京金山云网络技术有限公司 Method, device, electronic equipment and medium for collecting sample information
CN111079624B (en) * 2019-12-11 2023-09-01 北京金山云网络技术有限公司 Sample information acquisition method and device, electronic equipment and medium
CN114998976A (en) * 2022-07-27 2022-09-02 江西农业大学 Face key attribute identification method, system, storage medium and computer equipment

Also Published As

Publication number Publication date
CN110175509B (en) 2022-07-12

Similar Documents

Publication Publication Date Title
CN106204779B (en) Check class attendance method based on plurality of human faces data collection strategy and deep learning
CN109685045B (en) Moving target video tracking method and system
CN108717524B (en) Gesture recognition system based on double-camera mobile phone and artificial intelligence system
CN109344883A (en) Fruit tree diseases and pests recognition methods under a kind of complex background based on empty convolution
CN109740721B (en) Wheat ear counting method and device
CN112200123B (en) Hyperspectral open set classification method combining dense connection network and sample distribution
CN114022383A (en) Moire pattern removing method and device for character image and electronic equipment
CN110610174A (en) Bank card number identification method under complex conditions
CN111415304A (en) Underwater vision enhancement method and device based on cascade deep network
Quan et al. Learn with diversity and from harder samples: Improving the generalization of CNN-based detection of computer-generated images
CN111709305B (en) Face age identification method based on local image block
CN111476727B (en) Video motion enhancement method for face-changing video detection
CN111178121A (en) Pest image positioning and identifying method based on spatial feature and depth feature enhancement technology
CN110175509A (en) A kind of round-the-clock eye circumference recognition methods based on cascade super-resolution
CN117391981A (en) Infrared and visible light image fusion method based on low-light illumination and self-adaptive constraint
CN108764287B (en) Target detection method and system based on deep learning and packet convolution
CN105426847A (en) Nonlinear enhancing method for low-quality natural light iris images
CN113077452B (en) Apple tree pest and disease detection method based on DNN network and spot detection algorithm
Román et al. Image color contrast enhancement using multiscale morphology
CN115862121B (en) Face quick matching method based on multimedia resource library
CN114926348B (en) Device and method for removing low-illumination video noise
CN115797205A (en) Unsupervised single image enhancement method and system based on Retinex fractional order variation network
CN110858304A (en) Method and equipment for identifying identity card image
CN108133467B (en) Underwater image enhancement system and method based on particle calculation
CN106981055A (en) A kind of ICCD image de-noising methods

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230224

Address after: 400031 unit 1, building 1, phase 3, R & D building, Xiyong micro power park, Shapingba District, Chongqing

Patentee after: Chongqing Institute of integrated circuit innovation Xi'an University of Electronic Science and technology

Address before: 710071 Xi'an Electronic and Science University, 2 Taibai South Road, Shaanxi, Xi'an

Patentee before: XIDIAN University