CN106326874A - Method and device for recognizing iris in human eye images - Google Patents
- Publication number
- CN106326874A CN106326874A CN201610776455.1A CN201610776455A CN106326874A CN 106326874 A CN106326874 A CN 106326874A CN 201610776455 A CN201610776455 A CN 201610776455A CN 106326874 A CN106326874 A CN 106326874A
- Authority
- CN
- China
- Prior art keywords
- image
- iris
- tested
- default
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/19—Sensors therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/197—Matching; Classification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Ophthalmology & Optometry (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention discloses a method for recognizing the iris in human eye images, comprising the steps of: building a preset convolutional neural network; selecting a plurality of human eye images in advance and preprocessing them; training the preset convolutional neural network until the resulting model converges; applying the same image preprocessing to a pair of human eye images to be tested on which iris recognition is to be performed, obtaining a corresponding iris image pair of the preset size; and inputting the iris image pair to be tested into the trained model through two channels, obtaining the correlation score of the pair, and judging whether the two images belong to the same class. The invention also discloses a device for recognizing the iris in human eye images. The method and device can recognize, promptly and accurately, the iris in human eye images collected in both controlled and uncontrolled scenes, meet the user's requirements for iris recognition, and improve the user's work efficiency.
Description
Technical field
The present invention relates to the technical fields of pattern recognition and computer vision, and in particular to a method and device for recognizing the iris in human eye images.
Background art
At present, with the development of science and technology, iris recognition is becoming increasingly common in daily life. Iris recognition is a biometric identification method that exploits the invariance and uniqueness of iris texture, and it has been successfully applied in fields such as national security, border control, banking and finance, access control and attendance, and mobile terminals. Whether viewed from the perspective of artificial intelligence research or of public safety applications, iris recognition has always been a leading-edge, intensively studied technology of considerable importance.

Iris recognition under controlled scenes has been widely studied and its techniques are relatively mature. In practical applications, however, many challenges remain, especially in uncontrolled scenes (i.e., complex, uncontrollable scenes) such as long-distance capture and scenes where the subject is not fully cooperative (e.g., a moving subject). Because of illumination and distance variation, the captured human eye images suffer from low resolution, strong noise, off-angle gaze, blur, and occlusion. In addition, with the broad development of iris image acquisition devices, heterogeneous multi-source iris recognition has exceeded the processing capability of traditional algorithms. To meet the demands of practical applications, more effective iris recognition algorithms are therefore urgently needed.
The general workflow of traditional iris recognition technology comprises: image acquisition, image preprocessing (e.g., iris segmentation), feature extraction, and pattern classification. Among these, robust iris feature extraction plays a key role in accurate recognition. It should be noted that typical iris features include local features and correlation features: a local feature describes the texture details of a single iris image, whereas a correlation feature measures the dependency between two images in order to judge whether they belong to the same class. Early iris feature extraction methods were based on hand-designed filters, which are not only time-consuming and labor-intensive but also usually fail to reach optimal results. Methods that obtain optimal filter parameters through feature selection suffer from the drawback of requiring a high-dimensional, over-complete feature pool. Traditional iris recognition methods therefore have low recognition accuracy and have difficulty handling heterogeneous iris recognition and uncontrolled application scenarios such as long-distance and mobile-terminal capture.
There is therefore an urgent need to develop a technology that can recognize, promptly and accurately, the iris in human eye images collected in both controlled and uncontrolled scenes, meet the user's requirements for iris recognition, improve the user's work efficiency, save valuable time, and effectively guarantee the accuracy of iris recognition in human eye images.
Summary of the invention
In view of this, an object of the present invention is to provide a method and device for recognizing the iris in human eye images that can recognize, promptly and accurately, the iris in human eye images collected in both controlled and uncontrolled scenes, meet the user's requirements for iris recognition, improve the user's work efficiency, save valuable time, and effectively guarantee the accuracy of iris recognition, which is of great practical significance.
To this end, the invention provides a method for recognizing the iris in a human eye image, comprising the steps of:

Step 1: building a preset convolutional neural network that processes the input images through, in sequence, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer;

Step 2: selecting a plurality of human eye images in advance and applying image preprocessing to them, obtaining a plurality of iris images of a preset size;

Step 3: from the plurality of iris images, according to the known class of each iris image, choosing pairs of iris images of the same class as positive sample pairs and pairs of iris images of different classes as negative sample pairs, inputting them through two channels into the preset convolutional neural network, and training the network until its model converges;
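The pair-construction logic of Step 3 can be sketched as follows. This is an illustrative sketch only, not code from the patent; the function and variable names are hypothetical.

```python
import itertools
import random


def build_pairs(labels, seed=0):
    """Build balanced positive/negative training pairs from class labels.

    labels: list where labels[i] is the class id of image i.
    Returns (positive_pairs, negative_pairs), equal in number,
    each pair being a tuple of image indices.
    """
    rng = random.Random(seed)
    by_class = {}
    for idx, c in enumerate(labels):
        by_class.setdefault(c, []).append(idx)
    # Exhaustive intra-class comparison: every same-class pair is a positive sample.
    positives = [p for idxs in by_class.values()
                 for p in itertools.combinations(idxs, 2)]
    # Inter-class pairs vastly outnumber intra-class ones, so randomly sample
    # only as many negatives as there are positives to limit overfitting.
    negatives = set()
    while len(negatives) < len(positives):
        i, j = rng.sample(range(len(labels)), 2)
        if labels[i] != labels[j]:
            negatives.add((min(i, j), max(i, j)))
    return positives, sorted(negatives)
```

For example, six images in classes `[0, 0, 0, 1, 1, 2]` yield four positive pairs (three from class 0, one from class 1), so four negative pairs are drawn.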
Step 4: applying the image preprocessing described in Step 2 to a pair of human eye images to be tested on which iris recognition is to be performed, obtaining a corresponding iris image pair of the preset size;

Step 5: inputting the iris image pair to be tested through two channels into the preset convolutional neural network trained in Step 3, obtaining the correlation score of the input pair, and judging whether that score falls within the preset intra-class correlation score range; if so, the pair is judged to belong to the same class, otherwise to different classes.
The method may further comprise a Step 6: translating the iris image pair to be tested several times to obtain a plurality of corresponding test pairs, inputting them into the preset convolutional neural network trained in Step 3 to obtain multiple groups of correlation scores, then fusing the score groups and outputting the final iris recognition result.
In Step 6, the fusion of the multiple groups of correlation scores comprises taking the mean, minimum, or maximum of the scores.
In Step 3, all pairs of same-class iris images are first chosen from the plurality of iris images as positive sample pairs; then, from the remaining combinations, negative sample pairs equal in number to the positive sample pairs are randomly selected.
In Step 3, the training of the preset convolutional neural network is specifically: taking the constructed positive and negative sample pairs as input, performing convolution, pooling, and fully connected operations layer by layer to obtain the output of the last layer, i.e., the matching result; comparing this output with the true label to obtain the error; and training the convolutional neural network model with the error back-propagation algorithm until the model converges.
In addition, the present invention also provides a device for recognizing the iris in human eye images, comprising:

a network building unit for building a preset convolutional neural network that processes the input images through, in sequence, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer;

an image pre-selection unit for selecting a plurality of human eye images in advance, applying image preprocessing to them to obtain a plurality of iris images of a preset size, and sending them to the network training unit;

a network training unit, connected to the network building unit and the image pre-selection unit respectively, for taking the plurality of iris images processed by the image pre-selection unit and, according to the known class of each iris image, choosing pairs of same-class iris images as positive sample pairs and pairs of different-class iris images as negative sample pairs, inputting them through two channels into the preset convolutional neural network built by the network building unit, and training the network until its model converges;

an image preprocessing unit for applying the image preprocessing to a pair of human eye images to be tested on which iris recognition is to be performed, obtaining a corresponding iris image pair of the preset size, and sending it to the image classification unit;

an image classification and recognition judging unit, connected to the network training unit and the image preprocessing unit respectively, for inputting the iris image pair to be tested, processed by the image preprocessing unit, through two channels into the preset convolutional neural network trained by the network training unit, obtaining the correlation score of the input pair, and judging whether that score falls within the preset intra-class correlation score range; if so, the pair is judged to belong to the same class, otherwise to different classes.
The image classification and recognition judging unit may further translate the iris image pair to be tested several times to obtain a plurality of corresponding test pairs, input them into the preset convolutional neural network trained by the network training unit to obtain multiple groups of correlation scores, then fuse the score groups and output the final iris recognition result.

The fusion of the multiple groups of correlation scores comprises taking the mean, minimum, or maximum of the scores.

The network training unit first chooses all pairs of same-class iris images from the plurality of iris images as positive sample pairs and then, from the remaining combinations, randomly selects negative sample pairs equal in number to the positive sample pairs.

The network training unit takes the constructed positive and negative sample pairs as input to the preset convolutional neural network, performs convolution, pooling, and fully connected operations layer by layer to obtain the output of the last layer, i.e., the matching result, compares it with the true label to obtain the error, and trains the convolutional neural network model with the error back-propagation algorithm until the model converges.
As can be seen from the technical solution provided above, compared with the prior art, the present invention provides a method and device for recognizing the iris in human eye images that can recognize, promptly and accurately, the iris in human eye images collected in both controlled and uncontrolled scenes, meet the user's requirements for iris recognition, improve the user's work efficiency, save valuable time, and effectively guarantee the accuracy of iris recognition, which is of great practical significance.
Brief description of the drawings

Fig. 1 is a flow chart of a method for recognizing the iris in a human eye image provided by the present invention;

Fig. 2 is a schematic diagram of input images being fed into the convolutional neural network in a pairwise manner in the method provided by the present invention;

Fig. 3 is a schematic diagram of inter-class comparison correlation in the method provided by the present invention;

Fig. 4 is a schematic diagram of intra-class comparison correlation in the method provided by the present invention;

Fig. 5 is a block diagram of a device for recognizing the iris in a human eye image provided by the present invention;

Fig. 6 is a schematic structural diagram of the preset convolutional neural network in the method and device provided by the present invention.
Detailed description of the invention
In order to enable those skilled in the art to better understand the solution of the present invention, the invention is described in further detail below with reference to the accompanying drawings and embodiments.
With the continuing acceleration of hardware such as graphics processing units (GPUs), methods based on deep learning have gradually demonstrated incomparable advantages: first, by training on data, a deep network model can automatically learn the features most effective for recognition, without manual intervention; second, deep learning is an end-to-end method that eliminates the complicated feature-extraction-then-classification pipeline of traditional recognition methods. Convolutional neural networks (CNNs), an important branch of deep learning, have characteristics such as weight sharing and local connectivity, and have been successfully applied in fields such as face recognition and object detection.

By applying convolutional neural networks to iris recognition, the present invention can further improve recognition accuracy, and can be effectively applied to heterogeneous iris recognition and to uncontrolled application scenarios such as long-distance and mobile-terminal capture, which traditional methods currently struggle to handle.
Referring to Fig. 1, which is a flow chart of the method for recognizing the iris in a human eye image provided by the present invention, the method takes a pair of iris images as input data and directly measures their correlation to judge whether they belong to the same class. The present invention also solves the overfitting problem that easily occurs when training a convolutional neural network on a small-scale database, thereby achieving higher accuracy than traditional recognition methods and better handling heterogeneous iris recognition and uncontrolled-scene applications.
The method for recognizing the iris in a human eye image provided by the present invention specifically comprises the following steps:

Step 1: building a preset convolutional neural network that processes the input images through, in sequence, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer.
It should be noted that, in practical applications, iris recognition still faces many challenges, especially in uncontrolled scenes such as long-distance and mobile-terminal applications, where interference includes illumination and distance variation, strong noise, low resolution, and blur. In addition, with the widespread development of iris image acquisition devices, heterogeneous multi-source iris recognition has exceeded the processing capability of traditional algorithms. Traditional iris feature extraction methods based on hand-designed filters are time-consuming and labor-intensive and usually fail to reach optimal results, while methods that obtain optimal filter parameters through feature selection require a high-dimensional, over-complete feature pool. The present invention proposes an iris recognition method based on a convolutional neural network that takes a pair of iris images as input and directly obtains the correlation score of the input pair in an end-to-end manner to judge whether the pair is intra-class or inter-class; at the same time, the method solves the overfitting problem that easily occurs when training a convolutional neural network on a small-scale database.
Step 2: selecting a plurality of human eye images in advance and applying image preprocessing to them, obtaining a plurality of iris images of a preset size.

Step 3: from the plurality of iris images, according to the known class of each iris image, choosing pairs of iris images of the same class (i.e., belonging to the same eye) as positive sample pairs and pairs of iris images of different classes (i.e., not belonging to the same eye) as negative sample pairs, inputting them through two channels (one image per channel) into the preset convolutional neural network (as shown in Fig. 2), and training the network until its model converges.
Step 4: applying the image preprocessing described in Step 2 to a pair of human eye images to be tested on which iris recognition is to be performed, obtaining a corresponding iris image pair of the preset size.

Step 5: inputting the iris image pair to be tested through two channels into the preset convolutional neural network trained in Step 3, obtaining the correlation score of the input pair (i.e., the similarity, as a percentage, between the two images of the pair), and judging whether that score falls within the preset intra-class correlation score range; if so, the pair is judged to belong to the same class (i.e., to be iris images of the same eye), otherwise to different classes.
In a specific implementation, the preset intra-class correlation score range can be configured in advance according to the user's needs; for example, it can be 60%-100%.
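The decision in Step 5 then reduces to an interval test on the score. The sketch below is illustrative only; the function name is hypothetical, and the 60% lower bound is merely the example value given above.

```python
def is_intra_class(score, lo=0.60, hi=1.00):
    """Judge a test pair as same-class when its correlation score lies
    inside the preset intra-class score range (example range: 60%-100%)."""
    return lo <= score <= hi
```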
In a specific implementation, the present invention further comprises a Step 6: translating the iris image pair to be tested several times to obtain a plurality of corresponding test pairs, inputting them into the preset convolutional neural network trained in Step 3 to obtain multiple groups of correlation scores, then fusing the score groups (e.g., by taking their mean, minimum, or maximum) and outputting the final iris recognition result. That is to say, the iris images to be tested that have passed through the image preprocessing of Step 2 (including human eye detection, edge detection, and normalization) are translated, the translated pairs are input into the trained preset convolutional neural network to obtain multiple groups of correlation scores, and the score groups are fused to produce the final iris recognition result.
It should be noted that the present invention takes into account the rotational differences of the iris images to be tested. Therefore, the iris image pairs that have undergone the preprocessing of Step 2 (i.e., after normalization) are translated, and the translated test pairs are also input into the trained convolutional neural network, yielding multiple groups of correlation scores; these groups are then fused (e.g., by mean, minimum, or maximum) to overcome the adverse effect of rotational differences, and the final iris recognition result is output.
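The translation-then-fusion idea can be sketched as follows: after rubber-sheet normalization, an eye rotation becomes a horizontal (circular) shift of the normalized image, so shifted copies are scored and the scores fused. This is an illustrative sketch, not the patent's code; all names are hypothetical.

```python
def shifted_copies(img, shifts):
    """Circularly translate a normalized iris image (a list of rows) along
    the angular axis; rotation of the eye becomes a horizontal shift after
    rubber-sheet normalization."""
    return [[row[s:] + row[:s] for row in img] for s in shifts]


def fuse_scores(scores, mode="mean"):
    """Fuse the per-shift correlation scores into one final score,
    using the mean, minimum, or maximum as described in Step 6."""
    if mode == "mean":
        return sum(scores) / len(scores)
    if mode == "min":
        return min(scores)
    if mode == "max":
        return max(scores)
    raise ValueError("unknown fusion mode: " + mode)
```

In use, each shifted copy would be paired with the enrolled image, scored by the trained network, and the resulting score list passed to `fuse_scores`.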
It should be noted that the preset convolutional neural network processes the input images through, in sequence, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer; the output layer refers to the last layer of the network. The image-pair input layer feeds two iris images into the network through two channels (one image per channel); the purpose of inputting an image pair is to measure their correlation directly. The convolutional layers convolve the input; each convolution filter shares the same parameters, which reduces the parameter count of the network model, and the convolutional layers yield the feature maps of the input image pair. The pooling layers use max pooling and mean pooling, which reduce the amount of data to be processed while ensuring that the extracted features are spatially invariant. The fully connected layer converts the high-dimensional features into a more compact one-dimensional feature vector.
In a specific implementation, for each layer of the preset convolutional neural network, the output of the preceding layer is the input of the next layer.
It should be noted that the preset convolutional neural network takes a pair of iris images as input through two channels and directly obtains their correlation score to judge whether they belong to the same class. Automatic learning by the neural network overcomes the drawback that the hand-designed filters of traditional methods are time-consuming and labor-intensive; weight sharing reduces the number of network parameters; and the pooling layers reduce the amount of data while producing spatially invariant features.
In Step 2, the image preprocessing specifically comprises the following steps: applying a human eye detector to each pre-selected or acquired human eye image to detect whether a human eye is present; if so, giving the approximate position and scale of the eye; then using edge detection to locate the inner and outer circular boundaries of the iris, obtaining the inner and outer circle centers and radii and thereby the iris region of the eye image; and finally normalizing the iris image to obtain iris images of the same size, for example 128 × 128 pixels.
In a specific implementation, the human eye detector is an existing detector, for example the detector based on Haar-like features and AdaBoost proposed by Viola et al., which detects whether a human eye is present in an image. Giving the approximate position and scale of the eye on the image means giving the bounding box of the eye via the human eye detector.
In a specific implementation, the edge detection can be an existing general-purpose edge detector, for example the gradient-based detector proposed by Wildes et al., which detects iris edge points; a Hough transform is then applied to the obtained edge points to derive the curve parameters of the inner and outer iris boundaries. The inner and outer circle centers and radii obtained by edge detection are used for iris normalization.
In a specific implementation, normalizing the iris image means unwrapping the annular iris into a rectangular shape, for example using the rubber-sheet model proposed by Daugman. The purpose of normalization is to adjust the iris to a fixed size and reduce the influence of iris deformation as far as possible.
In Step 2, for each image, the position of the eye is first obtained with the human eye detector; edge detection is then used to obtain the inner and outer circle centers and radii of the iris from the eye image; and the iris is normalized according to the centers and radii, yielding iris images of the same size.
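A minimal Daugman-style rubber-sheet unwrapping, given the circle centers and radii from the previous step, can be sketched as follows. This is an illustrative sketch under simplifying assumptions (concentric inner and outer circles, nearest-neighbour sampling); the names and output size are hypothetical.

```python
import math


def rubber_sheet(image, cx, cy, r_in, r_out, out_h=64, out_w=512):
    """Unwrap the annular iris region (inner radius r_in to outer radius
    r_out, centred at (cx, cy)) into a fixed-size rectangle, so every iris
    is compared at the same resolution. image is a 2-D list of grey values;
    rows index y, columns index x."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(out_h):
        # Radial coordinate runs linearly from the inner to the outer circle.
        r = r_in + (r_out - r_in) * i / (out_h - 1)
        row = []
        for j in range(out_w):
            theta = 2 * math.pi * j / out_w
            x = min(max(int(round(cx + r * math.cos(theta))), 0), w - 1)
            y = min(max(int(round(cy + r * math.sin(theta))), 0), h - 1)
            row.append(image[y][x])
        out.append(row)
    return out
```

A subsequent resize of this rectangle would give the fixed preset size (e.g., 128 × 128) mentioned in the text.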
In Step 2, the construction of positive and negative sample pairs belongs to the training process. The training data carry class information: it is known at acquisition time which images belong to the same class (the same eye) and which belong to different classes. In practice this is reflected in the image file names, several characters of which identify the class; all images of the same class (i.e., the same eye) share those identifying characters.
In Step 3, two iris images of the same class are chosen as a positive sample pair and two of different classes as a negative sample pair. Because the number of inter-class comparisons (between iris images of different eyes) far exceeds the number of intra-class comparisons (between iris images of the same eye), unrestricted selection would yield far too few positive pairs and cause the convolutional-neural-network-based model to overfit during training. Therefore, in a specific implementation of Step 3, all intra-class comparisons are made first (i.e., every pair of same-class iris images is taken as a positive sample pair), and then negative sample pairs (inter-class comparisons) comparable in number to the positive pairs are randomly selected from the remaining images.
The constructed positive and negative sample pairs are input into the convolutional neural network through two channels, as shown in Fig. 2, and the resulting output can be expressed by the following formula (reconstructed here from the term definitions, the original equation image being unavailable):

$$F_j = f\left(W_{1,j} \ast X_1 + W_{2,j} \ast X_2 + B_j\right)$$

where $(X_1, X_2)$ is the input image pair, $W_{1,j}$ and $W_{2,j}$ are the $j$-th pair of filters, $B_j$ is a bias term, $\ast$ denotes convolution, and $f$ is the activation function.
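The paired-filter formula above can be sketched numerically as follows. This is an illustrative sketch only: the activation $f$ is assumed to be ReLU (the patent does not specify it), and all names are hypothetical.

```python
def conv2d_valid(img, k):
    """'Valid' 2-D correlation of img with kernel k (both lists of rows)."""
    ih, iw, kh, kw = len(img), len(img[0]), len(k), len(k[0])
    return [[sum(img[y + a][x + b] * k[a][b]
                 for a in range(kh) for b in range(kw))
             for x in range(iw - kw + 1)]
            for y in range(ih - kh + 1)]


def paired_feature_map(x1, x2, w1, w2, b):
    """One feature map of the two-channel input layer:
    F = f(w1 * x1 + w2 * x2 + b), with '*' denoting convolution and
    f assumed to be ReLU. The pair of filters (w1, w2) jointly responds
    to both images, so the map encodes their correlation directly."""
    c1 = conv2d_valid(x1, w1)
    c2 = conv2d_valid(x2, w2)
    return [[max(0.0, v1 + v2 + b) for v1, v2 in zip(r1, r2)]
            for r1, r2 in zip(c1, c2)]
```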
By constructing positive and negative sample pairs from chosen image pairs, the present invention effectively enlarges the sample space and can solve the overfitting problem that easily occurs when training a convolutional neural network on a small-scale database. For example, if the training set has 200 classes with 30 images per class, there are in total 200 × 30 × (30 − 1) / 2 = 87000 intra-class sample pairs (i.e., positive sample pairs).
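The pair count above follows directly from the handshake formula; a one-line check (illustrative, with a hypothetical function name):

```python
def intra_class_pairs(num_classes, images_per_class):
    """Number of intra-class (positive) pairs available for training:
    each class of n images contributes n*(n-1)/2 pairs."""
    n = images_per_class
    return num_classes * n * (n - 1) // 2
```

With 200 classes of 30 images each this gives the 87000 pairs stated in the text.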
In Step 3, training the preset convolutional neural network specifically comprises: taking the constructed image pairs (positive and negative sample pairs) as input; performing convolution, pooling, and fully connected operations layer by layer to obtain the output of the last layer, i.e., the matching result; comparing this output with the true label to obtain the error; and training the convolutional neural network model with the error back-propagation algorithm until the model converges.
In a specific implementation of Step 3, the constructed image pairs are taken as input to the preset convolutional neural network, and convolution, pooling, and fully connected operations are performed layer by layer to obtain the output of the last layer, i.e., the matching result, so that its error with respect to the true label can be computed.
It should be noted that said matching result is the output value of the last layer of the default convolutional neural network. This output value is a two-dimensional vector whose two components respectively represent the probability that the input image pair belongs to the same class and the probability that it belongs to different classes. Through the automatic learning and backward adjustment of the default convolutional neural network, the output of the default convolutional neural network can be made to approach the true label as closely as possible. The purpose is to verify whether the two input images are of the same class or of different classes.
It should also be noted that said true label is a label used to characterize whether the input image pair belongs to the same class. According to the known classification information of the input images, it can be determined whether the two input images belong to the same class. This true label is in fact the learning target of the convolutional neural network; it is used to supervise the learning and training process of the convolutional neural network, so that the output of the neural network approaches the true label as closely as possible.
In the present invention, said training is the learning process of a neural network, i.e., the process of adjusting the network's own parameters by comparing the error between the output of the neural network and the true label.
In the present invention, said forward-propagation algorithm refers to the process in which the neural network computes forward, layer by layer. Said back-propagation algorithm refers to the process of adjusting the network parameters backward, layer by layer, according to the error between the network output and the true label; this operation is carried out to optimize the network parameters and to reduce the error between the output and the true label.
In the present invention, when the error between the output of the default convolutional neural network model on the training set and the true label falls below a certain threshold, the model is considered to have converged. The setting of this threshold depends on the experience of the user and on the training set used.
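The "train until the training-set error falls below a threshold" rule can be illustrated on a deliberately tiny model; the linear model, learning rate and threshold below are stand-ins for the patent's CNN and user-chosen settings, shown only to make the convergence criterion concrete:

```python
import numpy as np

def train_until_converged(x, y, lr=0.1, threshold=1e-3, max_epochs=10000):
    """Gradient-descent loop that stops once the training error is below
    a user-chosen threshold, mirroring the convergence rule in the text.
    A toy linear model stands in for the convolutional neural network.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=x.shape[1])
    b = 0.0
    for epoch in range(max_epochs):
        pred = x @ w + b
        err = pred - y
        loss = float(np.mean(err ** 2))
        if loss < threshold:          # model is considered converged
            return w, b, loss, epoch
        # error back-propagation: gradient of the MSE w.r.t. the parameters
        w -= lr * 2 * x.T @ err / len(y)
        b -= lr * 2 * float(np.mean(err))
    return w, b, loss, max_epochs
```

In the patent's setting the same loop shape applies, except that the forward pass is the layer-by-layer CNN computation and the gradients flow backward through every layer.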
In the fourth step, in specific implementation, said image preprocessing performed on the pair of human eye images to be tested for iris recognition is consistent with the processing procedure of the second step, and specifically includes the following steps:

A human eye detector is applied to the pair of eye images to be tested for iris recognition to detect whether a human eye appears; if so, the approximate position and scale of the human eye are given, edge detection is then used to locate the inner and outer circular boundaries of the iris to obtain the inner and outer circle centers and radii, the iris image within the eye image is obtained, and the iris image is then normalized to obtain iris images of the same size, e.g., iris images of 128 × 128 pixels.
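The final normalization step above (unwrapping the annular iris region between the two located circles into a fixed-size rectangle) can be sketched as follows; the nearest-neighbour sampling and the function name are illustrative choices in the spirit of the rubber-sheet unwrapping mentioned later in the text, not the patent's exact procedure:

```python
import numpy as np

def normalize_iris(image, cx, cy, r_in, r_out, out_h=128, out_w=128):
    """Unwrap the annulus between the inner (pupil) and outer iris circles
    into a fixed-size rectangle: rows sweep radius from r_in to r_out,
    columns sweep angle over a full turn. Nearest-neighbour sampling only.
    """
    h, w = image.shape
    thetas = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radii = np.linspace(r_in, r_out, out_h)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    return image[ys, xs]
```

Whatever the circle parameters found by edge detection, the output is always the same 128 × 128 size, which is what lets all iris images enter the network uniformly.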
In the fifth step, in specific implementation, after the iris image pair to be tested has undergone image preprocessing and sample construction in the manner of the second and third steps, it is input into the default convolutional neural network that has completed training in the third step, and the correlation of the input image pair (i.e., the correlation score of each positive sample pair and each negative sample pair formed from the input iris images to be tested) can be obtained directly. As shown in Fig. 3 and Fig. 4, the correlation maps of between-class comparisons (iris images not belonging to the same eye) and within-class comparisons (iris images belonging to the same eye) are shown respectively (it should be noted that the left-eye iris and the right-eye iris of one person differ, so the left-eye and right-eye irises of one person constitute two classes), wherein a dark color (black) represents a low response, meaning a higher similarity; therefore, the correlation map of between-class comparisons is lighter in color and the correlation map of within-class comparisons is darker.
In the sixth step, in specific implementation, said iris image pair to be tested is translated several times to obtain a plurality of corresponding iris image pairs to be tested. Let X1 and X2 denote an input iris image pair to be tested; translating the input image to be tested one pixel to the left gives X1L, and one pixel to the right gives X1R. Taking the order of the input iris image pair into account at the same time, six groups of input image pairs are produced in total: X1-X2, X1L-X2, X1R-X2, X2-X1, X2-X1L, X2-X1R. Afterwards, score-level fusion is applied to the six groups of results, e.g., by taking the average, minimum or maximum, so as to obtain the final correlation score (similarity) of the input iris image pair to be tested and thereby the final iris recognition result.
In specific implementation, the fusion processing of said plurality of correlation scores includes: taking the average, minimum or maximum of the plurality of correlation scores.
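The six-pair translation scheme and score-level fusion can be sketched as follows; `score_fn` is a stand-in for the trained two-channel network, whose details are not reproduced here, and the zero-padding of the vacated column is an illustrative choice:

```python
import numpy as np

def shift(img, dx):
    """Translate the image horizontally by dx pixels, zero-padding the
    vacated column (dx < 0 shifts left, dx > 0 shifts right)."""
    out = np.zeros_like(img)
    if dx > 0:
        out[:, dx:] = img[:, :-dx]
    elif dx < 0:
        out[:, :dx] = img[:, -dx:]
    else:
        out = img.copy()
    return out

def fused_score(x1, x2, score_fn, fuse="mean"):
    """Score the six ordered pairs listed in the text
    (X1-X2, X1L-X2, X1R-X2, X2-X1, X2-X1L, X2-X1R) and fuse at score level."""
    x1l, x1r = shift(x1, -1), shift(x1, 1)
    pairs = [(x1, x2), (x1l, x2), (x1r, x2),
             (x2, x1), (x2, x1l), (x2, x1r)]
    scores = np.array([score_fn(a, b) for a, b in pairs])
    return {"mean": scores.mean(),
            "min": scores.min(),
            "max": scores.max()}[fuse]
```

Score-level fusion over the translated copies is what gives the method its tolerance to the small localization differences discussed below.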
It should be noted that when the correlation score (similarity) of the iris image pair to be tested reaches a preset value, the two iris images to be tested in the pair can be judged to belong to the same person; they have a very high correlation and belong to the same class. Otherwise, they belong to different classes.
In the sixth step, it should be noted that, in actual use, it is difficult to calibrate the localization of an iris image exactly. To simulate this difference, a disturbance may be added artificially, e.g., by translating the image several pixels in different directions during training.
Based on the iris recognition method in an eye image provided by the present invention described above, and referring to Fig. 4, the present invention also provides an iris recognition device in an eye image, comprising:
a network establishing unit 501, configured to establish a default convolutional neural network, said convolutional neural network comprising, in order for processing the input images, an image-pair input layer, a plurality of default convolutional layers, a plurality of default pooling layers, and a default fully connected layer;

an image preselection unit 502, configured to preselect a plurality of eye images, perform the image preprocessing operation on the plurality of eye images to obtain a plurality of iris images of default size, and then send them to the network training unit 503;

a network training unit 503, connected to the network establishing unit 501 and the image preselection unit 502 respectively, configured to, from the plurality of iris images processed by said image preselection unit 502 and according to the preset class of each iris image, choose any two iris images of the same class (i.e., of the same kind, belonging to the same eye) as a positive sample pair and any two iris images of different classes (i.e., of different kinds, not belonging to the same eye) as a negative sample pair, input them in the manner of two channels (one image per channel) into the default convolutional neural network established by said network establishing unit 501 (as shown in Fig. 2), and train said default convolutional neural network until the model of said default convolutional neural network converges;
an image preprocessing unit 504, configured to perform said image preprocessing operation (the same image preprocessing operation as in the image preselection unit 502) on a pair of eye images to be tested for iris recognition, obtain a corresponding iris image pair to be tested of default size, and then send it to the image classification and recognition judging unit 505;

an image classification and recognition judging unit 505, connected to the network training unit 503 and the image preprocessing unit 504 respectively, configured to input the iris image pair to be tested processed by said image preprocessing unit 504, in the manner of two channels, into the default convolutional neural network that has completed training in said network training unit 503, obtain the correlation score of the input iris image pair to be tested (i.e., the similarity, as a percentage, between the two images of the pair), and judge whether the correlation score of said iris image pair to be tested lies within the default within-class correlation score range: if so, the iris images to be tested are judged to be of the same class (i.e., similar, iris images belonging to the same eye); otherwise, they are judged to be of different classes.
In the present invention, in specific implementation, said default within-class correlation score range can be configured in advance according to the needs of the user, e.g., 60%~100%.
In specific implementation, for the present invention, said image classification and recognition judging unit 505 is further configured to translate said iris image pair to be tested several times to obtain a plurality of corresponding iris image pairs to be tested, input them into the default convolutional neural network that has completed training in the network training unit 503, obtain the correlation scores of the plurality of input iris image pairs to be tested, then perform score fusion processing on the plurality of correlation scores (e.g., by methods such as average, minimum or maximum), and output the final iris recognition result. That is to say, the plurality of iris images to be tested processed by the image preprocessing unit 504 (i.e., after normalization) are translated and then input into the default convolutional neural network that has completed training in the network training unit 503; the correlation scores of the plurality of input iris image pairs to be tested are obtained; score fusion processing is then performed on the plurality of correlation scores (e.g., by methods such as average, minimum or maximum); and the final iris recognition result is output.

In specific implementation, the fusion processing of said plurality of correlation scores includes: taking the average, minimum or maximum of the plurality of correlation scores.
In the present invention, said network establishing unit 501, image preselection unit 502, network training unit 503, image preprocessing unit 504 and image classification and recognition judging unit 505 may each be a central processing unit (CPU), digital signal processor (DSP) or microcontroller unit (MCU) installed on the mainboard of the device of the present invention.

In the present invention, said network establishing unit 501, image preselection unit 502, network training unit 503, image preprocessing unit 504 and image classification and recognition judging unit 505 may be devices provided separately, or may be integrated together.
It should be noted that the present invention takes into account the rotational difference of the iris images to be tested. Therefore, the iris images to be tested that have undergone the image preprocessing operation in the image preselection unit 502 (i.e., after normalization) are translated, and the translated iris image pairs to be tested are also input into the trained convolutional neural network, so that the correlation scores of the plurality of input iris image pairs to be tested can be obtained; score fusion processing is then performed on the plurality of correlation scores (e.g., by methods such as average, minimum or maximum), thereby overcoming the adverse effect brought about by rotational difference, and the final iris recognition result is output.
In the present invention, it should be noted that, in practical applications, iris recognition technology still faces many challenges, especially in uncontrolled scenarios such as long-distance and mobile-terminal applications, where interference such as illumination and distance changes, strong noise, low resolution and blur exist. In addition, with the ubiquitous development of iris image acquisition devices, multi-source heterogeneous iris recognition has also moved beyond the processing capability of traditional algorithms. Traditional iris feature extraction methods are based on manually designed filters, which are not only time-consuming and labor-intensive but also often fail to reach the optimal result. Methods that obtain optimal filter parameters through feature selection also have the drawback of needing to produce a high-dimensional, over-complete feature pool. The present invention proposes an iris recognition method based on a convolutional neural network, which takes a pair of iris images as input, obtains the correlation score of the input image pair directly in an end-to-end manner, and judges whether the pair is within-class or between-class; at the same time, this method also solves the over-fitting problem that easily occurs when a convolutional neural network is trained on a small-scale database.
In the present invention, it should be noted that said default convolutional neural network comprises, in order for processing the input images, an image-pair input layer, a plurality of default convolutional layers, a plurality of default pooling layers, and a default fully connected layer. Said output layer refers to the last layer of the network. Said image-pair input layer is used to input two iris images into said default convolutional neural network in the manner of two channels (one image per channel); the purpose of inputting an image pair is to measure their correlation directly. Said convolutional layers are used to convolve the input images; each convolution filter shares the same parameters, which reduces the parameter count of the network model, and the feature maps of the input image pair can be obtained through the convolutional layers. Said pooling layers employ max pooling and average pooling, which can reduce the amount of data to be processed while ensuring that the extracted features possess spatial invariance. Said fully connected layer converts the high-dimensional features into a more compact one-dimensional feature vector by means of full connection.
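A minimal sketch of the two-channel first layer and a pooling layer described above, in plain NumPy; the ReLU activation, filter shapes and loop-based convolution are illustrative assumptions made only to show how the paired filters combine the two input images:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2-D cross-correlation (no padding, stride 1)."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def two_channel_layer(x1, x2, w1, w2, b):
    """First layer of the two-channel network: output map j combines the
    paired filters applied to the two input images,
    Fj = relu(W1,j * X1 + W2,j * X2 + Bj)."""
    maps = [np.maximum(conv2d_valid(x1, w1[j]) + conv2d_valid(x2, w2[j]) + b[j],
                       0)
            for j in range(len(b))]
    return np.stack(maps)

def max_pool2(x):
    """2x2 max pooling, reducing the amount of data to process."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    v = x[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return v.max(axis=(1, 3))
```

Because each paired filter sees both images at once, the very first feature maps already encode the relationship between the pair, which is what makes the direct correlation output possible.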
In specific implementation, for each layer comprised in said default convolutional neural network, the output of the preceding layer is the input of the following layer.
For the present invention, it should be noted that said default convolutional neural network inputs a pair of iris images in the manner of two channels and can directly obtain their correlation score, judging whether they are of the same class. Through the automatic learning of the neural network, the drawback of traditional methods, in which manually designing filters is time-consuming and labor-intensive, is overcome. Weight sharing can reduce the network parameters, and the pooling layers can reduce the amount of data while obtaining features that possess spatial invariance.
In the image preselection unit 502, in specific implementation, said image preprocessing comprises the following steps:

A human eye detector is applied to the preselected or acquired eye image to detect whether a human eye appears; if so, the approximate position and scale of the human eye are given, edge detection is then used to locate the inner and outer circular boundaries of the iris to obtain the inner and outer circle centers and radii, the iris image within the eye image is obtained, and the iris image is then normalized to obtain iris images of the same size, e.g., iris images of 128 × 128 pixels.
In the present invention, in specific implementation, said human eye detector is an existing human eye detector, for example the human eye detector based on Haar-like features and AdaBoost proposed by Viola et al., which detects whether there is a human eye in an image.
In the present invention, in specific implementation, giving the approximate position and scale of the human eye on the image means giving the bounding box of the human eye by means of the human eye detector.
In the present invention, in specific implementation, said edge detection may be an existing general-purpose edge detection method, for example the gradient-based general edge detection proposed by Wildes et al. for detecting iris edge points, after which a Hough transform is applied to the obtained edge points, thereby obtaining the curve parameters of the inner and outer iris boundaries.
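The circle fitting via a Hough transform over edge points can be sketched as a brute-force voting search; the search grids and the single-circle simplification (a real implementation fits the inner and outer boundaries separately) are illustrative:

```python
import numpy as np

def hough_circle(edge_points, center_range, radius_range):
    """Fit one circle to (x, y) edge points by Hough voting over a grid of
    candidate centres and integer radii: each point votes for the radius
    nearest to its distance from the candidate centre, and the best-voted
    (cx, cy, r) triple is returned."""
    best, best_votes = None, -1
    radii = range(radius_range[0], radius_range[1] + 1)
    for cx in center_range:
        for cy in center_range:
            d = np.round(np.hypot(edge_points[:, 0] - cx,
                                  edge_points[:, 1] - cy)).astype(int)
            for r in radii:
                votes = int(np.sum(d == r))
                if votes > best_votes:
                    best, best_votes = (cx, cy, r), votes
    return best
```

Running this once on the pupil edge points and once on the outer iris edge points yields the inner and outer circle centers and radii used for normalization.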
In the present invention, in specific implementation, said inner and outer circle centers and radii can be obtained by edge detection and are used for iris normalization.
In the present invention, in specific implementation, normalizing the iris image means unfolding the annular iris into a rectangular shape, e.g., using the rubber-sheet model proposed by Daugman. The purpose of normalization is to adjust the iris to a fixed size and to reduce the influence of iris deformation as far as possible.
For the present invention, in the image preselection unit 502, for one image, the position of the eye is first obtained by the human eye detector, edge detection is then used to obtain the inner and outer circle centers and radii of the iris from the eye image, iris normalization is then carried out according to the circle centers and radii, and an iris image of the same size is obtained.
For the present invention, in the image preselection unit 502, the construction of positive sample pairs and negative sample pairs belongs to the training process. The training data contain classification information, i.e., it is known at the time of image acquisition which images are of the same class (belong to the same eye) and which images are of different classes. This is embodied in the image names: several characters of the image name are used to identify the classification information, and those identifying characters are identical in the names of all images of the same class (i.e., the same eye).
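The filename-based labelling convention can be sketched as follows; the exact prefix length and naming scheme are hypothetical, since the patent does not fix them:

```python
def class_id_from_name(filename, id_len=7):
    """Extract the class identifier from an image file name, assuming (as the
    text describes) that a fixed-length prefix of the name encodes the class.
    The scheme 'SSSSS_E_NN.png' (subject, eye, shot number) is hypothetical."""
    stem = filename.rsplit(".", 1)[0]
    return stem[:id_len]

def same_class(name_a, name_b, id_len=7):
    """Two images belong to the same class (the same eye) iff their
    identifying prefixes match; left and right eyes are distinct classes."""
    return class_id_from_name(name_a, id_len) == class_id_from_name(name_b, id_len)
```

Under this convention the left and right eye of one subject get different identifiers, matching the text's remark that they constitute two classes.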
In the network training unit 503, the specific processing operation is: two iris images of the same class are chosen as a positive sample pair, and two iris images of different classes are chosen as a negative sample pair. Since the number of between-class comparisons (i.e., comparisons between iris images not belonging to the same eye, that is, between iris images of different classes) will far exceed the number of within-class comparisons (i.e., comparisons between iris images belonging to the same eye, that is, between iris images of the same class), choosing the pairs entirely at random would result in very few positive sample pairs, which would in turn cause the model based on the convolutional neural network to overfit during training. Therefore, for the present invention, in said network training unit 503, in specific implementation, the full within-class comparison is performed first (i.e., from the plurality of iris images, any two iris images of the same class are first chosen as a positive sample pair, and every such pair is compared), and then, from the remaining iris images, a number of negative sample pairs (between-class comparisons) comparable to the number of positive sample pairs (within-class comparisons) is randomly selected;
The constructed positive and negative sample pairs are input into the convolutional neural network in the manner of two channels, as shown in Fig. 2, and the resulting output can be expressed by the following formula:

Fj = W1,j * X1 + W2,j * X2 + Bj

wherein (X1, X2) is the input image pair, W1,j and W2,j are the j-th pair of paired filters, * denotes convolution, and Bj is the bias term.
For the present invention, by constructing both positive sample pairs and negative sample pairs from the chosen images, the sample space is effectively enlarged, which can solve the over-fitting problem that easily occurs when a convolutional neural network is trained on a small-scale database. For example, if the training images comprise 200 classes with 30 images per class, there are in total 200*30*(30-1)/2 = 87000 within-class sample pairs (i.e., positive sample pairs).
In the network training unit 503, the step of training said default convolutional neural network is specifically as follows: the constructed image pairs (positive sample pairs and negative sample pairs) are used as the input of the default convolutional neural network; operations such as convolution, pooling and full connection are carried out layer by layer to obtain the output of the last layer, i.e., the matching result; the error between the matching result and the true label is compared; and the convolutional neural network model is trained according to the error back-propagation algorithm until the model converges.
In the network training unit 503, in specific implementation, the constructed image pairs are used as the input of the default convolutional neural network, and operations such as convolution, pooling and full connection are carried out layer by layer to obtain the output of the last layer, i.e., the matching result, so that the error between the matching result and the true label can be compared.
It should be noted that said matching result is the output value of the last layer of the default convolutional neural network. This output value is a two-dimensional vector whose two components respectively represent the probability that the input image pair belongs to the same class and the probability that it belongs to different classes. Through the automatic learning and backward adjustment of the default convolutional neural network, the output of the default convolutional neural network can be made to approach the true label as closely as possible. The purpose is to verify whether the two input images are of the same class or of different classes.

It should also be noted that said true label is a label used to characterize whether the input image pair belongs to the same class. According to the known classification information of the input images, it can be determined whether the two input images belong to the same class. This true label is in fact the learning target of the convolutional neural network; it is used to supervise the learning and training process of the convolutional neural network, so that the output of the neural network approaches the true label as closely as possible.
In the present invention, said training is the learning process of a neural network, i.e., the process of adjusting the network's own parameters by comparing the error between the output of the neural network and the true label.

In the present invention, said forward-propagation algorithm refers to the process in which the neural network computes forward, layer by layer. Said back-propagation algorithm refers to the process of adjusting the network parameters backward, layer by layer, according to the error between the network output and the true label; this operation is carried out to optimize the network parameters and to reduce the error between the output and the true label.

In the present invention, when the error between the output of the default convolutional neural network model on the training set and the true label falls below a certain threshold, the model is considered to have converged. The setting of this threshold depends on the experience of the user and on the training set used.
In the image preprocessing unit 504, in specific implementation, said image preprocessing performed on the pair of eye images to be tested for iris recognition is consistent with the image preprocessing operation in the image preselection unit 502, and specifically includes the following steps:

A human eye detector is applied to the pair of eye images to be tested for iris recognition to detect whether a human eye appears; if so, the approximate position and scale of the human eye are given, edge detection is then used to locate the inner and outer circular boundaries of the iris to obtain the inner and outer circle centers and radii, the iris image within the eye image is obtained, and the iris image is then normalized to obtain iris images of the same size, e.g., iris images of 128 × 128 pixels.
In the image classification and recognition judging unit 505, in specific implementation, after the iris image pair to be tested has undergone image preprocessing and sample construction in the manner of the image preselection unit 502 and the network training unit 503, it is input into the default convolutional neural network that has completed training in said network training unit 503, and the correlation of the input image pair (i.e., the correlation score of each positive sample pair and each negative sample pair formed from the input iris images to be tested) can be obtained directly. As shown in Fig. 3 and Fig. 4, the correlation maps of between-class comparisons (iris images not belonging to the same eye) and within-class comparisons (iris images belonging to the same eye) are shown respectively (it should be noted that the left-eye iris and the right-eye iris of one person differ, so the left-eye and right-eye irises of one person constitute two classes), wherein a dark color (black) represents a low response, meaning a higher similarity; therefore, the correlation map of between-class comparisons is lighter in color and the correlation map of within-class comparisons is darker.
In the image classification and recognition judging unit 505, in specific implementation, said iris image pair to be tested is translated several times to obtain a plurality of corresponding iris image pairs to be tested. Let X1 and X2 denote an input iris image pair to be tested; translating the input image to be tested one pixel to the left gives X1L, and one pixel to the right gives X1R. Taking the order of the input iris image pair into account at the same time, six groups of input image pairs are produced in total: X1-X2, X1L-X2, X1R-X2, X2-X1, X2-X1L, X2-X1R. Afterwards, score-level fusion is applied to the six groups of results, e.g., by taking the average, minimum or maximum, so as to obtain the final correlation score (similarity) of the input iris image pair to be tested and thereby the final iris recognition result.
It should be noted that when the correlation score (similarity) of the iris image pair to be tested reaches a preset value, the two iris images to be tested in the pair can be judged to belong to the same person; they have a very high correlation and belong to the same class. Otherwise, they belong to different classes.

It should also be noted that, in actual use, it is difficult to calibrate the localization of an iris image exactly. To simulate this difference, a disturbance may be added artificially, e.g., by translating the image several pixels in different directions during training.
For a better understanding of the technical solution of the present invention, it is further described below in conjunction with specific embodiments.

Embodiment 1

The iris recognition method in an eye image and the device thereof provided by the present invention, based on a convolutional neural network, are applied to heterogeneous iris recognition.
The present invention can be applied to improve the accuracy of heterogeneous iris recognition. With the development of science and technology and of image acquisition devices, iris images have become multi-source and heterogeneous, differing in aspects such as light-source waveband, sensor, image resolution and acquisition distance, which can cause very large within-class differences, increase the false rejection rate, and greatly reduce the recognition accuracy of the system. For example, a person uses a high-definition iris image acquisition device at close range during registration, but uses a portable iris image acquisition device with no distance restriction during recognition; the iris images used for registration and recognition therefore differ in heterogeneous factors such as resolution and distance. Traditional methods use the same filter template for heterogeneous iris images, without special design for the heterogeneous sources, and often cannot reach the optimum; whereas the present invention, based on a convolutional neural network, can automatically learn suitable filters from the different heterogeneous sources, effectively enlarges the sample space by means of input image pairs, and makes full use of the data, so that a higher accuracy than that of traditional methods can be obtained.
Embodiment 2

The iris recognition method in an eye image and the device thereof provided by the present invention, based on a convolutional neural network, are applied to mobile terminals.

The present invention can be applied to mobile terminals. Mobile devices have been widely used in daily life, e.g., for mobile payment and the storage of personal information, and how to guarantee their security is attracting more and more attention. Compared with methods such as entering a password, biometric recognition has advantages such as good user friendliness and high reliability, and since the iris is the most distinctive modality with the best anti-counterfeiting capability, iris recognition has become a new technique for guaranteeing the security of mobile devices. However, the quality of iris images acquired by mobile terminals is relatively low, with factors such as strong noise, low resolution, defocus and motion blur, so traditional recognition methods find it difficult to achieve high accuracy. With the development of hardware such as GPUs, deep learning methods have become applicable on mobile terminals. The present invention, based on a convolutional neural network, can automatically extract the features most effective for recognition and has higher robustness; moreover, the present invention is an end-to-end recognition method, dispensing with the complicated flow of traditional methods in which features are first extracted and then classified, so the efficiency of the present invention is higher, making it especially suitable for mobile-terminal applications in which the numbers of registration and recognition samples are small.
Therefore, the iris recognition method in an eye image and the device thereof provided by the present invention, being based on a multi-scale fully convolutional neural network, are of great significance for improving the accuracy of iris recognition, and have beneficial effects in the following several aspects:

1. The present invention applies a convolutional neural network to iris recognition for the first time, and can automatically learn the features most effective for recognition without manual participation.

2. The present invention is an end-to-end method, dispensing with the complicated flow of conventional iris recognition; it can directly obtain the correlation score of the input image pair and judge whether the two images are of the same class.

3. Unlike the previous training of convolutional neural networks, which requires a large amount of labeled data, the present invention effectively enlarges the sample space by means of input image pairs, and can solve the over-fitting problem that easily occurs when a convolutional neural network is trained on a small-scale database.
In summary, compared with the prior art, the present invention provides an iris recognition method in an eye image and a device thereof, which can recognize the iris in eye images acquired under both controlled and uncontrolled scenarios in a timely and accurate manner, meet users' requirements for iris recognition, improve users' work efficiency, save people's valuable time, effectively guarantee the accuracy of iris recognition performed on eye images, and are of great practical significance.

Through the application of the technology provided by the present invention, the convenience of people's work and life can be greatly improved, greatly raising people's standard of living.

The above is only the preferred embodiment of the present invention. It should be noted that, for those of ordinary skill in the art, some improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An iris recognition method for a human eye image, characterized by comprising the steps of:
First step: establishing a preset convolutional neural network, the convolutional neural network comprising, for processing an input image pair layer by layer, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer;
Second step: pre-selecting a plurality of eye images and performing image preprocessing on the plurality of eye images to obtain a plurality of iris images of a preset size;
Third step: from the plurality of iris images, according to the preset class of each iris image, choosing any two iris images of the same class as a positive sample pair and any two iris images of different classes as a negative sample pair, inputting each pair as two channels into the preset convolutional neural network, and training the preset convolutional neural network until the model of the preset convolutional neural network converges;
Fourth step: performing the image preprocessing described in the second step on a pair of eye images to be tested that require iris recognition, to obtain a corresponding pair of iris images to be tested of the preset size;
Fifth step: inputting the pair of iris images to be tested, as two channels, into the preset convolutional neural network that completed training in the third step, obtaining the correlation score of the input pair of iris images to be tested, and judging whether the correlation score of the pair of iris images to be tested lies within a preset intra-class correlation score range; if so, judging that the pair of iris images to be tested belongs to the same class, and otherwise judging that the pair belongs to different classes.
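The decision rule of the fifth step can be sketched as follows. This is a minimal illustration, not the patented network: the trained two-channel CNN is replaced by a stand-in normalized cross-correlation scorer, and the intra-class score range `(0.5, 1.0)` is an assumed example value:

```python
import numpy as np

def correlation_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Stand-in for the trained two-channel CNN: normalized
    cross-correlation of two preset-size, preprocessed iris images."""
    a = (img_a - img_a.mean()) / (img_a.std() + 1e-8)
    b = (img_b - img_b.mean()) / (img_b.std() + 1e-8)
    return float((a * b).mean())

def same_class(img_a, img_b, score_range=(0.5, 1.0)):
    """Fifth step: the pair is judged same-class iff its correlation
    score lies within the preset intra-class correlation score range."""
    lo, hi = score_range
    return lo <= correlation_score(img_a, img_b) <= hi

rng = np.random.default_rng(0)
iris = rng.random((64, 64))                               # preprocessed iris, preset size
noisy = iris + 0.05 * rng.standard_normal((64, 64))       # same eye, slight noise
other = rng.random((64, 64))                              # unrelated iris
print(same_class(iris, noisy))   # near-duplicate -> high correlation
print(same_class(iris, other))   # unrelated -> low correlation
```

In the patented method the score comes from the fully connected output of the trained network rather than from raw pixel correlation; only the thresholding logic is the same.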
2. The method of claim 1, characterized by further comprising a sixth step of:
repeatedly translating the pair of iris images to be tested to obtain a plurality of corresponding pairs of iris images to be tested, inputting them into the preset convolutional neural network that completed training in the third step, obtaining the correlation scores of the multiple input pairs of iris images to be tested, then performing score fusion on the multiple groups of correlation scores, and outputting a final iris recognition result.
3. The method of claim 2, characterized in that, in the sixth step, the fusion of the multiple groups of correlation scores comprises: taking the mean, the minimum, or the maximum of the multiple groups of correlation scores.
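The translation-and-fusion scheme of claims 2 and 3 can be sketched as below. The horizontal pixel shifts via `np.roll` and the per-shift scores are hypothetical example values; the patent does not fix the translation amounts:

```python
import numpy as np

def translated_pairs(img_a, img_b, shifts=(-2, -1, 0, 1, 2)):
    """Claim 2: repeatedly translate the test pair to obtain multiple
    pairs (here: horizontal pixel shifts of one image of the pair)."""
    return [(img_a, np.roll(img_b, s, axis=1)) for s in shifts]

def fuse_scores(scores, mode="mean"):
    """Claim 3: fuse the group of correlation scores by mean, min, or max."""
    ops = {"mean": np.mean, "min": np.min, "max": np.max}
    return float(ops[mode](scores))

scores = [0.62, 0.71, 0.93, 0.70, 0.64]   # hypothetical per-shift scores
print(fuse_scores(scores, "mean"))        # ~0.72
print(fuse_scores(scores, "max"))         # 0.93
```

Taking the maximum rewards the best-aligned shift, while the mean smooths out alignment noise; the patent leaves the choice among the three operators open.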
4. The method of any one of claims 1 to 3, characterized in that, in the third step, pairs of any two iris images of the same class are first chosen from the plurality of iris images as positive sample pairs, and then, from the remaining iris images, a number of negative sample pairs equal to the number of positive sample pairs is randomly selected.
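The balanced pair sampling of claim 4 can be sketched as below, under assumptions: image identifiers and class labels are hypothetical toy data, and negatives are drawn from all different-class pairs rather than from a "remaining images" subset, which simplifies the claim:

```python
import random
from itertools import combinations

def build_pairs(labeled_images, seed=0):
    """Claim 4 (simplified): every same-class pair becomes a positive
    sample; an equal number of different-class pairs is drawn at random."""
    positives = [(a, b) for (a, la), (b, lb) in combinations(labeled_images, 2)
                 if la == lb]
    negatives = [(a, b) for (a, la), (b, lb) in combinations(labeled_images, 2)
                 if la != lb]
    rng = random.Random(seed)
    return positives, rng.sample(negatives, k=len(positives))

# toy data: image ids with preset class labels
data = [("i1", "A"), ("i2", "A"), ("i3", "B"), ("i4", "B"), ("i5", "C")]
pos, neg = build_pairs(data)
print(len(pos), len(neg))   # equal counts of positive and negative pairs
```

Balancing the two pair counts keeps the training set from being dominated by negatives, since different-class pairs vastly outnumber same-class pairs.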
5. The method of any one of claims 1 to 3, characterized in that, in the third step, the step of training the preset convolutional neural network comprises: taking the constructed positive sample pairs and negative sample pairs as the input of the preset convolutional neural network; performing convolution, pooling, and fully connected operations layer by layer to obtain the output of the last layer, i.e., the matching result; comparing it with the true label to obtain the error; and training the convolutional neural network model with the error backpropagation algorithm until the model converges.
6. An iris recognition device for a human eye image, characterized by comprising:
a network establishing unit for establishing a preset convolutional neural network, the convolutional neural network comprising, for processing an input image pair layer by layer, an image-pair input layer, a preset number of convolutional layers, a preset number of pooling layers, and a preset fully connected layer;
an image pre-selection unit for pre-selecting a plurality of eye images, performing an image preprocessing operation on the plurality of eye images to obtain a plurality of iris images of a preset size, and then sending the iris images to a network training unit;
the network training unit, connected to the network establishing unit and the image pre-selection unit respectively, for: from the plurality of iris images processed by the image pre-selection unit, according to the preset class of each iris image, choosing any two iris images of the same class as a positive sample pair and any two iris images of different classes as a negative sample pair; inputting each pair as two channels into the preset convolutional neural network established by the network establishing unit; and training the preset convolutional neural network until the model of the preset convolutional neural network converges;
an image preprocessing unit for performing the image preprocessing operation on a pair of eye images to be tested that require iris recognition, obtaining a corresponding pair of iris images to be tested of the preset size, and then sending the pair to an image classification and recognition judging unit;
the image classification and recognition judging unit, connected to the network training unit and the image preprocessing unit respectively, for: inputting the pair of iris images to be tested processed by the image preprocessing unit, as two channels, into the preset convolutional neural network that the network training unit has finished training; obtaining the correlation score of the input pair of iris images to be tested; and judging whether the correlation score lies within a preset intra-class correlation score range; if so, judging that the pair of iris images to be tested belongs to the same class, and otherwise judging that the pair belongs to different classes.
7. The device of claim 6, characterized in that the image classification and recognition judging unit is further configured to: repeatedly translate the pair of iris images to be tested to obtain a plurality of corresponding pairs of iris images to be tested; input them into the preset convolutional neural network that the network training unit has finished training; obtain the correlation scores of the multiple input pairs of iris images to be tested; then perform score fusion on the multiple groups of correlation scores; and output a final iris recognition result.
8. The device of claim 7, characterized in that the fusion of the multiple groups of correlation scores comprises: taking the mean, the minimum, or the maximum of the multiple groups of correlation scores.
9. The device of any one of claims 6 to 8, characterized in that the network training unit is configured to first choose pairs of any two iris images of the same class from the plurality of iris images as positive sample pairs, and then, from the remaining iris images, randomly select a number of negative sample pairs equal to the number of positive sample pairs.
10. The device of any one of claims 6 to 8, characterized in that the network training unit is configured to: take the constructed positive sample pairs and negative sample pairs as the input of the preset convolutional neural network; perform convolution, pooling, and fully connected operations layer by layer to obtain the output of the last layer, i.e., the matching result; compare it with the true label to obtain the error; and train the convolutional neural network model with the error backpropagation algorithm until the model converges.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610776455.1A CN106326874A (en) | 2016-08-30 | 2016-08-30 | Method and device for recognizing iris in human eye images |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106326874A true CN106326874A (en) | 2017-01-11 |
Family
ID=57788476
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610776455.1A Pending CN106326874A (en) | 2016-08-30 | 2016-08-30 | Method and device for recognizing iris in human eye images |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106326874A (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107342962A (en) * | 2017-07-03 | 2017-11-10 | 北京邮电大学 | Deep learning intelligence Analysis On Constellation Map method based on convolutional neural networks |
CN107342810A (en) * | 2017-07-03 | 2017-11-10 | 北京邮电大学 | Deep learning Brilliant Eyes figure analysis method based on convolutional neural networks |
CN107507286A (en) * | 2017-08-02 | 2017-12-22 | 五邑大学 | A kind of bi-mode biology feature based on face and handwritten signature is registered system |
CN107680088A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing medical image |
CN108446631A (en) * | 2018-03-20 | 2018-08-24 | 北京邮电大学 | The smart frequency spectrum figure analysis method of deep learning based on convolutional neural networks |
CN108710832A (en) * | 2018-04-26 | 2018-10-26 | 北京万里红科技股份有限公司 | Reference-free iris image definition detection method |
CN108734102A (en) * | 2018-04-18 | 2018-11-02 | 佛山市顺德区中山大学研究院 | A kind of right and left eyes recognizer based on deep learning |
CN109165586A (en) * | 2018-08-11 | 2019-01-08 | 石修英 | intelligent image processing method for AI chip |
CN109190505A (en) * | 2018-08-11 | 2019-01-11 | 石修英 | The image-recognizing method that view-based access control model understands |
CN109409342A (en) * | 2018-12-11 | 2019-03-01 | 北京万里红科技股份有限公司 | A kind of living iris detection method based on light weight convolutional neural networks |
CN109635669A (en) * | 2018-11-19 | 2019-04-16 | 北京致远慧图科技有限公司 | Image classification method, the training method of device and disaggregated model, device |
CN110110189A (en) * | 2018-02-01 | 2019-08-09 | 北京京东尚科信息技术有限公司 | Method and apparatus for generating information |
CN110276333A (en) * | 2019-06-28 | 2019-09-24 | 上海鹰瞳医疗科技有限公司 | Eyeground identification model training method, eyeground personal identification method and equipment |
CN110321844A (en) * | 2019-07-04 | 2019-10-11 | 北京万里红科技股份有限公司 | A kind of quick iris detection method based on convolutional neural networks |
CN111027464A (en) * | 2019-12-09 | 2020-04-17 | 大连理工大学 | Iris identification method for convolutional neural network and sequence feature coding joint optimization |
CN111161276A (en) * | 2019-11-27 | 2020-05-15 | 天津中科智能识别产业技术研究院有限公司 | Iris normalized image forming method |
CN111401145A (en) * | 2020-02-26 | 2020-07-10 | 三峡大学 | Visible light iris recognition method based on deep learning and DS evidence theory |
CN111723222A (en) * | 2019-03-19 | 2020-09-29 | SAP SE | Image search and training system |
CN112580530A (en) * | 2020-12-22 | 2021-03-30 | 泉州装备制造研究所 | Identity recognition method based on fundus images |
CN113033582A (en) * | 2019-12-09 | 2021-06-25 | 杭州海康威视数字技术股份有限公司 | Model training method, feature extraction method and device |
CN113139404A (en) * | 2020-01-18 | 2021-07-20 | 西安艾瑞生物识别科技有限公司 | Fast recognition technology based on deep learning iris recognition algorithm |
CN113486804A (en) * | 2021-07-07 | 2021-10-08 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
CN113591747A (en) * | 2021-08-06 | 2021-11-02 | 合肥工业大学 | Multi-scene iris recognition method based on deep learning |
CN113706469A (en) * | 2021-07-29 | 2021-11-26 | 天津中科智能识别产业技术研究院有限公司 | Iris automatic segmentation method and system based on multi-model voting mechanism |
CN113780239A (en) * | 2021-09-27 | 2021-12-10 | 上海聚虹光电科技有限公司 | Iris recognition method, iris recognition device, electronic equipment and computer readable medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120016827A1 (en) * | 2010-07-19 | 2012-01-19 | Lockheed Martin Corporation | Biometrics with mental/ physical state determination methods and systems |
CN105760821A (en) * | 2016-01-31 | 2016-07-13 | 中国石油大学(华东) | Classification and aggregation sparse representation face identification method based on nuclear space |
- 2016-08-30: CN CN201610776455.1A patent/CN106326874A/en, active, Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120016827A1 (en) * | 2010-07-19 | 2012-01-19 | Lockheed Martin Corporation | Biometrics with mental/ physical state determination methods and systems |
CN105760821A (en) * | 2016-01-31 | 2016-07-13 | 中国石油大学(华东) | Classification and aggregation sparse representation face identification method based on nuclear space |
Non-Patent Citations (2)
Title |
---|
ABHISHEK GANGWAR et al.: "DeepIrisNet: Deep iris representation with applications in iris recognition and cross-sensor iris recognition", IEEE * |
NIANFENG LIU: "DeepIris: Learning Pairwise Filter Bank for Heterogeneous Iris Verification", Elsevier * |
Cited By (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107342810A (en) * | 2017-07-03 | 2017-11-10 | 北京邮电大学 | Deep learning Brilliant Eyes figure analysis method based on convolutional neural networks |
CN107342810B (en) * | 2017-07-03 | 2019-11-19 | 北京邮电大学 | Deep learning Brilliant Eyes figure analysis method based on convolutional neural networks |
CN107342962A (en) * | 2017-07-03 | 2017-11-10 | 北京邮电大学 | Deep learning intelligence Analysis On Constellation Map method based on convolutional neural networks |
CN107507286A (en) * | 2017-08-02 | 2017-12-22 | 五邑大学 | A kind of bi-mode biology feature based on face and handwritten signature is registered system |
CN107680088A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Method and apparatus for analyzing medical image |
CN110110189A (en) * | 2018-02-01 | 2019-08-09 | 北京京东尚科信息技术有限公司 | Method and apparatus for generating information |
CN108446631A (en) * | 2018-03-20 | 2018-08-24 | 北京邮电大学 | The smart frequency spectrum figure analysis method of deep learning based on convolutional neural networks |
CN108734102A (en) * | 2018-04-18 | 2018-11-02 | 佛山市顺德区中山大学研究院 | A kind of right and left eyes recognizer based on deep learning |
CN108710832B (en) * | 2018-04-26 | 2021-07-30 | 北京万里红科技股份有限公司 | Reference-free iris image definition detection method |
CN108710832A (en) * | 2018-04-26 | 2018-10-26 | 北京万里红科技股份有限公司 | It is a kind of without refer to definition of iris image detection method |
CN109190505A (en) * | 2018-08-11 | 2019-01-11 | 石修英 | The image-recognizing method that view-based access control model understands |
CN109165586A (en) * | 2018-08-11 | 2019-01-08 | 石修英 | intelligent image processing method for AI chip |
CN109165586B (en) * | 2018-08-11 | 2021-09-03 | 湖南科瑞特科技有限公司 | Intelligent image processing method for AI chip |
CN109635669A (en) * | 2018-11-19 | 2019-04-16 | 北京致远慧图科技有限公司 | Image classification method, the training method of device and disaggregated model, device |
CN109409342A (en) * | 2018-12-11 | 2019-03-01 | 北京万里红科技股份有限公司 | A kind of living iris detection method based on light weight convolutional neural networks |
CN111723222A (en) * | 2019-03-19 | 2020-09-29 | SAP SE | Image search and training system |
CN110276333B (en) * | 2019-06-28 | 2021-10-15 | 上海鹰瞳医疗科技有限公司 | Eye ground identity recognition model training method, eye ground identity recognition method and equipment |
CN110276333A (en) * | 2019-06-28 | 2019-09-24 | 上海鹰瞳医疗科技有限公司 | Eyeground identification model training method, eyeground personal identification method and equipment |
CN110321844A (en) * | 2019-07-04 | 2019-10-11 | 北京万里红科技股份有限公司 | A kind of quick iris detection method based on convolutional neural networks |
CN111161276A (en) * | 2019-11-27 | 2020-05-15 | 天津中科智能识别产业技术研究院有限公司 | Iris normalized image forming method |
CN111161276B (en) * | 2019-11-27 | 2023-04-18 | 天津中科智能识别产业技术研究院有限公司 | Iris normalized image forming method |
CN113033582B (en) * | 2019-12-09 | 2023-09-26 | 杭州海康威视数字技术股份有限公司 | Model training method, feature extraction method and device |
CN111027464A (en) * | 2019-12-09 | 2020-04-17 | 大连理工大学 | Iris identification method for convolutional neural network and sequence feature coding joint optimization |
CN111027464B (en) * | 2019-12-09 | 2023-07-18 | 大连理工大学 | Iris recognition method for jointly optimizing convolutional neural network and sequence feature coding |
CN113033582A (en) * | 2019-12-09 | 2021-06-25 | 杭州海康威视数字技术股份有限公司 | Model training method, feature extraction method and device |
CN113139404A (en) * | 2020-01-18 | 2021-07-20 | 西安艾瑞生物识别科技有限公司 | Fast recognition technology based on deep learning iris recognition algorithm |
CN111401145B (en) * | 2020-02-26 | 2022-05-03 | 三峡大学 | Visible light iris recognition method based on deep learning and DS evidence theory |
CN111401145A (en) * | 2020-02-26 | 2020-07-10 | 三峡大学 | Visible light iris recognition method based on deep learning and DS evidence theory |
CN112580530A (en) * | 2020-12-22 | 2021-03-30 | 泉州装备制造研究所 | Identity recognition method based on fundus images |
CN113486804A (en) * | 2021-07-07 | 2021-10-08 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
CN113486804B (en) * | 2021-07-07 | 2024-02-20 | 科大讯飞股份有限公司 | Object identification method, device, equipment and storage medium |
CN113706469A (en) * | 2021-07-29 | 2021-11-26 | 天津中科智能识别产业技术研究院有限公司 | Iris automatic segmentation method and system based on multi-model voting mechanism |
CN113706469B (en) * | 2021-07-29 | 2024-04-05 | 天津中科智能识别产业技术研究院有限公司 | Iris automatic segmentation method and system based on multi-model voting mechanism |
CN113591747A (en) * | 2021-08-06 | 2021-11-02 | 合肥工业大学 | Multi-scene iris recognition method based on deep learning |
CN113591747B (en) * | 2021-08-06 | 2024-02-23 | 合肥工业大学 | Multi-scene iris recognition method based on deep learning |
CN113780239A (en) * | 2021-09-27 | 2021-12-10 | 上海聚虹光电科技有限公司 | Iris recognition method, iris recognition device, electronic equipment and computer readable medium |
CN113780239B (en) * | 2021-09-27 | 2024-03-12 | 上海聚虹光电科技有限公司 | Iris recognition method, iris recognition device, electronic device and computer readable medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106326874A (en) | Method and device for recognizing iris in human eye images | |
CN108268859A (en) | A kind of facial expression recognizing method based on deep learning | |
CN106503687A (en) | The monitor video system for identifying figures of fusion face multi-angle feature and its method | |
CN106778506A (en) | A kind of expression recognition method for merging depth image and multi-channel feature | |
CN104361313B (en) | A kind of gesture identification method merged based on Multiple Kernel Learning heterogeneous characteristic | |
CN107657233A (en) | Static sign language real-time identification method based on modified single multi-target detection device | |
CN104103033B (en) | View synthesis method | |
CN102844766A (en) | Human eyes images based multi-feature fusion identification method | |
CN106548159A (en) | Reticulate pattern facial image recognition method and device based on full convolutional neural networks | |
CN110059741A (en) | Image-recognizing method based on semantic capsule converged network | |
CN101359365A (en) | Iris positioning method based on Maximum between-Cluster Variance and gray scale information | |
CN109359608A (en) | A kind of face identification method based on deep learning model | |
CN105095870A (en) | Pedestrian re-recognition method based on transfer learning | |
CN109117897A (en) | Image processing method, device and readable storage medium storing program for executing based on convolutional neural networks | |
CN104021384B (en) | A kind of face identification method and device | |
CN108921019A (en) | A kind of gait recognition method based on GEI and TripletLoss-DenseNet | |
CN105426695A (en) | Health status detecting system and method based on irises | |
CN109033953A (en) | Training method, equipment and the storage medium of multi-task learning depth network | |
CN108171318A (en) | One kind is based on the convolutional neural networks integrated approach of simulated annealing-Gaussian function | |
CN102136024A (en) | Biometric feature identification performance assessment and diagnosis optimizing system | |
CN113221655B (en) | Face spoofing detection method based on feature space constraint | |
CN106909938A (en) | Viewing angle independence Activity recognition method based on deep learning network | |
CN112906550B (en) | Static gesture recognition method based on watershed transformation | |
CN109977887A (en) | A kind of face identification method of anti-age interference | |
CN110046544A (en) | Digital gesture identification method based on convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication | ||
Application publication date: 2017-01-11