CN107506754A - Iris identification method, device and terminal device - Google Patents

Iris identification method, device and terminal device

Info

Publication number
CN107506754A
CN107506754A (application CN201710846244.5A)
Authority
CN
China
Prior art keywords
iris
image
sub
distance
rectangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710846244.5A
Other languages
Chinese (zh)
Inventor
陈书楷
杨奇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Central Intelligent Information Technology Co Ltd
Original Assignee
Xiamen Central Intelligent Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Central Intelligent Information Technology Co Ltd filed Critical Xiamen Central Intelligent Information Technology Co Ltd
Priority to CN201710846244.5A priority Critical patent/CN107506754A/en
Publication of CN107506754A publication Critical patent/CN107506754A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/193 — Preprocessing; Feature extraction
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/197 — Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention, which is applicable to the technical fields of image processing and pattern recognition, provides an iris identification method, device and terminal device. The method includes: obtaining an eye image of a user; detecting the inner and outer boundaries of the iris using a fast detection algorithm, the annular part formed by the inner and outer boundaries being the region corresponding to the iris; normalizing the annular iris image into a rectangular iris image of a preset size; partitioning the rectangular iris image into a plurality of iris sub-block images, and performing feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image; performing distance weighting on the feature vectors of two corresponding iris sub-block images, mapping the weighted distance to a similarity, and performing iris recognition according to a recognition threshold and the similarity. By partitioning the iris image into blocks, the method solves the problem of low iris recognition accuracy caused by inaccurate iris localization and poor denoising.

Description

Iris identification method, device and terminal device
Technical field
The invention belongs to the technical fields of image processing and pattern recognition, and in particular relates to an iris identification method, device and terminal device.
Background art
Iris recognition is a research hotspot in the field of biometric recognition. It offers a high degree of stability, accuracy and security, and will be widely applied in security devices, mobile terminal devices and the like. In an iris recognition system, the preprocessing of the iris image is crucial to iris recognition; the preprocessing process mainly includes iris localization, denoising and normalization. Only when the iris is localized accurately and interference such as eyelids and eyelashes is removed can effective iris features be extracted and an accurate iris recognition result be obtained.
However, during iris image acquisition, the quality of the collected iris image is often low due to the limitations of the acquisition device and the influence of illumination during acquisition. This brings certain difficulties to iris localization and denoising, and in turn reduces the accuracy of iris recognition. Therefore, an effective iris identification method needs to be designed for the problems of inaccurate iris localization and poor denoising.
Summary of the invention
In view of this, embodiments of the present invention provide an iris identification method, device and terminal device to solve the problem in the prior art that iris recognition accuracy is low due to inaccurate iris localization and poor denoising.
A first aspect of the embodiments of the present invention provides an iris identification method, including:
obtaining an eye image of a user, the eye image including an iris;
detecting the inner and outer boundaries of the iris using a fast detection algorithm, the annular part formed by the inner and outer boundaries being the region corresponding to the iris;
normalizing the annular iris image into a rectangular iris image of a preset size;
partitioning the rectangular iris image into a plurality of iris sub-block images, and performing feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image;
performing distance weighting on the feature vectors of two corresponding iris sub-block images, mapping the weighted distance to a similarity, and performing iris recognition according to a recognition threshold and the similarity.
Optionally, normalizing the annular iris image into a rectangular iris image of a preset size specifically includes:
calculating the average of the center coordinates of the inner and outer circles of the iris and taking this average as the pupil center coordinates (x0, y0); unrolling the annular iris image around the pupil center, according to the ring parametric equation, into a rectangle with a length of 360 pixels and a height equal to the difference r between the inner-circle and outer-circle radii; and then normalizing the unrolled rectangular image into a rectangular iris image of a preset size, where the ring parametric equation is: x = x0 + ρ·cos θ, y = y0 + ρ·sin θ, with ρ ranging from the inner-circle radius to the outer-circle radius and θ from 0° to 360°.
Optionally, the partitioning of the rectangular iris image to obtain a plurality of iris sub-block images specifically includes:
dividing the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers.
Optionally, performing feature extraction on each iris sub-block image specifically includes:
performing interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
Optionally, performing distance weighting on the feature vectors of two corresponding iris sub-block images specifically includes:
calculating the M*N distances between the feature vectors of the two corresponding iris sub-block images, dividing the N distances of each row into m parts, and weighting the distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small.
A second aspect of the embodiments of the present invention provides an iris identification device, including an image acquisition module, an image transformation module, a feature extraction module and a pattern matching module;
the image acquisition module is configured to obtain an eye image of a user and detect the inner and outer boundaries of the iris through a detection algorithm, the annular part formed by the inner and outer boundaries being the annular iris image;
the image transformation module is configured to normalize the annular iris image into a rectangular iris image of a preset size;
the feature extraction module is configured to divide the rectangular iris image into a plurality of iris sub-block images and perform feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image;
the pattern matching module is configured to perform distance weighting on the feature vectors of two corresponding iris sub-block images, map the weighted distance to a similarity, and obtain a recognition result according to a recognition threshold and the similarity.
Optionally, the feature extraction module includes:
an image partitioning unit, configured to divide the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers;
a CNN transformation unit, configured to perform interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
Optionally, the pattern matching module includes:
a distance weighting unit, configured to calculate the distances between the feature vectors of corresponding iris sub-blocks of the two images, divide the N distances of each row into m parts, and weight the distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small;
a similarity calculation unit, configured to map the weighted distance to a similarity;
a recognition unit, configured to compare the similarity with a recognition threshold to obtain a recognition result.
A third aspect of the embodiments of the present invention provides an iris recognition terminal device, including a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of any one of the iris identification methods described above.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of any one of the iris identification methods described above.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: the embodiments of the present invention detect the inner and outer boundaries of the iris using a fast detection algorithm to determine the region corresponding to the iris, normalize the annular iris image into a rectangular iris image of a preset size, partition the rectangular iris image into a plurality of iris sub-block images, perform feature extraction on the iris sub-block images to obtain their feature vectors, perform distance weighting on the feature vectors of two corresponding iris sub-block images, and then map the weighted distance to a similarity for iris recognition. By partitioning the image and weighting the distances between feature vectors, the above iris identification method reduces the influence of inaccurate localization and of impurities in the image on iris recognition, thereby improving iris recognition accuracy.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an iris identification method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of an original image of a user's eye provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of iris localization on the original image provided by an embodiment of the present invention;
Fig. 4 is an example diagram of normalizing an iris image into a rectangular image provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a rectangular iris image unrolled from the vertical direction provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of a rectangular iris image unrolled from the horizontal direction provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of partitioning a rectangular iris image provided by an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a multi-channel convolutional neural network provided by an embodiment of the present invention;
Fig. 9 is a schematic diagram of an iris identification device provided by an embodiment of the present invention;
Fig. 10 is a schematic diagram of an iris recognition terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be clear to those skilled in the art that the present invention can also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known systems, devices, circuits and methods are omitted, so that unnecessary details do not obscure the description of the present invention.
In order to explain the technical solutions of the present invention, specific embodiments are described below.
Embodiment one
Referring to Fig. 1, a schematic flowchart of one embodiment of an iris identification method is provided and described in detail as follows:
Step S101: obtain an eye image of a user, the eye image including an iris.
Step S101 may be implemented as follows: the eyes of the user to be detected are photographed to obtain an image containing the eyes. In a specific implementation, the approximate region of the eye is first determined by a detection algorithm, and the eye region is then photographed one or more times to obtain one or more images containing the eyes. Fig. 2 shows an original image of a user's eye. The acquisition device may be a visible-light camera, but is not limited to a visible-light camera.
Step S102: detect the inner and outer boundaries of the iris using a fast detection algorithm, the annular part formed by the inner and outer boundaries being the region corresponding to the iris.
After the eye image of the user is obtained, the inner and outer boundaries of the iris are detected by a fast detection algorithm. The fast detection algorithm here may be a convolutional neural network algorithm, but is not limited to a convolutional neural network algorithm; any fast detection algorithm with a high recognition rate can be used to localize the iris.
As an example, taking a convolutional neural network, the process of detecting the positions of the inner and outer circles of the iris is described. Detecting the inner-circle position of the iris with a convolutional neural network includes a training stage and a test stage. In the training stage, iris image samples whose inner-circle positions have been labelled are input into the convolutional neural network to obtain the connection weights and biases of the network, and the obtained connection weights and biases are saved. In the test stage, the iris image to be detected is input into the trained convolutional neural network, and the inner-circle position of the iris is located using the trained connection weights and biases. The process of detecting the outer-circle position of the iris with a convolutional neural network is similar to that of detecting the inner-circle position and is not repeated here. As shown in Fig. 3, after the inner and outer boundaries of the iris are determined, the annular part formed by the inner and outer boundaries is the region corresponding to the iris.
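For illustration only, the following Python sketch shows one possible realization of this train/test flow, assuming a small PyTorch regression network that outputs the circle parameters (cx, cy, r); the architecture, the 64*64 input size and the MSE loss are illustrative assumptions and are not specified by the patent:

```python
import torch
import torch.nn as nn

class InnerCircleRegressor(nn.Module):
    """Tiny CNN that regresses the inner-circle parameters (cx, cy, r)
    of the iris from a downscaled 64x64 grayscale eye image (a sketch only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 x 32 x 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 x 16 x 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, 3),             # (cx, cy, r), e.g. normalized to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, images, circles):
    """Training stage: fit the connection weights and biases on labelled samples."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), circles)
    loss.backward()
    optimizer.step()
    return loss.item()

# Test stage: feed an eye image through the trained network to locate the circle.
# pred = model(eye_tensor)  # -> tensor([cx, cy, r])
```

An outer-circle regressor would follow the same pattern with its own labelled samples.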
Step S103: normalize the annular iris image into a rectangular iris image of a preset size.
During image acquisition, the annular iris region is affected by the illumination conditions, which causes the pupil to dilate or contract, changes the area of the iris region, and thus has an important influence on the iris recognition result. It is therefore necessary to convert annular iris images of different scales into a rectangular iris image of a preset size. Since the iris region is an annular image and the computer processes images in rectangular coordinates, the annular iris image is mapped to a rectangular iris image according to the mapping relationship between polar coordinates and rectangular coordinates.
Optionally, normalizing the annular iris image into a rectangular iris image of a preset size specifically includes:
calculating the average of the center coordinates of the inner and outer circles of the iris and taking this average as the pupil center coordinates (x0, y0); unrolling the annular iris image around the pupil center, according to the ring parametric equation, into a rectangle with a length of 360 pixels and a height equal to the difference r between the inner-circle and outer-circle radii; and then normalizing the unrolled rectangular image into a rectangular iris image of a preset size, where the ring parametric equation is: x = x0 + ρ·cos θ, y = y0 + ρ·sin θ, with ρ ranging from the inner-circle radius to the outer-circle radius and θ from 0° to 360°.
As an example, the process of converting the annular iris image into a rectangular iris image is as follows. First, with a step of 1 degree, the inner and outer boundaries of the iris are evenly divided into 360 parts starting from the 90-degree direction. The division points of the inner boundary are denoted A1, A2, ..., An (1 ≤ n ≤ 360) and those of the outer boundary B1, B2, ..., Bn (1 ≤ n ≤ 360). The line from a division point of the inner boundary to the corresponding division point of the outer boundary is the affine line AiBi, and these affine lines divide the iris region into 360 parts. Each affine line AiBi is then divided into r parts along the ring radius, forming a 360*r grid of pixels, and the coordinates of each pixel can be determined according to the ring parametric equation. The points determined by the ring parametric equation have grey values within a certain range; therefore, the grey values of the ring are mapped to obtain the corresponding rectangular iris image, which improves the display effect of the resulting rectangular iris image. The annular iris image is then unrolled counter-clockwise starting from the vertically upward line, and the unrolled image is the normalized rectangular iris image. It should be noted that, as shown in Fig. 4, if the iris image is unrolled starting from the vertically upward line, a rectangular image as shown in Fig. 5 is obtained. As can be seen from the resulting rectangular image, this operation distributes the areas where impurities may appear to the left and right edge parts of the rectangular iris image, so that in the subsequent processing, by setting the weighting parameters, the contribution of the parts where impurity regions may appear to the whole iris image can be reduced. Otherwise, if, as shown in Fig. 4, the annular iris image is unrolled clockwise starting from the positive x-axis direction, a rectangular iris image as shown in Fig. 6 is obtained; the impurity parts (such as eyelids and eyelashes) are then concentrated on one side, their contribution to the iris image cannot be effectively reduced when the weighting parameters are subsequently set, and the iris recognition result is affected.
Since the pupil sizes of different users differ, the widths of the rectangular iris images obtained by converting the annular iris images also differ. To facilitate the subsequent partitioning of the rectangular iris image during feature extraction, the unrolled rectangular image is normalized to a preset size; specifically, some pixel values can be obtained by interpolation. For example, the size of each rectangular image is normalized to 360*80.
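The unrolling and normalization described above can be sketched as follows in Python; the function name `unroll_iris` and the use of NumPy and OpenCV are assumptions for illustration only:

```python
import numpy as np
import cv2  # only used for the final resize

def unroll_iris(eye_img, inner, outer, out_size=(360, 80)):
    """Unroll the annular iris region into a rectangle (rubber-sheet style).

    eye_img : 2-D grayscale eye image.
    inner, outer : (cx, cy, radius) of the detected inner/outer circles.
    out_size : (width, height) of the normalized rectangle, e.g. 360x80.
    """
    (xi, yi, ri), (xo, yo, ro) = inner, outer
    # Pupil center taken as the average of the two circle centers (as described above).
    x0, y0 = (xi + xo) / 2.0, (yi + yo) / 2.0
    r = int(round(ro - ri))                 # height of the raw unrolled strip

    # Start at the 90-degree (vertically upward) direction and sweep counter-clockwise,
    # so likely impurity regions land near the left/right edges of the strip.
    thetas = np.deg2rad(90 + np.arange(360))
    rows = []
    for k in range(r):
        rho = ri + k                        # radius of the k-th ring sample
        xs = np.clip((x0 + rho * np.cos(thetas)).astype(int), 0, eye_img.shape[1] - 1)
        ys = np.clip((y0 - rho * np.sin(thetas)).astype(int), 0, eye_img.shape[0] - 1)
        rows.append(eye_img[ys, xs])
    strip = np.stack(rows, axis=0)          # shape (r, 360)

    # Normalize to the preset size; interpolation supplies the missing pixel values.
    return cv2.resize(strip, out_size, interpolation=cv2.INTER_LINEAR)
```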
Step S104: partition the rectangular iris image into a plurality of iris sub-block images, and perform feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image.
The whole iris region contains noise such as light spots, eyelids and eyelashes. Therefore, when feature extraction is performed only on iris sub-blocks that contain less noise, the influence of noise on the recognition result can be reduced. Furthermore, in this application the iris image is partitioned so that adjacent sub-blocks partially overlap; overlapping partitioning reduces the loss of iris information and at the same time strengthens the features of the iris image near the middle region.
Optionally, after the rectangular iris image is obtained, the iris image is partitioned. The partitioning of the rectangular iris image to obtain a plurality of iris sub-block images specifically includes:
dividing the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers.
As an example, referring to Fig. 7, squares of different thicknesses represent iris sub-blocks in different rows. When the size of the rectangular iris image is 360*80, the length and width of each iris sub-block are set to 40 and the overlapping part of two adjacent iris sub-blocks has a length of 20, so the whole iris image is divided into 3*17 iris sub-blocks, with adjacent sub-blocks partially overlapping. The length and width of the iris sub-blocks and the number of sub-blocks are not limited here; the sub-block parameters can be adjusted according to experimental results until a suitable sub-block size is found.
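A minimal sketch of this overlapping partitioning (40*40 blocks with a 20-pixel overlap over a 360*80 image), assuming NumPy; the function name is illustrative:

```python
import numpy as np

def partition_blocks(rect_iris, block=40, stride=20):
    """Split the normalized rectangular iris image into overlapping sub-blocks.

    For a 360x80 image with 40x40 blocks and a 20-pixel overlap this yields
    3 rows x 17 columns of sub-blocks, as in the example above.
    """
    h, w = rect_iris.shape
    rows = (h - block) // stride + 1    # 3 for h = 80
    cols = (w - block) // stride + 1    # 17 for w = 360
    blocks = np.empty((rows, cols, block, block), dtype=rect_iris.dtype)
    for i in range(rows):
        for j in range(cols):
            y, x = i * stride, j * stride
            blocks[i, j] = rect_iris[y:y + block, x:x + block]
    return blocks                       # shape (M, N, 40, 40)
```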
Optionally, performing feature extraction on each iris sub-block image specifically includes:
performing interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
A convolutional neural network has the advantages of local connections, weight sharing and translation invariance after pooling, and achieves good results in feature extraction. Fig. 8 is a schematic structural diagram of the multi-channel convolutional neural network provided by the present invention.
In this embodiment, the convolutional neural network model may include two convolutional layers. The number of channels of the first convolutional layer is set to 8, its convolution kernel to 5*5, and its convolution stride to one pixel. The number of channels of the second convolutional layer is set to 16, its convolution kernel to 3*3, and its convolution stride to one pixel.
Preferably, a first max-pooling layer is arranged after the first convolutional layer; its sampling window is set to 2*2 and its sliding stride to 2. A second max-pooling layer is arranged after the second convolutional layer; its sampling window is set to 4*4 and its sliding stride to 4.
Preferably, in the convolutional neural network model, a ReLU activation function, i.e. a rectified linear unit, is arranged after each convolutional layer. Compared with the conventional sigmoid and tanh activation functions, using ReLU as the activation function better matches the characteristics of biological neurons and makes learning and optimization easier.
Preferably, in the convolutional neural network model, a fully connected layer is arranged after the second max-pooling layer, and the number of neurons of the fully connected layer is 256.
Preferably, in the convolutional neural network model, an output layer is arranged after the fully connected layer, and the number of neurons of the output layer is 20.
Therefore, a 40*40 iris sub-block yields 8 first iris sub-block feature maps of size 36*36 after the first convolutional layer of the convolutional neural network model, then 8 second iris sub-block feature maps of size 18*18 after the first max-pooling layer, then 16 third iris sub-block feature maps of size 16*16 after the second convolutional layer, and then 16 fourth iris sub-block feature maps of size 4*4 after the second max-pooling layer. The fourth feature maps are mapped by the fully connected layer into a 256*1 column vector, and the output layer then maps the column vector obtained by the fully connected layer into a 20-dimensional feature vector.
It should be noted that the parameter settings above are only an example; the parameters of the convolutional neural network model are not limited here.
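A minimal PyTorch sketch of a per-sub-block network with the example layer sizes above (the framework choice and the training procedure are assumptions; the patent does not prescribe them):

```python
import torch
import torch.nn as nn

class IrisBlockCNN(nn.Module):
    """Per-sub-block feature extractor matching the example layer sizes:
    40x40 -> conv 5x5 (8 ch) -> 36x36 -> maxpool 2x2 -> 18x18
          -> conv 3x3 (16 ch) -> 16x16 -> maxpool 4x4 -> 4x4
          -> fc 256 -> output 20 (the 20-D block feature vector)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=1),    # 8 x 36 x 36
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=2, stride=2),       # 8 x 18 x 18
            nn.Conv2d(8, 16, kernel_size=3, stride=1),   # 16 x 16 x 16
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=4, stride=4),       # 16 x 4 x 4
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 256),                  # fully connected layer, 256 neurons
            nn.Linear(256, 20),                          # output layer, 20-D feature vector
        )

    def forward(self, x):                                # x: (batch, 1, 40, 40)
        return self.fc(self.features(x))
```

Applied to the 3*17 sub-blocks of a 360*80 image, such a network yields 3*17 feature vectors of 20 dimensions each.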
Step S105: perform distance weighting on the feature vectors of two corresponding iris sub-block images, map the weighted distance to a similarity, and perform iris recognition according to a recognition threshold and the similarity.
After the feature vector of each iris sub-block is obtained, the distances between the corresponding sub-blocks of the two iris images are calculated. Specifically, the distance between the first sub-block of the first iris image and the first sub-block of the second iris image is calculated, the distance between the second sub-block of the first iris image and the second sub-block of the second iris image is calculated, and so on. Some iris sub-blocks contain more impurities, and too many features need not be extracted from them; other iris sub-blocks carry important identification features, and more of their features need to be extracted. Therefore, after the distances of the corresponding sub-blocks are calculated, they are multiplied by different weights, the weighted distance is defined as the recognition distance, the recognition distance is then mapped to a similarity, and iris recognition is performed.
Optionally, performing distance weighting on the feature vectors of two corresponding iris sub-block images specifically includes:
calculating the M*N distances between the feature vectors of the two corresponding iris sub-block images, dividing the N distances of each row into m parts, and weighting the distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small.
After feature extraction is performed on each iris sub-block with the convolutional neural network, M*N feature vectors are obtained. The two iris images to be identified are denoted C and D, so each iris sub-block can be denoted Ci,j (1 ≤ i ≤ M, 1 ≤ j ≤ N) and Di,j (1 ≤ i ≤ M, 1 ≤ j ≤ N), each representing a 20-dimensional row vector, and the distances of the corresponding iris sub-blocks are calculated respectively. Taking the first iris sub-blocks C1,1 and D1,1 of the two iris images as an example, C1,1 can be expressed as (x11, x12, ..., x1n) and D1,1 as (y11, y12, ..., y1n), so the Euclidean distance between the sub-blocks can be expressed as:
E1,1 = sqrt((x11 - y11)^2 + (x12 - y12)^2 + ... + (x1n - y1n)^2)
Here n denotes the feature length of each iris sub-block, namely 20. After the Euclidean distance of each pair of corresponding iris sub-blocks is calculated, the Euclidean distances of the two iris images to be identified can be expressed as an M*N matrix E. It is worth noting that the distance calculated here may also be the Mahalanobis distance, the standardized Euclidean distance, the Manhattan distance, etc., which is not limited here.
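The per-block distance computation can be sketched as follows, assuming NumPy and the Euclidean distance; the function name is illustrative:

```python
import numpy as np

def block_distances(feats_c, feats_d):
    """Euclidean distance between corresponding sub-block feature vectors.

    feats_c, feats_d : arrays of shape (M, N, L) holding the L-dimensional
    (here L = 20) feature vector of every sub-block of images C and D.
    Returns the (M, N) distance matrix E.
    """
    return np.sqrt(((feats_c - feats_d) ** 2).sum(axis=-1))
```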
The resulting distances form a matrix of M rows and N columns. The N distances of each row are divided into m parts, which are multiplied by m different coefficients and summed to obtain the recognition distance of the two iris images to be identified.
As an example, when the matrix has 3 rows and 17 columns, the values of the first and third rows can each be divided into 3 parts, where the first part is blocks 1 to 4, the second part is blocks 5 to 13, and the third part is blocks 14 to 17. The second row can also be divided into 3 parts: blocks 1 to 2 are the first part, blocks 3 to 15 are the second part, and blocks 16 to 17 are the third part. After the distance matrix is divided into different parts, different weights are set for the different parts. As an example, the weights for the first and third rows can be set as follows: the weights corresponding to the four blocks of the first part can be set to 0, 0, 10, 10; the weights corresponding to the nine blocks of the second part can be set to 15; and the weights corresponding to the four blocks of the third part can be set to 10, 10, 0, 0. The weights for the second row can be set as follows: the weights corresponding to the two blocks of the first part can be set to 0 and 10, the weights corresponding to the thirteen blocks of the second part can be set to 15, and the weights corresponding to the two blocks of the third part can be set to 10 and 0.
After the distance matrix and the weight coefficient corresponding to each distance are determined, the weights are multiplied by the corresponding distances, summed, and then divided by the sum of all the weight coefficients, giving the recognition distance of the two iris images to be identified.
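A sketch of the weighted recognition distance for the 3*17 example above, assuming NumPy; the weight values simply reproduce the example and are not the only possible choice:

```python
import numpy as np

def recognition_distance(E):
    """Weighted recognition distance for a 3x17 block-distance matrix E.

    The weight layout follows the example above: weights near the middle
    columns are large (15), weights near the two ends are small or zero.
    """
    w_outer = [0, 0, 10, 10] + [15] * 9 + [10, 10, 0, 0]      # rows 1 and 3
    w_middle = [0, 10] + [15] * 13 + [10, 0]                  # row 2
    W = np.array([w_outer, w_middle, w_outer], dtype=float)   # shape (3, 17)
    return (W * E).sum() / W.sum()
```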
Based on the fact that a smaller calculated iris recognition distance indicates that the two iris images to be identified have a larger similarity, and a larger calculated iris recognition distance indicates that the two iris images to be identified have a smaller similarity, the similarity is then compared with the recognition threshold to obtain the recognition result.
Optionally, the similarity can be calculated using the following formula:
where Dm denotes the median of the multiple recognition distances, Dmin denotes the minimum of the multiple recognition distances, and d denotes the recognition distance of the iris images to be identified.
The similarity formula here is only an example and is not intended to limit the similarity calculation.
After the similarity of the two irises to be identified is obtained, it is compared with the recognition threshold, where the recognition threshold is an empirical value. When the calculated similarity is greater than or equal to the recognition threshold, the two iris images to be identified are judged to come from the same individual; when the calculated similarity is less than the recognition threshold, the two iris images to be identified are judged to come from different individuals.
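The decision step can be sketched as follows; since the similarity formula itself is not reproduced above, the distance-to-similarity mapping and the threshold value used here are placeholders, not the patent's formula:

```python
def is_same_individual(d, d_min, d_median, threshold=0.8):
    """Decision step: map the recognition distance d to a similarity and
    compare it with the recognition threshold (an empirical value).

    Placeholder mapping: similarity is 1 at d = d_min and decreases as d
    approaches the median recognition distance d_median.
    """
    similarity = max(0.0, (d_median - d) / (d_median - d_min + 1e-12))
    return similarity >= threshold
```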
In the above iris identification method, the image of the region corresponding to the iris is obtained by a detection algorithm, and the annular iris region is then normalized into a rectangular iris image of a preset size, which facilitates feature extraction of the iris in the subsequent process. The iris image is partitioned into a plurality of iris sub-block images, and the partitioning uses partially overlapping sub-blocks, which strengthens feature extraction for the middle region of the rectangular iris image. After the feature vectors of the iris sub-block images are obtained, the distances between the corresponding iris sub-blocks of the two iris images to be identified are calculated and weighted, where the weight distribution assigns small weights near the edge regions and large weights near the middle region, so as to reduce the influence of non-iris regions on iris recognition. The operations of partitioning the iris image, extracting iris features and applying unbalanced weights improve the iris recognition accuracy.
It should be understood that the sequence numbers of the steps do not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Embodiment two
Corresponding to the iris identification method described in embodiment one above, Fig. 9 shows a structural block diagram of the iris identification device of embodiment two of the present invention. For ease of description, only the parts related to this embodiment are shown.
The device includes an image acquisition module 110, an image transformation module 120, a feature extraction module 130 and a pattern matching module 140.
The image acquisition module 110 is configured to obtain an eye image of a user and detect the inner and outer boundaries of the iris through a detection algorithm, the annular part formed by the inner and outer boundaries being the annular iris image.
The image transformation module 120 is configured to normalize the annular iris image into a rectangular iris image of a preset size.
The feature extraction module 130 is configured to divide the rectangular iris image into a plurality of iris sub-block images and perform feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image.
The pattern matching module 140 is configured to perform distance weighting on the feature vectors of two corresponding iris sub-block images, map the weighted distance to a similarity, and obtain a recognition result according to a recognition threshold and the similarity.
Optionally, the feature extraction module 130 includes an image partitioning unit 131 and a CNN transformation unit 132.
The image partitioning unit 131 is configured to divide the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers.
The CNN transformation unit 132 is configured to perform interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
Optionally, the pattern matching module 140 includes a distance weighting unit 141, a similarity calculation unit 142 and a recognition unit 143.
The distance weighting unit 141 is configured to calculate the distances between the feature vectors of corresponding iris sub-blocks of the two images, divide the N distances of each row into m parts, and weight the distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small.
The similarity calculation unit 142 is configured to map the weighted Euclidean distance to a similarity.
The recognition unit 143 is configured to compare the similarity with the recognition threshold to obtain a recognition result.
In the above iris identification device, the image of the region corresponding to the iris is obtained by a detection algorithm, and the annular iris region is then normalized into a rectangular iris image of a preset size, which facilitates feature extraction of the iris in the subsequent process. The iris image is partitioned into a plurality of iris sub-block images, and the partitioning uses partially overlapping sub-blocks, which strengthens feature extraction for the middle region of the rectangular iris image. After the feature vectors of the iris sub-block images are obtained, the distances between the corresponding iris sub-blocks of the two iris images to be identified are calculated and weighted, where the weight distribution assigns small weights near the edge regions and large weights near the middle region, so as to reduce the influence of non-iris regions on iris recognition. The operations of partitioning the iris image, extracting iris features and applying unbalanced weights improve the iris recognition accuracy.
Embodiment three
Figure 10 is a schematic diagram of the iris recognition terminal device provided by one embodiment of the present invention. As shown in Figure 10, the iris recognition terminal device 10 of this embodiment includes a processor 100, a memory 101 and a computer program 102 stored in the memory 101 and executable on the processor 100, such as an iris recognition program. The processor 100, when executing the computer program 102, implements the steps in the above iris identification method embodiments, such as steps S101 to S105 shown in Fig. 1; alternatively, the processor 100, when executing the computer program 102, implements the functions of the modules/units in the above device embodiments, such as the functions of modules 110 to 140 shown in Fig. 9.
As an example, the computer program 102 may be divided into one or more modules/units, which are stored in the memory 101 and executed by the processor 100 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 102 in the iris recognition terminal device 10. For example, the computer program 102 may be divided into an image acquisition module, an image transformation module, a feature extraction module and a pattern matching module, whose specific functions are as follows:
the image acquisition module is configured to obtain an eye image of a user and detect the inner and outer boundaries of the iris through a detection algorithm, the annular part formed by the inner and outer boundaries being the annular iris image;
the image transformation module is configured to normalize the annular iris image into a rectangular iris image of a preset size;
the feature extraction module is configured to divide the rectangular iris image into a plurality of iris sub-block images and perform feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image;
the pattern matching module is configured to perform distance weighting on the feature vectors of two corresponding iris sub-block images, map the weighted distance to a similarity, and obtain a recognition result according to a recognition threshold and the similarity.
The feature extraction module may be divided into an image partitioning unit and a CNN transformation unit, whose specific functions are as follows:
the image partitioning unit is configured to divide the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers;
the CNN transformation unit is configured to perform interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
The pattern matching module may be divided into a distance weighting unit, a similarity calculation unit and a recognition unit, whose specific functions are as follows:
the distance weighting unit is configured to calculate the Euclidean distances between the feature vectors of corresponding iris sub-blocks of the two images, divide the N Euclidean distances of each row into m parts, and weight the Euclidean distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small;
the similarity calculation unit is configured to map the weighted Euclidean distance to a similarity;
the recognition unit is configured to compare the similarity with the recognition threshold to obtain a recognition result.
The iris recognition terminal device 10 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The iris recognition terminal device may include, but is not limited to, the processor 100 and the memory 101. Those skilled in the art will understand that Figure 10 is only an example of the iris recognition terminal device 10 and does not constitute a limitation on the iris recognition terminal device 10; it may include more or fewer components than shown, combine certain components, or have different components. For example, the iris recognition terminal device may also include input and output devices, network access devices, buses, etc.
The processor 100 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 101 may be an internal storage unit of the iris recognition terminal device 10, such as a hard disk or memory of the iris recognition terminal device 10. The memory 101 may also be an external storage device of the iris recognition terminal device 10, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the iris recognition terminal device 10. Further, the memory 101 may include both an internal storage unit and an external storage device of the iris recognition terminal device 10. The memory 101 is used to store the computer program and other programs and data required by the iris recognition terminal device. The memory 101 may also be used to temporarily store data that has been output or is to be output.
It will be clear to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated as an example. In practical applications, the above functions may be assigned to different functional units and modules as needed; that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional units and modules are only for the convenience of distinguishing them from each other and are not used to limit the scope of protection of this application. For the specific working processes of the units and modules in the above system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the above embodiments, the description of each embodiment has its own emphasis. For parts not detailed or described in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed device/terminal device and method may be implemented in other ways. For example, the device/terminal device embodiments described above are only schematic; the division of the modules or units is only a logical functional division, and there may be other division methods in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention implements all or part of the processes in the methods of the above embodiments, which may also be completed by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and the computer program, when executed by a processor, can implement the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, etc. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, a computer-readable medium does not include electrical carrier signals and telecommunication signals.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they can still modify the technical solutions described in the foregoing embodiments or make equivalent substitutions for some of the technical features; and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and should all be included within the scope of protection of the present invention.

Claims (10)

1. An iris identification method, characterized in that the method includes:
obtaining an eye image of a user, the eye image including an iris;
detecting the inner and outer boundaries of the iris using a fast detection algorithm, the annular part formed by the inner and outer boundaries being the region corresponding to the iris;
normalizing the annular iris image into a rectangular iris image of a preset size;
partitioning the rectangular iris image into a plurality of iris sub-block images, and performing feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image;
performing distance weighting on the feature vectors of two corresponding iris sub-block images, mapping the weighted distance to a similarity, and performing iris recognition according to a recognition threshold and the similarity.
2. The iris identification method according to claim 1, characterized in that normalizing the annular iris image into a rectangular iris image of a preset size specifically includes:
calculating the average of the center coordinates of the inner and outer circles of the iris and taking this average as the pupil center coordinates (x0, y0); unrolling the annular iris image around the pupil center, according to the ring parametric equation, into a rectangle with a length of 360 pixels and a height equal to the difference r between the inner-circle and outer-circle radii; and then normalizing the unrolled rectangular image into a rectangular iris image of a preset size, where the ring parametric equation is: x = x0 + ρ·cos θ, y = y0 + ρ·sin θ, with ρ ranging from the inner-circle radius to the outer-circle radius and θ from 0° to 360°.
3. The iris identification method according to claim 1, characterized in that the partitioning of the rectangular iris image to obtain a plurality of iris sub-block images specifically includes:
dividing the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers.
4. The iris identification method according to claim 3, characterized in that performing feature extraction on each iris sub-block image specifically includes:
performing interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
5. The iris identification method according to claim 3, characterized in that performing distance weighting on the feature vectors of two corresponding iris sub-block images specifically includes:
calculating the M*N distances between the feature vectors of the two corresponding iris sub-block images, dividing the N distances of each row into m parts, and weighting the distances of the m parts with m different coefficients; among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small.
6. An iris identification device, characterized by comprising an image acquisition module, an image transformation module, a feature extraction module and a pattern matching module;
the image acquisition module is configured to obtain an eye image of a user and detect the inner and outer boundaries of the iris through a detection algorithm, the annular part formed by the inner and outer boundaries being the annular iris image;
the image transformation module is configured to normalize the annular iris image into a rectangular iris image of a preset size;
the feature extraction module is configured to divide the rectangular iris image into a plurality of iris sub-block images and perform feature extraction on each iris sub-block image to obtain the feature vector of each iris sub-block image;
the pattern matching module is configured to perform distance weighting on the feature vectors of two corresponding iris sub-block images, map the weighted distance to a similarity, and obtain a recognition result according to a recognition threshold and the similarity.
7. The iris identification device according to claim 6, characterized in that the feature extraction module includes:
an image partitioning unit, configured to divide the rectangular iris image into M*N iris sub-block images, where the adjacent sides of two adjacent iris sub-block images are equal in size, two adjacent iris sub-block images partially overlap, and M and N are positive integers;
a CNN transformation unit, configured to perform interleaved convolution and pooling operations on each iris sub-block image using a convolutional neural network to obtain M*N L-dimensional feature vectors, where L is a positive integer.
8. The iris identification device according to claim 6, characterized in that the pattern matching module includes:
a distance weighting unit, configured to calculate the distances between the feature vectors of corresponding iris sub-blocks of the two images, divide the N distances of each row into m parts, and weight the distances of the m parts with m different coefficients, where among the m coefficients, the coefficient values near the middle region are large and the coefficient values near the two end regions are small;
a similarity calculation unit, configured to map the weighted distance to a similarity;
a recognition unit, configured to compare the similarity with a recognition threshold to obtain a recognition result.
9. An iris recognition terminal device, including a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
CN201710846244.5A 2017-09-19 2017-09-19 Iris identification method, device and terminal device Pending CN107506754A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710846244.5A CN107506754A (en) 2017-09-19 2017-09-19 Iris identification method, device and terminal device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710846244.5A CN107506754A (en) 2017-09-19 2017-09-19 Iris identification method, device and terminal device

Publications (1)

Publication Number Publication Date
CN107506754A true CN107506754A (en) 2017-12-22

Family

ID=60697877

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710846244.5A Pending CN107506754A (en) 2017-09-19 2017-09-19 Iris identification method, device and terminal device

Country Status (1)

Country Link
CN (1) CN107506754A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108416772A (en) * 2018-03-07 2018-08-17 汕头大学 A kind of strabismus detection method based on concatenated convolutional neural network
CN110059520A (en) * 2018-01-18 2019-07-26 北京京东金融科技控股有限公司 The method, apparatus and iris authentication system that iris feature extracts
CN110647796A (en) * 2019-08-02 2020-01-03 中山市奥珀金属制品有限公司 Iris identification method and device
CN110674669A (en) * 2019-03-12 2020-01-10 浙江大学 Method for identifying specific circle under complex background
CN110688951A (en) * 2019-09-26 2020-01-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110866507A (en) * 2019-11-20 2020-03-06 北京工业大学 Method for protecting mobile phone chatting content based on iris recognition
CN111242230A (en) * 2020-01-17 2020-06-05 腾讯科技(深圳)有限公司 Image processing method and image classification model training method based on artificial intelligence
CN111507996A (en) * 2020-03-24 2020-08-07 北京万里红科技股份有限公司 Iris image evaluation method and device, and iris recognition method and device
CN112907593A (en) * 2021-04-17 2021-06-04 湖南健坤激光科技有限公司 Method and device for identifying colloid fault position of mobile phone lens and related equipment
CN112949518A (en) * 2021-03-09 2021-06-11 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN113592959A (en) * 2021-08-17 2021-11-02 北京博视智动技术有限公司 Visual processing-based diaphragm laminating method and system
CN113609973A (en) * 2021-08-04 2021-11-05 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113673460A (en) * 2021-08-26 2021-11-19 青岛熙正数字科技有限公司 Method and device for iris recognition, terminal equipment and storage medium
CN113688874A (en) * 2021-07-29 2021-11-23 天津中科智能识别产业技术研究院有限公司 Method and system for automatically segmenting iris region in human eye iris image
CN116343320A (en) * 2023-03-31 2023-06-27 西南大学 Iris recognition method based on phase change and diffusion neural network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584915A (en) * 2004-06-15 2005-02-23 沈阳工业大学 Human iris identifying method
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Method for recognizing iris with matched characteristic and graph based on partial bianry mode
CN101447025A (en) * 2008-12-30 2009-06-03 东南大学 Method for identifying iris of large animals
CN101894256A (en) * 2010-07-02 2010-11-24 西安理工大学 Iris identification method based on odd-symmetric 2D Log-Gabor filter
CN102902980A (en) * 2012-09-13 2013-01-30 中国科学院自动化研究所 Linear programming model based method for analyzing and identifying biological characteristic images
CN105550661A (en) * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Adaboost algorithm-based iris feature extraction method
CN106709431A (en) * 2016-12-02 2017-05-24 厦门中控生物识别信息技术有限公司 Iris recognition method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1584915A (en) * 2004-06-15 2005-02-23 沈阳工业大学 Human iris identifying method
CN101154265A (en) * 2006-09-29 2008-04-02 中国科学院自动化研究所 Method for recognizing iris with matched characteristic and graph based on partial bianry mode
CN101447025A (en) * 2008-12-30 2009-06-03 东南大学 Method for identifying iris of large animals
CN101894256A (en) * 2010-07-02 2010-11-24 西安理工大学 Iris identification method based on odd-symmetric 2D Log-Gabor filter
CN102902980A (en) * 2012-09-13 2013-01-30 中国科学院自动化研究所 Linear programming model based method for analyzing and identifying biological characteristic images
CN105550661A (en) * 2015-12-29 2016-05-04 北京无线电计量测试研究所 Adaboost algorithm-based iris feature extraction method
CN106709431A (en) * 2016-12-02 2017-05-24 厦门中控生物识别信息技术有限公司 Iris recognition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李菊艳: "基于DSP的虹膜识别系统的研究", 《中国优秀硕士学位论文全文数据库 信息科技辑》 *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110059520A (en) * 2018-01-18 2019-07-26 北京京东金融科技控股有限公司 The method, apparatus and iris authentication system that iris feature extracts
CN110059520B (en) * 2018-01-18 2024-04-09 京东科技控股股份有限公司 Iris feature extraction method, iris feature extraction device and iris recognition system
CN108416772A (en) * 2018-03-07 2018-08-17 汕头大学 A kind of strabismus detection method based on concatenated convolutional neural network
CN110674669B (en) * 2019-03-12 2021-09-03 浙江大学 Method for identifying specific circle under complex background
CN110674669A (en) * 2019-03-12 2020-01-10 浙江大学 Method for identifying specific circle under complex background
CN110647796A (en) * 2019-08-02 2020-01-03 中山市奥珀金属制品有限公司 Iris identification method and device
CN110688951A (en) * 2019-09-26 2020-01-14 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN110688951B (en) * 2019-09-26 2022-05-31 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
US11532180B2 (en) 2019-09-26 2022-12-20 Shanghai Sensetime Intelligent Technology Co., Ltd. Image processing method and device and storage medium
WO2021056808A1 (en) * 2019-09-26 2021-04-01 上海商汤智能科技有限公司 Image processing method and apparatus, electronic device, and storage medium
CN110866507A (en) * 2019-11-20 2020-03-06 北京工业大学 Method for protecting mobile phone chatting content based on iris recognition
CN111242230A (en) * 2020-01-17 2020-06-05 腾讯科技(深圳)有限公司 Image processing method and image classification model training method based on artificial intelligence
CN111507996B (en) * 2020-03-24 2023-12-01 北京万里红科技有限公司 Iris image evaluation method and device and iris recognition method and device
CN111507996A (en) * 2020-03-24 2020-08-07 北京万里红科技股份有限公司 Iris image evaluation method and device, and iris recognition method and device
CN112949518A (en) * 2021-03-09 2021-06-11 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN112949518B (en) * 2021-03-09 2024-04-05 上海聚虹光电科技有限公司 Iris image processing method, device, equipment and storage medium
CN112907593B (en) * 2021-04-17 2023-09-22 湖南健坤激光科技有限公司 Method and device for identifying colloid fault position of mobile phone lens and related equipment
CN112907593A (en) * 2021-04-17 2021-06-04 湖南健坤激光科技有限公司 Method and device for identifying colloid fault position of mobile phone lens and related equipment
CN113688874A (en) * 2021-07-29 2021-11-23 天津中科智能识别产业技术研究院有限公司 Method and system for automatically segmenting iris region in human eye iris image
CN113688874B (en) * 2021-07-29 2024-05-31 天津中科智能识别产业技术研究院有限公司 Automatic iris region segmentation method and system in human eye iris image
CN113609973A (en) * 2021-08-04 2021-11-05 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113609973B (en) * 2021-08-04 2024-02-20 河南华辰智控技术有限公司 Social security platform wind control management system based on biological recognition technology
CN113592959A (en) * 2021-08-17 2021-11-02 北京博视智动技术有限公司 Visual processing-based diaphragm laminating method and system
CN113592959B (en) * 2021-08-17 2023-11-28 北京博视智动技术有限公司 Visual processing-based membrane lamination method and system
CN113673460A (en) * 2021-08-26 2021-11-19 青岛熙正数字科技有限公司 Method and device for iris recognition, terminal equipment and storage medium
CN116343320A (en) * 2023-03-31 2023-06-27 西南大学 Iris recognition method based on phase change and diffusion neural network
CN116343320B (en) * 2023-03-31 2024-06-07 西南大学 Iris recognition method

Similar Documents

Publication Publication Date Title
CN107506754A (en) Iris identification method, device and terminal device
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN104866868B (en) Metal coins recognition methods based on deep neural network and device
CN104484658A (en) Face gender recognition method and device based on multi-channel convolution neural network
CN105354866A (en) Polygon contour similarity detection method
CN108009554A (en) A kind of image processing method and device
CN106919944A (en) A kind of wide-angle image method for quickly identifying based on ORB algorithms
CN110135438B (en) Improved SURF algorithm based on gradient amplitude precomputation
JP5289412B2 (en) Local feature amount calculation apparatus and method, and corresponding point search apparatus and method
CN107066961B (en) Fingerprint method for registering and device
CN107729820A (en) A kind of finger vein identification method based on multiple dimensioned HOG
CN106991380A (en) A kind of preprocess method based on vena metacarpea image
CN106709431A (en) Iris recognition method and device
CN104298995A (en) Three-dimensional face identification device and method based on three-dimensional point cloud
CN112132812B (en) Certificate verification method and device, electronic equipment and medium
CN104919491A (en) Improvements in or relating to image processing
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
Marco-Detchart et al. Neuro-inspired edge feature fusion using Choquet integrals
WO2017070923A1 (en) Human face recognition method and apparatus
CN107704797A (en) Real-time detection method and system and equipment based on pedestrian in security protection video and vehicle
Bondre et al. Review on leaf diseases detection using deep learning
CN116051957A (en) Personal protection item detection network based on attention mechanism and multi-scale fusion
CN104268550B (en) Feature extracting method and device
Kausar et al. Multi-scale deep neural network for mitosis detection in histological images
CN108256520A (en) A kind of method, terminal device and computer readable storage medium for identifying the coin time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1301, No. 132, Fengqi Road, Phase III, Software Park, Xiamen City, Fujian Province

Applicant after: Xiamen Entropy Technology Co., Ltd

Address before: 361000, Xiamen three software park, Fujian Province, 8 North Street, room 2001

Applicant before: XIAMEN ZKTECO BIOMETRIC IDENTIFICATION TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20171222
