CN110738174B - Finger vein recognition method, device, equipment and storage medium - Google Patents

Finger vein recognition method, device, equipment and storage medium

Info

Publication number
CN110738174B
CN110738174B CN201910986288.7A
Authority
CN
China
Prior art keywords
image
finger
finger vein
input
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910986288.7A
Other languages
Chinese (zh)
Other versions
CN110738174A (en)
Inventor
秦传波
王璠
曾军英
朱伯远
朱京明
翟懿奎
甘俊英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuyi University
Original Assignee
Wuyi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuyi University
Priority to CN201910986288.7A
Publication of CN110738174A
Application granted
Publication of CN110738174B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/12 Fingerprints or palmprints
    • G06V 40/1365 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/14 Vascular patterns
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a finger vein recognition method, device, equipment and storage medium, the method comprising the following steps: acquiring a finger image; detecting a region of interest of the finger image; acquiring a difference image of an input image and a registered image; inputting the difference image into a convolutional neural network for training; and performing finger vein recognition based on the convolutional neural network output. The invention is robust to various database types and environmental changes (including misalignment and shadows), and greatly improves finger vein recognition accuracy.

Description

Finger vein recognition method, device, equipment and storage medium
Technical Field
The present invention relates to the technical field of neural network algorithms, and in particular, to a finger vein recognition method, device, apparatus, and storage medium.
Background
As demands on identity recognition continue to grow, finger vein recognition has emerged as a biometric technology that has attracted considerable attention in the field of biometric authentication. Compared with other biometric modalities (such as face, gait, and fingerprint), finger vein technology has several distinct advantages: higher user friendliness, liveness detection, high security, and small device size, which make it well suited to high-security and user-friendly applications.
In finger vein recognition systems, two factors generally degrade recognition performance: misalignment caused by finger translation and rotation when the finger vein image is captured, and shadows on the finger vein image. The first factor concerns inconsistencies between the vein pattern in the registered image and that in the identification image, caused by movement and rotation of the finger on the capture device during an identification attempt. The second factor concerns changes in image quality due to shadows in the input image, which arise from the pressure of the finger touching the capture device, because the finger is typically illuminated from above or from the side by near-infrared (NIR) light-emitting diodes (LEDs).
To address these problems, conventional finger vein recognition algorithms perform recognition based on vein lines extracted from the input or enhanced image, or on texture features extracted from the finger vein image. In these cases, however, inaccurate detection of the vein lines reduces recognition accuracy.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a finger vein recognition method, device, equipment and storage medium that are robust to various database types and environmental changes (including misalignment and shadows) and greatly improve finger vein recognition accuracy.
The finger vein recognition method according to the embodiment of the first aspect of the present invention comprises the following steps:
acquiring a finger image;
detecting a region of interest of the finger image;
acquiring a difference image of an input image and a registered image;
inputting the differential image into a convolutional neural network for training;
and performing finger vein recognition based on the convolutional neural network output.
The finger vein recognition method provided by the embodiment of the invention has at least the following beneficial effects: it is robust to various database types and environmental changes (including misalignment and shadows), and greatly improves finger vein recognition accuracy.
According to some embodiments of the invention, the acquiring a differential image of the input image and the registered image includes:
and obtaining a differential image from the two images for real matching or false matching, and taking the differential image as an input of the convolutional neural network.
According to some embodiments of the invention, the finger vein recognition based on the convolutional neural network output comprises:
and giving a matching result of real matching or false matching based on the final FCL result of the convolutional neural network.
An embodiment of a finger vein recognition apparatus according to a second aspect of the present invention includes:
a finger image acquisition unit configured to acquire a finger image;
a region-of-interest detection unit configured to detect a region of interest of the finger image;
a differential image acquisition unit configured to acquire a differential image of an input image and a registration image;
the training unit is used for inputting the differential image into a convolutional neural network for training;
and the identification unit is used for carrying out finger vein identification based on the convolutional neural network output.
According to some embodiments of the invention, the differential image acquiring unit is further configured to acquire one differential image from two images for true matching or false matching, and use the differential image as an input of the convolutional neural network;
the identification unit is further used for giving a matching result of real matching or false matching based on a final FCL result of the convolutional neural network.
An embodiment of a finger vein recognition device according to the third aspect of the present invention comprises at least one control processor and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the finger vein recognition method described above in the first aspect.
A computer-readable storage medium according to an embodiment of a fourth aspect of the present invention stores computer-executable instructions for causing a computer to perform the finger vein recognition method as described above in the first aspect.
Additional aspects and advantages of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the invention will become apparent and may be better understood from the following description of embodiments taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of a finger vein recognition method according to an embodiment of the present invention;
FIG. 2 is a structural configuration diagram of a convolutional neural network used in the finger vein recognition method according to an embodiment of the present invention;
FIG. 3 is a block diagram of a convolutional neural network used in the finger vein recognition method according to an embodiment of the present invention;
FIG. 4 is a schematic view of a finger vein recognition device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a finger vein recognition apparatus according to an embodiment of the present invention.
Reference numerals:
a finger vein recognition apparatus 100, a finger image acquisition unit 110, a region of interest detection unit 120, a differential image acquisition unit 130, a training unit 140, a recognition unit 150;
a finger vein recognition device 200, a control processor 201, a memory 202.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
In the description of the present invention, it should be understood that references to orientation descriptions such as upper, lower, front, rear, left, right, etc. are based on the orientation or positional relationship shown in the drawings, are merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus should not be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality of" means two or more; greater than, less than, exceeding, etc. are understood to exclude the stated number, while above, below, within, etc. are understood to include it. The terms first and second are used only to distinguish technical features and should not be construed as indicating or implying relative importance, the number of the indicated technical features, or their precedence.
In the description of the present invention, unless explicitly defined otherwise, terms such as arrangement, installation, connection, etc. should be construed broadly and the specific meaning of the terms in the present invention can be reasonably determined by a person skilled in the art in combination with the specific contents of the technical scheme.
Referring to fig. 1, a finger vein recognition method according to an embodiment of the first aspect of the present invention includes the steps of:
s1: acquiring a finger image;
s2: detecting a region of interest of the finger image;
s3: acquiring a difference image of an input image and a registered image;
s4: inputting the differential image into a convolutional neural network for training;
s5: and performing finger vein recognition based on the convolutional neural network output.
The finger vein recognition method provided by the embodiment of the invention has at least the following beneficial effects: it is robust to various database types and environmental changes (including misalignment and shadows), and greatly improves finger vein recognition accuracy.
According to some embodiments of the invention, the acquiring a differential image of the input image and the registered image includes:
and obtaining a differential image from the two images for real matching or false matching, and taking the differential image as an input of the convolutional neural network. In order to reduce the complexity of the conventional CNN structure for receiving two images as input, a method for obtaining a differential image from the two images for true matching or false matching is proposed, and the image is used as the input of the CNN.
According to some embodiments of the invention, the finger vein recognition based on the convolutional neural network output comprises:
and giving a matching result of real matching or false matching based on the final FCL result of the convolutional neural network. The individual distance matching method based on the characteristics is not adopted any more, but the final FCL result based on the CNN gives a matching result of real matching or false matching.
For embodiments of the first aspect of the invention, the detailed procedure is as follows:
referring to fig. 1, upper and lower boundaries of a finger are detected from an image obtained by a finger vein capture device using two 4×20 pixel masks, and a finger region of interest (ROI) is detected based thereon. Without filtering or quality enhancement, the detected finger ROI is adjusted to an image of 224×224 pixels, and then a differential image between the input and registered finger ROI images is obtained. The differential image is used as an input to a pre-training learning Convolutional Neural Network (CNN), and the input finger vein image is identified based on the CNN output.
The finger image acquisition device of the present invention consists of six 850 nm near-infrared (NIR) light-emitting diodes (LEDs) and a webcam (Logitech Webcam C600). The NIR-cut filter is removed from the webcam and an NIR-pass filter is added.
Referring to fig. 2 and 3, fine-tuning of VGG Net-16 with the differential image as input is adopted as the finger vein recognition method on the basis of recognition accuracy. VGG Net-16 consists of 13 convolutional layers, 5 pooling layers, and 3 fully connected layers. In the 1st convolutional layer, 64 filters of size 3×3 are used. Thus, in the 1st convolutional layer, the feature map has size 224×224×64, where 224 and 224 are the height and width of the feature map, respectively. These are calculated from the formula: output height (or width) = (input height (or width) − filter height (or width) + 2 × padding) / stride + 1. The rectified linear unit (ReLU) layer can be expressed as follows:
y=max(0,x) (1)
where x and y are the input and output values of the ReLU function, respectively. The ReLU function is typically faster to compute than saturating nonlinear activation functions, and it reduces the vanishing-gradient problem that may occur when a hyperbolic tangent or sigmoid function is used in back-propagation training. The feature map obtained by the ReLU layer (relu1_1) passes through the second convolutional layer and another ReLU layer (relu1_2) before passing through the max pooling layer (Pool1). Like the 1st convolutional layer, the 2nd convolutional layer uses a 3×3 filter size, padding of 1, and a stride of 1, and maintains the 224×224×64 feature map size. All 13 convolutional layers maintain the feature map size by using a 3×3 filter size and padding of 1; only the number of filters changes, to 64, 128, 256, and 512. In addition, a ReLU layer follows each convolutional layer, and the feature map size is maintained after passing through the convolutional layers.
In the max pooling layer, the maximum value within the defined filter window is used. Pool1 in fig. 2 is the result of max pooling after the 2nd convolutional layer and relu1_2. When the max pooling layer (Pool1) is executed, the input feature map size is 224×224×64, the filter size is 2×2, and the stride is 2×2. Here, a stride of 2×2 means 2×2 max pooling in which the filter is shifted by two pixels in both the horizontal and vertical directions. Since the filter positions do not overlap, the feature map size is reduced to 1/4 (1/2 horizontally, 1/2 vertically). Therefore, the feature map size after Pool1 becomes 112×112×64 pixels. This pooling layer is used after relu1_2, relu2_2, relu3_3, relu4_3, and relu5_3. In all cases a 2×2 filter and a 2×2 stride are used, so the feature map size is reduced to 1/4 (1/2 horizontally, 1/2 vertically) each time.
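As a quick arithmetic check of the feature map sizes quoted above, the output-size formula can be evaluated directly; this small helper only illustrates the formula and is not code from the patent.

```python
def out_size(in_size, filt, pad, stride):
    # output = (input - filter + 2 * padding) / stride + 1, per the formula above
    return (in_size - filt + 2 * pad) // stride + 1

# A 3x3 convolution with padding 1 and stride 1 preserves the spatial size:
print(out_size(224, filt=3, pad=1, stride=1))   # -> 224

# A 2x2 max pooling with stride 2 halves each spatial dimension:
print(out_size(224, filt=2, pad=0, stride=2))   # -> 112

# After the five pooling stages, the 224x224 input is reduced to 7x7:
size = 224
for _ in range(5):
    size = out_size(size, filt=2, pad=0, stride=2)
print(size)                                     # -> 7
```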
After the 224×224×3 pixel image passes through the 13 convolutional layers, 13 ReLU layers, and 5 pooling layers, a 7×7×512 feature map is finally obtained. This feature map then passes through three fully connected layers (FCLs). The numbers of output nodes of the first, second, and third FCLs are 4096, 4096, and 2, respectively. In the present invention, a verification structure is designed to determine whether the finger vein image input to the CNN shows the same vein as the registered image (accepted as a true match) or a different vein (rejected as a false match). Accordingly, the third FCL consists of two output nodes. For the third FCL, a Softmax function is used, which can be expressed as:
σ(p)_i = exp(p_i) / Σ_j exp(p_j) (2)
as shown in equation (2), assuming that the output neuron array is set to p, the probability of the neuron corresponding to the i-th class can be calculated by dividing the value of the i-th element by the sum of the values of all elements.
In general, CNNs suffer from overfitting: although accuracy on the training data may be high, recognition accuracy on the test data can be low. To mitigate this problem, the invention adopts data augmentation and dropout. For dropout, a 50% dropout probability is used to randomly disconnect units between adjacent layers in the FCLs. The dropout layer is used twice, i.e., after the first FCL and Relu6, and after the second FCL and Relu7.
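Putting the classifier head together — three FCLs with 4096, 4096 and 2 output nodes, ReLU after the first two, and 50% dropout after the first FCL/Relu6 and after the second FCL/Relu7 — a fine-tuning setup could look roughly like the following PyTorch sketch. This is an assumed reconstruction (the framework, optimizer and hyperparameters are not specified in the patent), not the authors' training code.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained VGG-16 and replace the classifier head for
# two-class verification (true match vs. false match) on differential images.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096), nn.ReLU(inplace=True), nn.Dropout(p=0.5),  # FC6 + Relu6 + dropout
    nn.Linear(4096, 4096),        nn.ReLU(inplace=True), nn.Dropout(p=0.5),  # FC7 + Relu7 + dropout
    nn.Linear(4096, 2),                                                      # FC8: true / false match
)

criterion = nn.CrossEntropyLoss()  # applies softmax internally during training
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # assumed hyperparameters

def train_step(diff_batch, labels):
    """One fine-tuning step on a batch of differential images (N, 3, 224, 224)
    with labels in {0, 1} (false match / true match)."""
    optimizer.zero_grad()
    loss = criterion(model(diff_batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```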
Referring to fig. 4, a finger vein recognition apparatus 100 according to an embodiment of a second aspect of the present invention includes:
a finger image acquisition unit 110 for acquiring a finger image;
a region of interest detection unit 120 for detecting a region of interest of the finger image;
a differential image acquisition unit 130 for acquiring a differential image of the input image and the registered image;
a training unit 140, configured to input the differential image into a convolutional neural network for training;
and the identification unit 150 is used for carrying out finger vein identification based on the convolutional neural network output.
According to some embodiments of the present invention, the differential image obtaining unit 130 is further configured to obtain one differential image from two images for true matching or false matching, and use the differential image as an input of the convolutional neural network;
the identifying unit 150 is further configured to give a matching result of the real match or the false match based on a final FCL result of the convolutional neural network.
Referring to fig. 5, the finger vein recognition apparatus 200 according to the embodiment of the third aspect of the present invention may be any type of smart terminal, such as a mobile phone, a tablet computer, a personal computer, etc.
Specifically, the finger vein recognition apparatus 200 includes: one or more control processors 201 and a memory 202, one control processor 201 being exemplified in fig. 5.
The control processor 201 and the memory 202 may be connected by a bus or otherwise, for example in fig. 5.
The memory 202, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer-executable programs, and modules, such as program instructions/modules corresponding to the finger vein recognition method in embodiments of the present invention, e.g., the units 110-150 shown in fig. 4. The control processor 201 executes various functional applications and data processing of the finger vein recognition apparatus 100 by running non-transitory software programs, instructions, and modules stored in the memory 202, that is, implements the finger vein recognition method of the above-described method embodiment.
The memory 202 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application program required for functionality, and the data storage area may store data created according to the use of the finger vein recognition device 100, and the like. In addition, the memory 202 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, the memory 202 may optionally include memory located remotely from the control processor 201, which may be connected to the finger vein recognition device 200 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The one or more modules are stored in the memory 202, which when executed by the one or more control processors 201, perform the finger vein recognition method in the above-described method embodiments, e.g., perform the method steps S1-S5 in fig. 1 described above, implementing the functions of the units 110-150 in fig. 4.
A computer readable storage medium according to an embodiment of the fourth aspect of the present invention stores computer executable instructions that are executed by one or more control processors, for example, by one of the control processors 201 in fig. 5, which may cause the one or more control processors 201 to perform the finger vein recognition method in the above method embodiment, for example, to perform the above-described method steps S1 to S5 in fig. 1, implementing the functions of the units 110 to 150 in fig. 4.
The above described embodiments of the apparatus are only illustrative, wherein the units described as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented in software plus a general purpose hardware platform. Those skilled in the art will appreciate that all or part of the processes implementing the methods of the above embodiments may be implemented by a computer program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read Only Memory (ROM), a random access Memory (Random Access Memory, RAM), or the like.
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of one of ordinary skill in the art without departing from the spirit of the present invention.

Claims (7)

1. A finger vein recognition method, comprising the steps of:
acquiring a finger image;
detecting a region of interest of the finger image;
acquiring a difference image of an input image and a registered image;
inputting the differential image into a convolutional neural network for training;
finger vein recognition is performed based on the convolutional neural network output;
wherein an upper boundary and a lower boundary of the finger are detected from an image obtained from a finger vein capture device using two 4×20 pixel masks, and a finger region of interest (ROI) is detected on that basis; without filtering or quality enhancement, the detected finger ROI is resized to a 224×224 pixel image, and a differential image between the input and registered finger ROI images is then obtained, which is used as the input to a pretrained convolutional neural network (CNN), and the input finger vein image is identified based on the CNN output;
the VGG Net-16 fine adjustment with differential image as input is adopted as a finger vein recognition method based on recognition precision, wherein the VGG Net-16 consists of 13 convolution layers, 5 pooling layers and 3 full connection layers, in the 1 st convolution layer, 64 filters with the size of 3 multiplied by 3 are used, the size of a feature map is 224 multiplied by 64, and 224 are the height and width of the feature map respectively based on the formula: output height (or width) = (input height (or width) -filter height (or width) +2×number of fills)/step size +1; the rectifying linear unit (Relu) layer may be expressed as follows:
y=max(0,x)
where x and y are the input and output values of the ReLU function, respectively, which is generally faster to compute than saturating nonlinear activation functions and can reduce the vanishing-gradient problem that may occur when a hyperbolic tangent or sigmoid function is used in back-propagation training; the feature map obtained through the ReLU layer (relu1_1) passes through the second convolutional layer and another ReLU layer (relu1_2) before passing through the max pooling layer (Pool1); the 2nd convolutional layer uses a 3×3 filter size, padding of 1 and a stride of 1, and maintains the 224×224×64 feature map size; the 13 convolutional layers maintain the same feature map size by using a 3×3 filter size and padding of 1, and only the number of filters is changed to 64, 128, 256, and 512; in addition, a ReLU layer follows each convolutional layer, and the feature map size is maintained after passing through the convolutional layers.
2. The finger vein recognition method according to claim 1, wherein the acquiring a differential image of the input image and the registration image includes:
and obtaining a differential image from the two images for real matching or false matching, and taking the differential image as an input of the convolutional neural network.
3. The finger vein recognition method according to claim 2, wherein said performing finger vein recognition based on said convolutional neural network output comprises:
and giving a matching result of real matching or false matching based on the final FCL result of the convolutional neural network.
4. A finger vein recognition device, comprising:
a finger image acquisition unit configured to acquire a finger image;
a region-of-interest detection unit configured to detect a region of interest of the finger image;
a differential image acquisition unit configured to acquire a differential image of an input image and a registration image;
the training unit is used for inputting the differential image into a convolutional neural network for training;
the identification unit is used for carrying out finger vein identification based on the convolutional neural network output;
wherein an upper boundary and a lower boundary of the finger are detected from an image obtained from a finger vein capture device using two 4×20 pixel masks, and a finger region of interest (ROI) is detected on that basis; without filtering or quality enhancement, the detected finger ROI is resized to a 224×224 pixel image, and a differential image between the input and registered finger ROI images is then obtained, which is used as the input to a pretrained convolutional neural network (CNN), and the input finger vein image is identified based on the CNN output;
the VGG Net-16 fine adjustment with differential image as input is adopted as a finger vein recognition method based on recognition precision, wherein the VGG Net-16 consists of 13 convolution layers, 5 pooling layers and 3 full connection layers, in the 1 st convolution layer, 64 filters with the size of 3 multiplied by 3 are used, the size of a feature map is 224 multiplied by 64, and 224 are the height and width of the feature map respectively based on the formula: output height (or width) = (input height (or width) -filter height (or width) +2×number of fills)/step size +1; the rectifying linear unit (Relu) layer may be expressed as follows:
y=max(0,x)
where x and y are the input and output values of the ReLU function, respectively, which is generally faster to compute than saturating nonlinear activation functions and can reduce the vanishing-gradient problem that may occur when a hyperbolic tangent or sigmoid function is used in back-propagation training; the feature map obtained through the ReLU layer (relu1_1) passes through the second convolutional layer and another ReLU layer (relu1_2) before passing through the max pooling layer (Pool1); the 2nd convolutional layer uses a 3×3 filter size, padding of 1 and a stride of 1, and maintains the 224×224×64 feature map size; the 13 convolutional layers maintain the same feature map size by using a 3×3 filter size and padding of 1, and only the number of filters is changed to 64, 128, 256, and 512; in addition, a ReLU layer follows each convolutional layer, and the feature map size is maintained after passing through the convolutional layers.
5. The finger vein recognition device according to claim 4, wherein:
the differential image acquisition unit is also used for acquiring a differential image from the two images for real matching or false matching, and taking the differential image as the input of the convolutional neural network;
the identification unit is further used for giving a matching result of real matching or false matching based on a final FCL result of the convolutional neural network.
6. A finger vein recognition device, characterized by: comprising at least one control processor and a memory communicatively connected to the at least one control processor; the memory stores instructions executable by the at least one control processor to enable the at least one control processor to perform the finger vein recognition method as claimed in any one of claims 1 to 3.
7. A computer-readable storage medium, characterized by: the computer-readable storage medium stores computer-executable instructions for causing a computer to perform the finger vein recognition method as claimed in any one of claims 1 to 3.
CN201910986288.7A 2019-10-17 2019-10-17 Finger vein recognition method, device, equipment and storage medium Active CN110738174B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910986288.7A CN110738174B (en) 2019-10-17 2019-10-17 Finger vein recognition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910986288.7A CN110738174B (en) 2019-10-17 2019-10-17 Finger vein recognition method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110738174A CN110738174A (en) 2020-01-31
CN110738174B true CN110738174B (en) 2023-06-16

Family

ID=69269116

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910986288.7A Active CN110738174B (en) 2019-10-17 2019-10-17 Finger vein recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110738174B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111932596B (en) * 2020-09-27 2021-01-22 深圳佑驾创新科技有限公司 Method, device and equipment for detecting camera occlusion area and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529468A (en) * 2016-11-07 2017-03-22 重庆工商大学 Finger vein identification method and system based on convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102483642B1 (en) * 2016-08-23 2023-01-02 삼성전자주식회사 Method and apparatus for liveness test

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106529468A (en) * 2016-11-07 2017-03-22 重庆工商大学 Finger vein identification method and system based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
孟宪静; 袭肖明; 杨璐; 尹义龙. Finger vein recognition method based on gray-level non-uniformity correction and SIFT (基于灰度不均匀矫正和SIFT的手指静脉识别方法). Journal of Nanjing University (Natural Science) (南京大学学报(自然科学)), 2018, (01), pp. 7-16. *

Also Published As

Publication number Publication date
CN110738174A (en) 2020-01-31

Similar Documents

Publication Publication Date Title
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
KR102554724B1 (en) Method for identifying an object in an image and mobile device for practicing the method
CN105335722B (en) Detection system and method based on depth image information
CN111274916B (en) Face recognition method and face recognition device
US11281921B2 (en) Anti-spoofing
CN104732200B (en) A kind of recognition methods of skin type and skin problem
CN109871780B (en) Face quality judgment method and system and face identification method and system
CN110569756A (en) face recognition model construction method, recognition method, device and storage medium
CN110163111A (en) Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face
Kalas Real time face detection and tracking using OpenCV
Anand et al. An improved local binary patterns histograms techniques for face recognition for real time application
US9449217B1 (en) Image authentication
CN112101195B (en) Crowd density estimation method, crowd density estimation device, computer equipment and storage medium
CN110826444A (en) Facial expression recognition method and system based on Gabor filter
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
Rai et al. Software development framework for real-time face detection and recognition in mobile devices
CN110738174B (en) Finger vein recognition method, device, equipment and storage medium
CN111126250A (en) Pedestrian re-identification method and device based on PTGAN
Ali et al. New algorithm for localization of iris recognition using deep learning neural networks
CN110866458A (en) Multi-user action detection and identification method and device based on three-dimensional convolutional neural network
Sikander et al. Facial feature detection: A facial symmetry approach
El-Sayed et al. An identification system using eye detection based on wavelets and neural networks
CN110705352A (en) Fingerprint image detection method based on deep learning
CN112381042A (en) Method for extracting palm vein features from palm vein image and palm vein identification method
CN111259753A (en) Method and device for processing key points of human face

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant