CN110599436A - Binocular image splicing and fusing algorithm - Google Patents
- Publication number: CN110599436A (application CN201910907559.5A)
- Authority
- CN
- China
- Prior art keywords: image, finger vein, finger, images, graph
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T5/50 — Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
- G06V10/255 — Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
- G06V10/449 — Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V40/1388 — Detecting the live character of the finger using image processing
- G06T2207/20221 — Image fusion; image merging
- G06T2207/30101 — Blood vessel; artery; vein; vascular
- G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
- G06V40/14 — Vascular patterns
Abstract
The invention discloses a binocular image splicing and fusing algorithm, comprising the following steps: acquiring two finger vein images, one containing finger-bottom vein information and one containing finger-side vein information, with a binocular-camera finger vein acquisition device; fusing the two finger vein images with an image fusion algorithm to form a finger vein integrated image; performing ROI processing on the finger vein integrated image to obtain its ROI image; performing GABOR filtering on the ROI image to obtain a GABOR image; performing LLBP processing on the GABOR image to obtain an LLBP image; normalizing and binarizing the LLBP image to obtain a first backbone image; and matching the feature values of the first backbone image against the stored backbone images of finger vein binocular images to judge whether they depict the same finger veins. By collecting finger vein images from two sides of the finger, the method increases the information contained in the acquired images, reduces the weight carried by any single finger vein image, increases the robustness and redundancy of the finger vein verification algorithm, and improves the security of the biometric identification technology.
Description
Technical Field
The invention relates to the technical field of finger vein image processing, and in particular to a binocular image splicing and fusing algorithm.
Background
Most finger vein acquisition and verification systems use a single camera to capture the finger vein image and then extract feature values from it. Images captured this way reflect only the planar features of the finger veins and are easily obtained or maliciously destroyed. How to increase the amount of finger vein information collected, and thereby improve the security of finger vein systems, is a problem awaiting an urgent solution.
Disclosure of Invention
The invention provides a binocular image splicing and fusing algorithm, which solves the problems that existing finger vein devices collect vein information in only one direction, so that the amount of collected information is small and the information is easily obtained or maliciously destroyed.
The technical means adopted by the invention are as follows:
A binocular image splicing and fusing algorithm comprises the following steps:
step 1, collecting finger veins with a binocular-camera finger vein acquisition device to obtain 2 finger vein images containing finger-bottom and finger-side vein information respectively;
step 2, fusing the 2 finger vein images with an image fusion algorithm to form a finger vein integrated image;
step 3, performing ROI processing on the finger vein integrated image to obtain its ROI image;
step 4, performing GABOR filtering on the ROI image of the finger vein integrated image to obtain a GABOR image;
step 5, performing LLBP processing on the GABOR image to obtain an LLBP image;
step 6, normalizing and binarizing the LLBP image to obtain a first backbone image;
step 7, matching the feature values of the first backbone image against the stored backbone images of finger vein binocular images, and judging whether they are images of the same finger veins.
Further, the ROI image extraction in step 3 comprises the following steps:
step 3.1, reading the finger vein integrated image;
step 3.2, converting the read image into a single-channel gray-scale image;
step 3.3, extracting the finger edge contour from the gray-scale image to generate the upper and lower finger contour envelopes;
step 3.4, calculating the angle between the center line of the finger edge contour and the longitudinal center line of the image, and rotating the finger contour so that the two center lines coincide;
step 3.5, translating the rotated finger contour so that its center coincides with the center of the image;
step 3.6, scaling the finger contour to fit the image acquisition window;
step 3.7, cropping the image of the fitted finger contour with the set window to generate the ROI image matrix.
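The ROI steps above can be sketched as follows. This is a minimal NumPy sketch under simplifying assumptions (a bright finger on a dark background, an illustrative threshold of 40, and a 64x128 output window); it omits the rotation and centering of steps 3.4-3.5 and is not the patent's implementation.

```python
import numpy as np

def extract_roi(img, thresh=40, out_shape=(64, 128)):
    """Rough ROI extraction (steps 3.1-3.7, simplified).

    Assumes a bright finger on a dark background; the rotation and
    centering of steps 3.4-3.5 are omitted for brevity.
    """
    mask = img > thresh                              # step 3.3: coarse finger region
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("no finger region found")
    crop = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # steps 3.6-3.7: nearest-neighbour scaling into the acquisition window
    h, w = out_shape
    ri = np.arange(h) * crop.shape[0] // h
    ci = np.arange(w) * crop.shape[1] // w
    return crop[np.ix_(ri, ci)]
```

In practice `out_shape` would match the device's acquisition window; both values here are placeholders.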
Further, the GABOR filtering in step 4 uses the GABOR filter in the OpenCV library, and comprises the following steps:
step 4.1, setting the parameters of the GABOR filter, including the filter size, bandwidth, orientation (in radians) and wavelength;
step 4.2, initializing the GABOR filter kernel with the set parameters;
step 4.3, filtering the ROI image in multiple directions with the GABOR filter to obtain a filtered image for each direction, and comparing the filtered images of all directions to obtain the per-pixel minimum filtered image matrix;
step 4.4, normalizing the minimum filtered image matrix to obtain the normalized image matrix.
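A sketch of the multi-direction minimum-response filtering of steps 4.1-4.4. The patent uses OpenCV's Gabor filter; here a plain-NumPy kernel (mirroring the parameters of `cv2.getGaborKernel`) keeps the sketch self-contained, and all parameter values are illustrative assumptions.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    # Real part of a Gabor kernel, mirroring cv2.getGaborKernel's parameters.
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2)) \
        * np.cos(2 * np.pi * xr / lam)

def conv2(img, kernel):
    # Same-size correlation with edge padding.
    half = kernel.shape[0] // 2
    padded = np.pad(img, half, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gabor_min_response(img, ksize=9, sigma=2.0, lam=8.0, n_dirs=4):
    # Steps 4.1-4.4: filter in n_dirs directions, keep the per-pixel
    # minimum response, then normalize to [0, 1].
    thetas = np.arange(n_dirs) * np.pi / n_dirs
    responses = [conv2(img.astype(float), gabor_kernel(ksize, sigma, t, lam))
                 for t in thetas]
    m = np.minimum.reduce(responses)
    m -= m.min()
    return m / m.max() if m.max() > 0 else m
```

Taking the per-pixel minimum favors the orientation where the dark vein lines respond most strongly, which is why the patent compares the directional responses rather than averaging them.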
Further, the LLBP filtering in step 5 uses an LLBP filter built on the OpenCV library, and comprises the following steps:
step 5.1, converting the gray-scale matrix obtained after the GABOR filtering to 32-bit floating point to obtain a floating-point image matrix;
step 5.2, constructing the vertical filtering factor, where M is the number of rows of the GABOR image;
step 5.3, filtering the floating-point image matrix with the vertical filtering factor to obtain the vertically filtered image matrix;
step 5.4, constructing the horizontal filtering factor, where M is the number of rows of the GABOR image;
step 5.5, filtering the floating-point image matrix with the horizontal filtering factor to obtain the horizontally filtered image matrix;
step 5.6, summing the vertically and horizontally filtered image matrices;
step 5.7, calculating the amplitude vector of the summed image matrix;
step 5.8, normalizing the summed image matrix with the calculated amplitude vector to generate a normalized image;
step 5.9, converting the normalized image to gray scale to generate a gray-scale image.
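The LLBP flow above can be sketched as below. The source does not reproduce the exact vertical and horizontal filtering factors of steps 5.2 and 5.4 (they appear as formulas in the original patent), so a simple zero-sum line kernel stands in for them; the sketch follows the overall flow of steps 5.1-5.9 but is not the patent's filter.

```python
import numpy as np

def llbp_like(img, length=13):
    """Line-filtering flow of steps 5.1-5.9.

    The patent's exact filtering factors are not given in the source,
    so a zero-sum line kernel of the given length stands in for them.
    """
    f = img.astype(np.float32)                       # step 5.1: 32-bit float
    half = length // 2
    line = np.ones(length, dtype=np.float32)
    line[half] = -(length - 1)                       # stand-in factor (assumption)
    pad = np.pad(f, ((half, half), (0, 0)), mode="edge")
    vert = sum(line[k] * pad[k:k + f.shape[0], :] for k in range(length))  # step 5.3
    pad = np.pad(f, ((0, 0), (half, half)), mode="edge")
    horz = sum(line[k] * pad[:, k:k + f.shape[1]] for k in range(length))  # step 5.5
    total = vert + horz                              # step 5.6: sum both directions
    mag = np.hypot(vert, horz) + 1e-9                # step 5.7: amplitude
    norm = total / mag                               # step 5.8: normalize by amplitude
    span = np.ptp(norm) + 1e-9
    return (255 * (norm - norm.min()) / span).astype(np.uint8)  # step 5.9: gray scale
```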
Further, generating the backbone image in step 6 comprises the following steps:
step 6.1, binarizing the LLBP image to generate a binarized image matrix;
step 6.2, removing the first and last several rows of the binarized image matrix to reduce noise;
step 6.3, computing the sizes of the connected regions in the binarized image, where regions smaller than a set threshold are small-area region images, and deleting these small-area region images to reduce noise;
step 6.4, converting the binarized image, with the top/bottom noise and small-area regions removed, to gray scale to generate a gray-scale image.
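A sketch of the backbone-generation steps, assuming a fixed binarization threshold of 128, a 4-row margin, and a 20-pixel minimum area (all illustrative values); connected regions are found with a simple 4-connected flood fill.

```python
import numpy as np

def backbone(llbp_img, thresh=128, margin=4, min_area=20):
    # Step 6.1: binarize the LLBP image (threshold is an assumption).
    b = (llbp_img >= thresh).astype(np.uint8)
    # Step 6.2: blank the first and last rows to suppress edge noise.
    b[:margin, :] = 0
    b[-margin:, :] = 0
    # Step 6.3: delete 4-connected regions smaller than min_area.
    h, w = b.shape
    seen = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if b[i, j] and not seen[i, j]:
                seen[i, j] = True
                stack, region = [(i, j)], [(i, j)]
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and b[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                            region.append((ny, nx))
                if len(region) < min_area:
                    for y, x in region:
                        b[y, x] = 0
    # Step 6.4: back to a gray-scale image.
    return (b * 255).astype(np.uint8)
```

In a production pipeline the flood fill would typically be replaced by `cv2.connectedComponentsWithStats`; the loop version is kept here so the sketch has no OpenCV dependency.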
Further, the feature value matching in step 7 comprises the following steps:
step 7.1, scaling down the two acquired backbone images in equal proportion to obtain reduced backbone images;
step 7.2, for each coordinate point, comparing the surrounding 5x5 pixel regions centered on the same point in the two reduced backbone images and counting the number of similar points; if the number of similar points is greater than a set threshold, the two finger vein images are the same image, otherwise they are different images.
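A sketch of the 5x5 neighborhood matching of steps 7.1-7.2 on two already down-scaled backbone maps. The per-window similarity threshold of 0.9 and the global ratio of 0.8 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def same_finger(map_a, map_b, window=5, point_sim=0.9, global_ratio=0.8):
    """Steps 7.1-7.2 on two already down-scaled backbone maps.

    point_sim and global_ratio are illustrative thresholds.
    """
    assert map_a.shape == map_b.shape
    half = window // 2
    pa = np.pad(map_a.astype(int), half, mode="edge")
    pb = np.pad(map_b.astype(int), half, mode="edge")
    h, w = map_a.shape
    similar = 0
    for i in range(h):
        for j in range(w):
            wa = pa[i:i + window, j:j + window]   # 5x5 region around (i, j) in map A
            wb = pb[i:i + window, j:j + window]   # same region in map B
            if (wa == wb).mean() >= point_sim:
                similar += 1                      # step 7.2: count similar points
    return similar > global_ratio * h * w         # same finger if enough points agree
```

Comparing neighborhoods rather than single pixels gives the match some tolerance to small misalignments between the two backbone maps, which is the motivation for the 5x5 window in step 7.2.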
Compared with the prior art, the binocular image splicing and fusing algorithm of the invention has the following beneficial effects: by collecting finger vein images from the front and the side of the finger and fusing the two images, the method increases the information contained in the acquired images, reduces the weight carried by any single finger vein image, increases the robustness and redundancy of the finger vein authentication algorithm, and improves the security of the biometric identification technology.
Drawings
FIG. 1 is a flow chart of a binocular image stitching fusion algorithm disclosed by the present invention;
FIG. 2 is a flowchart of the ROI map extraction step;
FIG. 3 is a flow chart of GABOR filtering;
FIG. 4 is a flow chart of LLBP filtering;
FIG. 5 is a flow chart of generating a backbone diagram;
fig. 6 is a flow chart of feature value matching.
Detailed Description
As shown in fig. 1, the binocular image splicing and fusing algorithm disclosed by the invention comprises the following steps:
step 1, collecting finger veins with a binocular-camera finger vein acquisition device to obtain 2 finger vein images containing finger-bottom and finger-side vein information respectively;
step 2, fusing the 2 finger vein images to form a finger vein integrated image;
step 3, performing ROI processing on the finger vein integrated image to obtain its ROI image;
step 4, performing GABOR filtering on the ROI image of the finger vein integrated image to obtain a GABOR image;
step 5, performing LLBP processing on the GABOR image to obtain an LLBP image;
step 6, normalizing and binarizing the LLBP image to obtain a first backbone image;
step 7, matching the feature values of the first backbone image against the stored backbone images of finger vein binocular images, and judging whether they are images of the same finger veins. The stored backbone images are obtained in the same way as the first backbone image, i.e. by repeating steps 1 to 6.
Fig. 2 shows the flowchart of ROI image extraction, which comprises the following steps:
step 3.1, reading the finger vein integrated image;
step 3.2, converting the read image into a single-channel gray-scale image;
step 3.3, extracting the finger edge contour from the gray-scale image to generate the upper and lower finger contour envelopes;
step 3.4, calculating the angle between the center line of the finger edge contour and the longitudinal center line of the image, and rotating the finger contour so that the two center lines coincide;
step 3.5, translating the rotated finger contour so that its center coincides with the center of the image;
step 3.6, scaling the finger contour to fit the image acquisition window;
step 3.7, cropping the image of the fitted finger contour with the set window to generate the ROI image matrix.
As shown in fig. 3, the GABOR filtering in step 4 uses the GABOR filter in the OpenCV library, and comprises the following steps:
step 4.1, setting the parameters of the GABOR filter, including the filter size, bandwidth, orientation (in radians) and wavelength;
step 4.2, initializing the GABOR filter kernel with the set parameters;
step 4.3, filtering the ROI image in multiple directions with the GABOR filter to obtain a filtered image for each direction, and comparing the filtered images of all directions to obtain the per-pixel minimum filtered image matrix;
step 4.4, normalizing the minimum filtered image matrix to obtain the normalized image matrix.
As shown in fig. 4, the LLBP filtering in step 5 uses an LLBP filter built on the OpenCV library, and comprises the following steps:
step 5.1, converting the gray-scale matrix obtained after the GABOR filtering to 32-bit floating point to obtain a floating-point image matrix;
step 5.2, constructing the vertical filtering factor, where M is the number of rows of the GABOR image;
step 5.3, filtering the floating-point image matrix with the vertical filtering factor to obtain the vertically filtered image matrix;
step 5.4, constructing the horizontal filtering factor, where M is the number of rows of the GABOR image;
step 5.5, filtering the floating-point image matrix with the horizontal filtering factor to obtain the horizontally filtered image matrix;
step 5.6, summing the vertically and horizontally filtered image matrices;
step 5.7, calculating the amplitude vector of the summed image matrix;
step 5.8, normalizing the summed image matrix with the calculated amplitude vector to generate a normalized image;
step 5.9, converting the normalized image to gray scale to generate a gray-scale image.
As shown in fig. 5, generating the backbone image in step 6 comprises the following steps:
step 6.1, binarizing the LLBP image to generate a binarized image matrix;
step 6.2, removing the first and last several rows of the binarized image matrix to reduce noise;
step 6.3, computing the sizes of the connected regions in the binarized image, where regions smaller than a set threshold are small-area region images, and deleting these small-area region images to reduce noise;
step 6.4, converting the binarized image, with the top/bottom noise and small-area regions removed, to gray scale to generate a gray-scale image.
As shown in fig. 6, the feature value matching in step 7 comprises the following steps:
step 7.1, scaling down the two acquired backbone images in equal proportion to obtain reduced backbone images;
step 7.2, for each coordinate point, comparing the surrounding 5x5 pixel regions centered on the same point in the two reduced backbone images and counting the number of similar points; if the number of similar points is greater than a set threshold, the two finger vein images are the same image, otherwise they are different images.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited thereto; any equivalent substitution or change made by a person skilled in the art, within the technical scope disclosed by the present invention and according to its technical solutions and inventive concept, shall fall within the scope of protection of the present invention.
Claims (6)
1. A binocular image splicing and fusing algorithm, characterized by comprising the following steps:
step 1, collecting finger veins with a binocular-camera finger vein acquisition device to obtain 2 finger vein images containing finger-bottom and finger-side vein information respectively;
step 2, fusing the 2 finger vein images with an image fusion algorithm to form a finger vein integrated image;
step 3, performing ROI processing on the finger vein integrated image to obtain its ROI image;
step 4, performing GABOR filtering on the ROI image of the finger vein integrated image to obtain a GABOR image;
step 5, performing LLBP processing on the GABOR image to obtain an LLBP image;
step 6, normalizing and binarizing the LLBP image to obtain a first backbone image;
step 7, matching the feature values of the first backbone image against the stored backbone images of finger vein binocular images, and judging whether they are images of the same finger veins.
2. The binocular image splicing and fusing algorithm of claim 1, characterized in that the ROI image extraction in step 3 comprises the following steps:
step 3.1, reading the finger vein integrated image;
step 3.2, converting the read image into a single-channel gray-scale image;
step 3.3, extracting the finger edge contour from the gray-scale image to generate the upper and lower finger contour envelopes;
step 3.4, calculating the angle between the center line of the finger edge contour and the longitudinal center line of the image, and rotating the finger contour so that the two center lines coincide;
step 3.5, translating the rotated finger contour so that its center coincides with the center of the image;
step 3.6, scaling the finger contour to fit the image acquisition window;
step 3.7, cropping the image of the fitted finger contour with the set window to generate the ROI image matrix.
3. The binocular image splicing and fusing algorithm of claim 1, characterized in that the GABOR filtering in step 4 uses the GABOR filter in the OpenCV library and comprises the following steps:
step 4.1, setting the parameters of the GABOR filter, including the filter size, bandwidth, orientation (in radians) and wavelength;
step 4.2, initializing the GABOR filter kernel with the set parameters;
step 4.3, filtering the ROI image in multiple directions with the GABOR filter to obtain a filtered image for each direction, and comparing the filtered images of all directions to obtain the per-pixel minimum filtered image matrix;
step 4.4, normalizing the minimum filtered image matrix to obtain the normalized image matrix.
4. The binocular image splicing and fusing algorithm of claim 1, characterized in that the LLBP filtering in step 5 uses an LLBP filter built on the OpenCV library and comprises the following steps:
step 5.1, converting the gray-scale matrix obtained after the GABOR filtering to 32-bit floating point to obtain a floating-point image matrix;
step 5.2, constructing the vertical filtering factor, where M is the number of rows of the GABOR image;
step 5.3, filtering the floating-point image matrix with the vertical filtering factor to obtain the vertically filtered image matrix;
step 5.4, constructing the horizontal filtering factor, where M is the number of rows of the GABOR image;
step 5.5, filtering the floating-point image matrix with the horizontal filtering factor to obtain the horizontally filtered image matrix;
step 5.6, summing the vertically and horizontally filtered image matrices;
step 5.7, calculating the amplitude vector of the summed image matrix;
step 5.8, normalizing the summed image matrix with the calculated amplitude vector to generate a normalized image;
step 5.9, converting the normalized image to gray scale to generate a gray-scale image.
5. The binocular image splicing and fusing algorithm of claim 1, characterized in that generating the backbone image in step 6 comprises the following steps:
step 6.1, binarizing the LLBP image to generate a binarized image matrix;
step 6.2, removing the first and last several rows of the binarized image matrix to reduce noise;
step 6.3, computing the sizes of the connected regions in the binarized image, where regions smaller than a set threshold are small-area region images, and deleting these small-area region images to reduce noise;
step 6.4, converting the binarized image, with the top/bottom noise and small-area regions removed, to gray scale to generate a gray-scale image.
6. The binocular image splicing and fusing algorithm of claim 1, characterized in that the feature value matching in step 7 comprises the following steps:
step 7.1, scaling down the two acquired backbone images in equal proportion to obtain reduced backbone images;
step 7.2, for each coordinate point, comparing the surrounding 5x5 pixel regions centered on the same point in the two reduced backbone images and counting the number of similar points; if the number of similar points is greater than a set threshold, the two finger vein images are the same image, otherwise they are different images.
Priority Applications (1)
- CN201910907559.5A — priority/filing date 2019-09-24 — Binocular image splicing and fusing algorithm
Publications (1)
- CN110599436A — published 2019-12-20
Family
- ID: 68862883
- Family application: CN201910907559.5A (priority/filing date 2019-09-24), status Pending
- Country: CN
Citations (4)
- CN104414620A (priority 2013-08-23) — Binocular camera based vein positioning method and device
- CN107248137A (priority 2017-04-13) — Method and mobile terminal for realizing image processing
- WO2018032861A1 (priority 2016-08-17) — Finger vein recognition method and device
- CN109886220A (priority 2019-02-26) — Feature value extraction and comparison algorithm for finger vein images
Legal Events
- PB01 — Publication (application publication date: 2019-12-20)
- SE01 — Entry into force of request for substantive examination
- RJ01 — Rejection of invention patent application after publication