CN117542090B - Palm print vein recognition method based on fusion network and SIF characteristics - Google Patents

Palm print vein recognition method based on fusion network and SIF characteristics Download PDF

Info

Publication number
CN117542090B
CN117542090B CN202311584446.9A CN202311584446A CN117542090B CN 117542090 B CN117542090 B CN 117542090B CN 202311584446 A CN202311584446 A CN 202311584446A CN 117542090 B CN117542090 B CN 117542090B
Authority
CN
China
Prior art keywords
palm print
formula
pixel
vein
palmprint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311584446.9A
Other languages
Chinese (zh)
Other versions
CN117542090A (en
Inventor
薛喜柱
林本聪
麻亚翰
张俊杰
刘兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yimaitong Shenzhen Intelligent Technology Co ltd
Original Assignee
Yimaitong Shenzhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yimaitong Shenzhen Intelligent Technology Co ltd filed Critical Yimaitong Shenzhen Intelligent Technology Co ltd
Priority to CN202311584446.9A priority Critical patent/CN117542090B/en
Publication of CN117542090A publication Critical patent/CN117542090A/en
Application granted granted Critical
Publication of CN117542090B publication Critical patent/CN117542090B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1365Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Vascular Medicine (AREA)
  • Collating Specific Patterns (AREA)

Abstract

Compared with the previous single recognition and template matching mode, the invention performs integrated analysis on the palm print and vein recognition method and collects the fusion characteristics of main and obvious characteristics so as to reduce the requirement on equipment. The invention provides a full SIF palm print vein feature mode, which can completely avoid the failure of detection of an algorithm network model caused by the angle shrinkage and enlargement of a palm acquired picture. A palm print vein recognition mode based on a fusion network decides an algorithm model more suitable for the current detection environment through a competition cooperation principle, and introduces balance consideration between time and accuracy, so that the detection speed of the whole network is higher, the requirements on equipment are lower, and the accuracy can be sufficiently ensured.

Description

Palm print vein recognition method based on fusion network and SIF characteristics
Technical Field
The invention relates to the field of machine learning image recognition, in particular to a palm print vein recognition method based on a fusion network and SIF features.
Background
At present, palm print recognition and vein recognition technologies have high requirements on image acquisition equipment. Palmprint recognition requires ultra-high resolution to capture fine texture information, while vein recognition requires a suitable infrared light source to acquire reliable vein images. In addition, the quality of the generated image directly affects the accuracy and reliability of the identification. These high requirements result in palm print recognition and vein recognition being far less popular and practical than face detection, etc. More seriously, most palmprint and venous image recognition require matching by means of templates, and use of deep convolutional networks to extract higher order features. This not only places high demands on the device, but also requires a large amount of computing resources to accelerate the recognition speed. These factors have severely hampered the popularity of palm print recognition and vein recognition techniques. Therefore, a recognition optimization method with strong reliability, high recognition speed and low equipment requirement is urgently needed at present. The method can quickly identify under the common image acquisition equipment, and supports wider application scenes.
Disclosure of Invention
In order to solve the problems, the palm print vein recognition method based on the fusion network and SIF features comprises the following steps:
1) Image acquisition
Firstly, collecting and characterizing data by using acquisition equipment, wherein the data comprises palm print pictures and vein pictures, the whole area is a palm, and branch points of the palm print and the vein part are captured in an emphasized manner in the image acquisition process, so that palm images with multiple angles or multiple amplification factors are not required to be acquired;
2) Region division
Preprocessing the acquired image, firstly, performing gray processing on the acquired palm image, converting a color image into a gray image, and primarily extracting palm print characteristics by adopting CANNY EDGE Detector filtering algorithm;
3) Palm print SIF feature extraction
In the step 2), filtering of fine palmprint and preliminary filtering of the palmprint are completed, discrete noise points exist in the palmprint, a vectorization noise reduction optimization method is adopted, a shallow VGG network is adopted to conduct a palmprint number classification task on the noise optimized image, after the algorithm network of prediction evaluation of the palmprint number is completed, the length ratio among the palmprint is calculated, the pixel radian value of each palmprint is represented, wherein before specific characteristics are represented, interval sampling and classification are needed to be conducted on the pixel points among the palmprint according to an interval formula and a classification formula respectively, and in addition, a palmprint length ratio formula, a pixel radian value calculation formula and a fine palmprint length ratio extraction formula are adopted to conduct palmprint SI feature extraction respectively;
4) Vein feature extraction map
In the step, the extraction of the SIF features of the veins is completed, the vein feature collection only needs to identify branch points of the palm, and the minute veins are not required for the branch points;
5) Converged network decision making
In the step, firstly, SVM multi-classification is adopted to perform multi-classification operation on palm print SIF and vein SIF characteristics, wherein a target mapping value is ID code of a detector in a database, a SoftMax is adopted to output probability values of all labels, a balance formula is adopted to balance the two results, and in addition, an optimal result of a single characteristic is obtained and then output.
Preferably, the intermediate formula in the step 3) is expressed as:
Wherein, the interval formula is expressed as follows:
xl=k*α,k=1,2,3,…,
Wherein x l is the abscissa value of the taken section line, and alpha is the hyper-parameter, namely the horizontal distance between each section line; For the values of all the pixels on the section line, only when the value is larger than 0, the pixel belongs to a certain main palm line; k is the number of times taken by the interval, the abscissa of all the stubs needs to be kept within the image range.
The categorization formula in the step 3) is expressed as follows:
wherein, the categorization formula is expressed as follows:
wherein the method comprises the steps of When the first time of the line is divided, i pixel points needing to be classified are divided,/>When the line is divided from the first-1 th line, the upper ordinate value of the j th main palm print,/>When the first-1 level line is divided, the j+1th main palm print has an upper longitudinal coordinate value, beta is a height difference super parameter, the picture with the denser main palm print is filtered when the set value gradually becomes smaller, the sparse main palm print picture is filtered when the value gradually becomes larger, and M represents the number of main palm prints;
The main palm print length ratio formula in the step 3) is as follows:
The length ratio of the main palmprint is calculated firstly, namely the ratio of the length of one main palmprint to the length of the rest palmprint, and the length ratio of the main palmprint is expressed as:
Wherein ph is a set, in which the length ratio of the h-th main palm print to other main palm prints is the ratio of the number of pixels contained in the h-th main palm print to the number of pixels contained in other main palm prints, wherein the number of pixels contained in the h-th main palm print can be obtained directly through the number of pixels classified in each main palm print in a classification formula, and M is the number of main palm prints contained in an image;
The pixel radian value calculation formula in the step 3) is expressed as follows:
the pixel radian value calculation formula is expressed as follows:
e=1,2,3,4,…,
Wherein, θ e is the pixel radian value when the pixel radian value is performed, the left end point is the pixel radian value when the e pixel point is performed, x e,xe+1,xe+2 is the abscissa value of the pixel point when e, e+1, e+2 are respectively performed, y e,ye+1,ye+2 is the ordinate value of the pixel point when e, e+1, e+2 are respectively performed, and the angle value on the master palm print can be calculated by the above formula;
The radian value calculation formula calculates an angle value of a radian system, wherein the angle value of the radian system mainly calculates a triangle-like shape formed by every three pixel points, wherein the angle value of an included angle where a middle pixel point is positioned is calculated by respectively calculating the distance between a left vertex pixel point coordinate and a middle pixel point coordinate and the distance between a right vertex pixel point and the middle pixel point coordinate, then calculating the pixel coordinate distance between the left vertex coordinate and the right vertex coordinate, and calculating the radian system angle value of the middle pixel point by a cosine theorem and an inverse trigonometric function;
The fine palmprint length ratio extraction formula in the step 3) is expressed as follows:
the feature extraction of the main palm print is completed in the steps, the feature extraction is performed on the fine palm print, and the characterization is mainly performed on the total length of the fine palm print and the total length of the main palm print, wherein the length ratio extraction formula of the fine palm print is as follows:
The Pixel Small is the ratio of the number of the total pixels occupied by the fine palm print to the number of the pixels contained in the main palm print, the total pixels in which the total pixels are equal to 1, the Pixel total is the number of the total pixels in the picture, the Pixel r is the number of the total pixels contained in the r-th main palm print, and the ratio between the length of the fine palm print of the person to be detected and the main palm print is calculated by the formula.
Preferably, the filtering function in the step 4) is expressed as:
Firstly, carrying out binarization processing on the acquired vein picture, converting all pixel points of an area outside veins into 0, converting all vein areas into 1, and secondly, adopting a filtering operator to an activation function in the algorithm, and rejecting vein paths, wherein the filtering function is expressed as follows:
Conv21=[1,1,1,1,1,1,1,1]
5<γ≤9
The filtering operator adopted by the Conv2 1 filtering layer is characterized in that the Feature map_G is a converted area, the Feature map_1 is an original convolution area, gamma is a set filtering operator, when gamma gradually becomes larger, vein paths and crossing points are filtered more strictly, when the number of bright points of the convolved area is not more, all the bright points are directly classified as 0, when the convolved area is more than 1, all the bright points are directly set as 1, wherein the convolution interval of the filtering operator is 2 each time.
The extraction function in the step 4) is expressed as follows:
In the above formula, filtering and removing of the crossing point and the vein and enhancing of the branching point are completed, and extraction is directly performed on the branching point in the following, wherein the extraction function is expressed as follows:
Conv22=[0.5,0.5,0.5,0.5,2,0.5,0.5,0.5]
Wherein Conv2 2 is an extraction operator, feature Map-2 is an image convolved by a filtering operator, res is a determination result, when the result is determined to be True, the region is a branching point, otherwise, the region is not a branching point, through the steps, the number of branching point regions in the image is extracted as one of vein features, and the center point coordinates of each branching point region are taken to calculate the ratio of distances between the branching point region and the left and right end points of each main palm print, so that m×c Feature values can also be collected, wherein M is the number of main palm prints, and C is the number of branching points.
Preferably, the balance formula in the step 5) is expressed as:
Firstly, performing multi-classification operation on palm print SIF and vein SIF characteristics by adopting SVM multi-classification, wherein a target mapping value is used for encoding ID of a detector in a database, and outputting probability values of all labels by adopting SoftMax, and balancing by utilizing detection accuracy based on palm print and vein SIF characteristic balancing formulas, wherein the balancing formulas are expressed as follows:
θ12=1
Wherein, θ 12 is the weight of the SVM classifier A and the SVM classifier B to the single feature result prediction result, arc A and arc_B are the correctness of the two classifiers, and the weight values of the two classifiers can be automatically balanced according to the correctness by the formula;
The conditional formula in the step 5) is expressed as follows:
Wherein the conditional formula is expressed as follows:
Wherein Arc conv,Arcsvm is the accuracy of the convolutional network and the accuracy of the single eigenvalue, time conv is the practice spent by the convolutional network, time SVM is the Time spent by the SVM classifier, ω is the Time tolerance, when taking higher values, the improvement of the accuracy of the convolutional network is faster than the increase of the Time spent, the Time spent by the user is detected more carelessly, and when taking lower values ω indicates that higher accuracy is required, and the Time constraint condition is a secondary condition. Compared with the prior art, the invention has the beneficial effects that:
1. According to the palm print vein recognition method based on the fusion network and the SIF features, the palm print vein features are changed into SIF feature parameters through direct collection or conversion, so that the hardware requirements of image equipment are reduced, error rates caused by factors such as image angles, scaling and the like are avoided, the overall calculation speed and accuracy are improved, and the economic cost is reduced.
2. The palm print vein recognition method based on the fusion network and the SIF features adopts a SIF feature value matching mode, avoids a template matching or deep convolution network which is needed in the traditional network, greatly reduces calculation resources and increases palm print vein recognition.
3. According to the palm print vein recognition method based on the fusion network and the SIF features, the palm print and the vein are comprehensively analyzed and weighed according to the accuracy of the palm print and the vein, so that double-line recognition of the palm print and the vein can be achieved at the same time, and the fault tolerance of the network to recognition of a detected person is increased.
4. According to the palm print vein recognition method based on the fusion network and the SIF features, provided by the invention, whether the fusion network is adopted for analysis is judged by adopting a conditional formula, the time cost and the accuracy are considered simultaneously, a choice can be made in the detection speed and the detection accuracy, and the use scene of the algorithm is increased.
Drawings
FIG. 1 is a flowchart of a palmprint vein recognition method based on a converged network and SIF features provided in accordance with the present invention;
FIG. 2 is a palmprint processing flow chart of a palmprint vein recognition method based on a converged network and SIF features provided in accordance with the present invention;
FIG. 3 is a flowchart of palm print feature extraction for a palm print vein recognition method based on a converged network and SIF features according to the present invention;
FIG. 4 is a schematic diagram of pixel radian values of a palm print vein recognition method based on a fusion network and SIF features according to the present invention;
FIG. 5 is a schematic vein diagram of a palmprint vein recognition method based on a fusion network and SIF features according to the present invention;
FIG. 6 is a flowchart for extracting the characteristic of a branch point of a palm print vein recognition method based on a fusion network and SIF characteristics, which is provided by the invention;
Fig. 7 is a flowchart of a fused network of vein branch points based on a fused network and a palm print vein recognition method of SIF features according to the present invention.
Detailed Description
The invention is described in further detail below with reference to fig. 1-7 and the detailed description of the embodiments:
Fig. 1 is a flowchart of a palmprint vein recognition method based on a fusion network and SIF features provided by the invention.
Step S1: and (5) image acquisition.
In this step, the data is first collected and characterized using an acquisition device, including palmprint and venous images, with the entire area being the palm. In the image acquisition process, branch points of palmprint and vein parts are captured in an emphasized mode, and palm images with multiple angles or multiple magnifications are not required to be acquired.
Step S2: and (5) dividing areas.
Fig. 2 is a flowchart of palmprint processing of the palmprint vein recognition method based on the fusion network and SIF features provided by the invention.
In step S1, a certain amount of palm print images are acquired using an acquisition device. In this step, a preprocessing operation is required for these images, as shown in fig. 2. Firstly, gray processing operation is carried out on the acquired palm picture, and a color image is converted into a gray image, so that the subsequent processing steps are simplified. Second, the present invention differs from previous patents in that it employs a histogram or other deep convolutional network. A CANNY EDGE Detector filtering algorithm is adopted as a palm print feature primary extraction mode. By setting a proper filtering strength value, the filtering strictness degree is adjusted. If the filtering value is set to be low, fine palmprint is still remained. If a higher threshold is set, only the dominant palm print features will be apparent. Compared with the traditional deep convolution network which needs higher calculation resources and more time, the processing mode has the advantages of lower calculation force requirement and faster processing speed although missing for particularly fine palmprints, and the SIF characteristic-based analysis can achieve faster speed and higher accuracy under the condition that particularly fine palmprints are not considered.
Step S3: and extracting palm print SIF features.
Fig. 3 is a flowchart of extracting palm print features of the palm print vein recognition method based on the fusion network and SIF features.
Fig. 4 is a schematic diagram of pixel radian values of a palm print vein recognition method based on a fusion network and SIF features according to the present invention.
The invention provides a multi-layer palm print characteristic filter network based on a primary principle of a main palm print. The first layer network is a main palm print filter layer, and main filter characteristics are considered to be the number of main palm prints and the length ratio of each main palm print, and the radian value of each main palm print pixel. All three characteristic values are characteristic values which are irrelevant to the angle and the size of the picture, namely Scale-INVARIANT FEATURES (SIF).
In step S2, filtering of fine palmprint and preliminary filtering of the palmprint are completed, but more discrete noise points still exist near the palmprint, so that the vectorization noise reduction optimization method is adopted in the invention, and noise reduction with lower density is selected in the horizontal direction, so that the horizontal line is continuous, and noise reduction with high strength is selected in the vertical direction, so that details in the direction are kept as one point; as shown in fig. 2, the vector noise reduction is performed to extract a curve composed of pixels with a value of 1 from the main palm print.
In the present invention, as shown in fig. 3, for the measurement of the number of the master palmprint, since the vectorized noise-reduced master palmprint image is simple and is a line composed of several continuous and uninterrupted pixel points with a value of 1, the shallow VGG network can be directly used for classifying tasks, for example, VGG16, where the target mapping value is the number M of the master palmprints. After the VGG16 transfer learning is completed, the prediction evaluation of higher accuracy on the specific number of the master palmprint in the vectorized and noise-reduced master palmprint image can be completed.
After the algorithm network for predicting and evaluating the number of the main palms is built, the length ratio among the main palms, the pixel radian value of each main palms, and the like, are required to be characterized, wherein before specific characteristics are characterized, the pixel points among the main palms are required to be sampled and classified at intervals.
Wherein, the interval formula is expressed as follows:
xl=k*α,k=1,2,3,…,
Wherein x l is the abscissa value of the taken section line, and alpha is the hyper-parameter, namely the horizontal distance between each section line; For the value of all pixels on this line, only when it is greater than 0, the pixel belongs to a main palm line. k is the number of times taken by the interval, it is noted that the abscissa of all the stubs needs to be kept within the image.
Wherein the categorization formula is expressed as follows:
In the two formulas, the pixel points truly belonging to the main palmprint can be collected and filtered by utilizing the transverse cross line, but the filtered pixel points cannot be judged to which main line the filtered pixel points belong to, so the invention adopts a classification formula to judge
Wherein, the categorization formula is expressed as follows:
wherein the method comprises the steps of When the first time of the line is divided, the i pixel point to be classified is the i pixel point. /(I)When the line is divided from the first-1 th line, the upper ordinate value of the j th main palm print,/>When the line is divided into the first-1 level section line, the j+1th main palm print has its upper longitudinal coordinate value, beta is the super parameter of the height difference, a smaller value can be set for filtering the denser picture of the main palm print, and a higher value can be used for filtering the sparser main palm print. M represents the number of master palmprints.
Each batch of pixel points segmented by the truncated line can be classified by the above formula, wherein it should be noted that the position of the coordinate point classified by the initial selection needs to satisfy formula 4, that is, the height difference between each truncated pixel point needs to reach a certain value, as shown in fig. 3, the truncated line on the left cannot satisfy the requirement, the height difference on the right is obvious, and the number of truncated palmprints is consistent with that in the VGG algorithm network, so that the truncated palmprint can be used as the initial classified coordinate point. Then, continuously intercepting palmprint coordinates through a formula, judging the intercepted coordinates by the formula, namely judging coordinate points intercepted at the last time and last time, comparing pixels intercepted at the last time with pixels of the palmprint, classifying the pixels with the point with the smallest height difference value with the last time, and further completing classification of all pixels, namely which palmprint the pixels belong to.
After the pixel points on the main palmprint are sampled and classified at intervals, the characteristic extraction work of the main palmprint can be performed, wherein the characteristic extraction work comprises the length ratio between the main palmprint and the pixel radian value of each main palmprint.
The length ratio of the main palmprint is calculated firstly, namely the ratio of the length of one main palmprint to the length of the rest palmprint, and the length ratio of the main palmprint is expressed as:
wherein ph is a set, and the length ratio of the h-th main palm print to the other main palm prints is defined, that is, the ratio of the number of pixels contained in the h-th main palm print to the number of pixels contained in the other main palm prints, where the number of pixels contained in the h-th main palm print can be obtained directly by the number of pixels classified in each main palm print in the classification formula, for example, 20 pixel coordinate points (including abscissa and ordinate) are obtained in the 1 st main palm print by the classification formula, and then the number of pixels is 20.M is the number of dominant palm prints contained in the image. Because the vectorization noise reduction is adopted in the steps, the number of the pixel points contained in each main palm print can be calculated through the formula, and then the length ratio is directly replaced by the ratio of the number of the pixel points.
The number of pixels included in the h-th main palm print can be directly determined by the number of pixels classified in each main palm print in the classification formula. For example, 20 pixel coordinate points (including an abscissa and an ordinate) are obtained in the 1 st main palm print through the classification formula, and then the number of the pixel points of the main palm print is 20.
The pixel radian value calculation formula is expressed as follows:
e=1,2,3,4,…,
Wherein, θ e is the pixel radian value when the pixel radian value is performed, the left end point is the pixel radian value when the e pixel point is performed, x e,xe+1,xe+2 is the abscissa value of the pixel point when e, e+1, e+2 are performed, and y e,ye+1,ye+2 is the ordinate value of the pixel point when e, e+1, e+2 are performed. The angle value of the main palm print can be calculated through the formula, specifically, the angle value of the radian system is calculated through the radian value calculation formula, the angle value of the radian system is mainly calculated as a triangle-like shape formed by every three pixel points, the angle value of an included angle where the middle pixel point is positioned is calculated through respectively calculating the distance between the coordinates of the left vertex pixel point and the coordinates of the middle pixel point and the distance between the coordinates of the right vertex pixel point and the coordinates of the middle pixel point, then calculating the pixel coordinate distances between the coordinates of the left vertex point and the coordinates of the right vertex point, and calculating the radian system angle value of the middle pixel point through a cosine theorem and an inverse trigonometric function;
The feature extraction of the main palmprint is completed in the above, the feature extraction is performed on the fine palmprint, the total length of the fine palmprint and the total length of the main palmprint are characterized in the invention, and the length ratio extraction formula of the fine palmprint is as follows:
The Pixel Small is the ratio of the number of the total pixels occupied by the fine palm print to the number of the pixels contained in the main palm print, the total pixels in which the total pixels are equal to 1, the Pixel total is the number of the total pixels in the picture, and the Pixel r is the number of the total pixels contained in the r main palm print, which is equal to 1. The ratio between the length of the fine palmprint and the main palmprint of the person to be detected can be calculated through the formula.
In this step, the feature extraction of the SI F of the palmprint is completed, including the number of palmprint, the length ratio between palmprint, and the pixel radian value of palmprint. In the traditional palm print identification method, a contrast area is usually extracted by utilizing an ROI, then template matching, deep convolution network analysis and the like are carried out on the lines, but the requirements of the pattern on palm print photos are very high in the mode, wherein the requirements comprise angles, amplification factors and the like, and once angle correction occurs, template matching and the accuracy of the deep convolution network can directly show cliff-breaking drop when the multiple correction is inaccurate. The invention firstly provides that parameters irrelevant to the angle, the size and the like of the picture are collected for the characteristic options of the main palm print, or the parameters originally related to the picture characteristics are converted into SIF characteristics. Therefore, the method has low requirement on equipment extraction of palm print characteristics, can well characterize the characteristics of detected personnel no matter what angle the acquired image is, and further greatly increases the stability of network identification of the algorithm of the follow-up step and the like.
Step S4: vein SIF feature extraction map.
Fig. 5 is a schematic vein diagram of a palm print vein recognition method based on a fusion network and SIF features according to the present invention.
Fig. 6 is a flowchart of extracting features of a branch point of a palm print vein recognition method based on a fusion network and SIF features.
Extraction of the palm print SIF features is completed in step S3, which is required to be completed. The invention has the requirements on vein special acquisition equipment, and only needs to identify branch points of palm parts. For bifurcation, fine pulses are not required, as shown in FIG. 5.
The invention provides an extraction operation for branch points based on double verification, wherein the double verification is divided into a filtering stage and an extraction stage, the former is mainly used for filtering a pulse path, and the latter is used for intercepting the branch points.
As shown in fig. 6, the acquired vein image is firstly binarized, and all pixels in the area outside the vein are converted into 0, and all vein areas are converted into 1. Secondly, the algorithm firstly adopts a filtering operator and an activating function to reject the vein, wherein the filtering function is expressed as follows:
Conv21=[1,1,1,1,1,1,1,1]
5<γ≤9
The filtering operator adopted by the Conv2 1 filtering layer is characterized in that the Feature map_G is a converted area, the Feature map_1 is an original convolution area, gamma is a set filtering operator, when gamma gradually becomes larger, vein paths and crossing points are filtered more strictly, when the number of bright points of the convolved area is not more, all the bright points are directly classified as 0, when the convolved area is more than 1, all the bright points are directly set as 1, wherein the convolution interval of the filtering operator is 2 each time.
The original Feature Map-1 refers to an original image subjected to binarization, the image subjected to binarization is filtered through a filtering operator, when a value obtained through calculation of one filtering operator is found to be higher than a threshold value, a suspected branch point can be judged, then further extraction is carried out through an extraction operator in an extraction function, and the image Feature Map-2 convolved by the filtering operator is convolved again to extract all the branch points.
The extraction function in the step 4) is expressed as follows:
In the above formula, filtering and removing of the crossing point and the vein and enhancing of the branching point are completed, and extraction is directly performed on the branching point in the following, wherein the extraction function is expressed as follows:
Conv22=[0.5,0.5,0.5,0.5,2,0.5,0.5,0.5]
Wherein Conv2 2 is an extraction operator, feature Map-2 is an image convolved by a filtering operator, res is a judgment result, when the result is judged to be True, the region is a branching point, otherwise, the region is not a branching point. Through the steps, the number of branch point areas in the image is extracted as one of vein features, and the center point coordinates of each branch point area are taken to calculate the ratio of the distance between the branch point areas and the left and right end points of each main palm print, so that M x C feature values can be collected, wherein M is the number of the main palm prints, and C is the number of the branch points.
Step S5: and (5) merging network judgment.
Fig. 7 is a flowchart of a fused network of vein branch points in the palm print vein recognition method based on the fused network and SIF characteristics.
In step S3, S4, SIF features of the palmprint and the vein are extracted respectively, which are the number of palmprints, the length ratio of palmprints, the radian value of the pixel point of palmprint, the length ratio of fine palmprint, the number of branch points of vein, and the distance ratio around the branch points of vein. These features are all independent of image characteristics, which not only reduces the requirements for equipment acquisition, but also reduces the influence of algorithm accuracy and picture quality in this step.
The specific network architecture is shown in fig. 7, and the multi-classification operation is performed on the palm print SIF and the vein SIF by adopting SVM multi-classification, wherein the target mapping value is the ID code of the inspector in the database, and the probability value of each label is output by adopting SoftMax. The invention provides a balancing formula based on palmprint and vein SIF characteristics, which is used for balancing by detecting the correct rate, wherein the balancing formula is expressed as follows:
θ12=1
wherein, θ 12 is the weight of the SVM classifier A and the SVM classifier B to the single feature result prediction result, and Arc A and arc_B are the correctness of the two classifiers. The weight values of the two can be automatically balanced according to the accuracy of the formula.
After the optimal result output of the single characteristic is obtained, the regional analysis is carried out on all characteristic values by adopting a convolution network, but the convolution network needs to judge the accuracy and the consumed time, and only if a condition formula is met, the fusion overall analysis is carried out.
Wherein the conditional formula is expressed as follows:
Wherein Arc conv,Arcsvm is the accuracy of the convolutional network and the accuracy of the single eigenvalue, time conv is the practice spent by the convolutional network, and Time SVM is the Time spent by the SVM classifier, respectively. ω is time tolerance, and when higher values are taken, the accuracy of the convolutional network is required to increase faster than the time spent. Detecting the time it takes for the user to be more conscious, and when ω takes a lower value, it indicates that it requires higher accuracy, the time constraint being a secondary condition.
When the conditional formula determines to pass, the convolutional network and the single feature result can be balanced. And finally outputting the matching sequence number result, and if the confidence coefficient does not meet the requirement, directly judging that the person does not belong to the database and rejecting the person.
The foregoing is merely a preferred embodiment of the invention, and it should be noted that, for those skilled in the art, several improvements and modifications can be made without departing from the principles of the invention, and these improvements and modifications should also be considered as the scope of the invention, and other parts not described in detail in the invention belong to the prior art, so they are not described in detail herein, and finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the invention, but not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be appreciated by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (4)

1. The palm print vein recognition method based on the fusion network and the SIF features is characterized by comprising the following steps of:
1) Image acquisition
Firstly, collecting and characterizing data by using acquisition equipment, wherein the data comprises palm print pictures and vein pictures, the whole area is a palm, and branch points of the palm print and the vein part are captured in an emphasized manner in the image acquisition process, so that palm images with multiple angles or multiple amplification factors are not required to be acquired;
2) Region division
Preprocessing the acquired image, firstly, performing gray processing on the acquired palm image, converting a color image into a gray image, and primarily extracting palm print characteristics by adopting CANNY EDGE Detector filtering algorithm;
3) Palm print SIF feature extraction
In the step 2), filtering of fine palmprint and preliminary filtering of the palmprint are completed, discrete noise points exist in the palmprint, a vectorization noise reduction optimization method is adopted, a shallow VGG network is adopted to conduct a palmprint number classification task on the noise optimized image, after the algorithm network of prediction evaluation of the palmprint number is completed, the length ratio among the palmprint is calculated, the pixel radian value of each palmprint is characterized, wherein before specific characteristics are characterized, the pixel points among the palmprint are required to be sampled and classified at intervals according to an interval formula and a classification formula respectively, and in addition, a palmprint length ratio formula, a pixel radian value calculation formula and a fine palmprint length ratio extraction formula are adopted to conduct palmprint SIF characteristic extraction respectively;
4) Vein feature extraction map
In the step, the extraction of the SIF features of the veins is completed, the vein feature collection only needs to identify branch points of the palm, and the minute veins are not required for the branch points;
5) Converged network decision making
In the step, firstly, SVM multi-classification is adopted to perform multi-classification operation on palm print SIF and vein SIF characteristics, wherein a target mapping value is ID code of a detector in a database, a SoftMax is adopted to output probability values of all labels, a balance formula is adopted to balance the two results, and in addition, an optimal result of a single characteristic is obtained and then output.
2. The palm print vein recognition method based on the fusion network and the SIF features as claimed in claim 1, wherein:
the intermediate formula in the step 3) is established: first, for an interval formula, the coordinate axes adopt Cartesian coordinate axes, wherein the origin of coordinates is in the lower left corner of the image, and the interval formula is expressed as follows:
xl=k*α,k=1,2,3,…,
Wherein x l is the abscissa value of the taken section line, and alpha is the hyper-parameter, namely the horizontal distance between each section line; for the values of all the pixels on the section line, only when the value is larger than 0, the pixel belongs to a certain main palm line; k is the number of times taken by the interval, and the abscissa of all the sectional lines need to be kept within the image range;
The categorization formula in the step 3) is expressed as follows:
wherein, the categorization formula is expressed as follows:
wherein the method comprises the steps of When the first time of the line is divided, i pixel points needing to be classified are divided,/>When the line is divided from the first-1 th line, the upper ordinate value of the j th main palm print,/>When the first-1 level line is divided, the j+1th main palm print has an upper longitudinal coordinate value, beta is a height difference super parameter, the picture with the denser main palm print is filtered when the set value gradually becomes smaller, the sparse main palm print picture is filtered when the value gradually becomes larger, and M represents the number of main palm prints;
The main palm print length ratio formula in the step 3) is as follows:
The length ratio of the main palmprint is calculated firstly, namely the ratio of the length of one main palmprint to the length of the rest palmprint, and the length ratio of the main palmprint is expressed as:
wherein p h is a set, in which the length ratio of the h-th main palm print to other main palm prints is the ratio of the number of pixels contained in the h-th main palm print to the number of pixels contained in other main palm prints, wherein the number of pixels contained in the h-th main palm print can be obtained directly through the number of pixels classified in each main palm print in the classification formula, and M is the number of main palm prints contained in the image;
The pixel radian value calculation formula in the step 3) is expressed as follows:
the pixel radian value calculation formula is expressed as follows:
Wherein, θ e is the pixel radian value when the pixel radian value is performed, the left end point is the pixel radian value when the e pixel point is performed, x e,xe+1,xe+2 is the abscissa value of the pixel point when e, e+1, e+2 are respectively performed, y e,ye+1,ye+2 is the ordinate value of the pixel point when e, e+1, e+2 are respectively performed, and the angle value on the master palm print can be calculated by the above formula;
The radian value calculation formula calculates an angle value of a radian system, wherein the angle value of the radian system mainly calculates a triangle-like shape formed by every three pixel points, wherein the angle value of an included angle where a middle pixel point is positioned is calculated by respectively calculating the distance between a left vertex pixel point coordinate and a middle pixel point coordinate and the distance between a right vertex pixel point and the middle pixel point coordinate, then calculating the pixel coordinate distance between the left vertex coordinate and the right vertex coordinate, and calculating the radian system angle value of the middle pixel point by a cosine theorem and an inverse trigonometric function;
The fine palmprint length ratio extraction formula in the step 3) is expressed as follows:
the feature extraction of the main palm print is completed in the steps, the feature extraction is performed on the fine palm print, and the characterization is mainly performed on the total length of the fine palm print and the total length of the main palm print, wherein the length ratio extraction formula of the fine palm print is as follows:
The Pixel Small is the ratio of the number of the total pixels occupied by the fine palm print to the number of the pixels contained in the main palm print, the total pixels in which the total pixels are equal to 1, the Pixel total is the number of the total pixels in the picture, the Pixel r is the number of the total pixels contained in the r-th main palm print, and the ratio between the length of the fine palm print of the person to be detected and the main palm print is calculated by the formula.
3. The palm print vein recognition method based on the fusion network and the SIF features as claimed in claim 1, wherein:
The filtering function in the step 4) is expressed as follows:
Firstly, carrying out binarization processing on the acquired vein picture, converting all pixel points of an area outside veins into 0, converting all vein areas into 1, and secondly, adopting a filtering operator to an activation function in the algorithm, and rejecting vein paths, wherein the filtering function is expressed as follows:
Conv21=[1,1,1,1,1,1,1,1]
The filtering operator adopted by the Conv2 1 filtering layer is FeatureMap_G which is a converted area, featureMap-1 is an original convolution area, gamma is a set filtering operator, vein paths and crossing points are filtered more strictly when gamma becomes larger gradually, the bright points of the convolved area are directly classified as 0 when the number of the bright points is not large, the convolved area is directly and completely set as 1 when the convolved area is large, and the convolution interval of the filtering operator is 2 each time;
the extraction function in the step 4) is expressed as follows:
In the above formula, filtering and removing of the crossing point and the vein and enhancing of the branching point are completed, and extraction is directly performed on the branching point in the following, wherein the extraction function is expressed as follows:
Conv22=[0.5,0.5,0.5,0.5,2,0.5,0.5,0.5]
wherein Conv2 2 is an extraction operator, featureMap-2 is an image convolved by a filtering operator, res is a determination result, when the result is determined to be True, the region is a branching point, otherwise, the region is not a branching point, through the steps, the number of branching point regions in the image is extracted as one of vein features, and the center point coordinates of each branching point region are taken to calculate the ratio of distances between the branching point region and the left and right end points of each main palm print, so that m×c feature values can also be collected, wherein M is the number of main palm prints, and C is the number of branching points.
4. The palm print vein recognition method based on the fusion network and the SIF features as claimed in claim 1, wherein:
the equilibrium formula in step 5) is expressed as:
Firstly, performing multi-classification operation on palm print SIF and vein SIF characteristics by adopting SVM multi-classification, wherein a target mapping value is ID code of a detector in a database, probability values of all labels are output by adopting SoftMax, palm print and vein SIF characteristic balance formulas are balanced by utilizing detection accuracy, and the balance formulas are expressed as follows:
θ12=1
Wherein, θ 12 is the weight of the SVM classifier A and the SVM classifier B to the single feature result prediction result, arc A and arc_B are the correctness of the two classifiers, and the weight values of the two classifiers can be automatically balanced according to the correctness by the formula;
The conditional formula in the step 5) is expressed as follows:
Wherein the conditional formula is expressed as follows:
Wherein Arc conv,Arcsvm is the accuracy of the convolutional network and the accuracy of the single eigenvalue, time conv is the Time spent by the convolutional network, time SVM is the Time spent by the SVM classifier, ω is the Time tolerance, when taking higher values, the improvement of the accuracy of the convolutional network is faster than the increase of the Time spent, the Time spent by the user is detected more carelessly, and when taking lower values ω indicates that higher accuracy is required, and the Time constraint condition is a secondary condition.
CN202311584446.9A 2023-11-25 2023-11-25 Palm print vein recognition method based on fusion network and SIF characteristics Active CN117542090B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311584446.9A CN117542090B (en) 2023-11-25 2023-11-25 Palm print vein recognition method based on fusion network and SIF characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311584446.9A CN117542090B (en) 2023-11-25 2023-11-25 Palm print vein recognition method based on fusion network and SIF characteristics

Publications (2)

Publication Number Publication Date
CN117542090A CN117542090A (en) 2024-02-09
CN117542090B true CN117542090B (en) 2024-06-18

Family

ID=89791533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311584446.9A Active CN117542090B (en) 2023-11-25 2023-11-25 Palm print vein recognition method based on fusion network and SIF characteristics

Country Status (1)

Country Link
CN (1) CN117542090B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251889A (en) * 2007-12-25 2008-08-27 哈尔滨工业大学 Personal identification method and near-infrared image forming apparatus based on palm vena and palm print
CN107403161A (en) * 2017-07-31 2017-11-28 歌尔科技有限公司 Biological feather recognition method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7496214B2 (en) * 2002-09-25 2009-02-24 The Hong Kong Polytechnic University Method of palm print identification
JP6467852B2 (en) * 2014-10-10 2019-02-13 富士通株式会社 Biological information correction apparatus, biological information correction method, and biological information correction computer program
KR102375593B1 (en) * 2021-08-26 2022-03-17 전북대학교산학협력단 Apparatus and method for authenticating user based on a palm composite image

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251889A (en) * 2007-12-25 2008-08-27 哈尔滨工业大学 Personal identification method and near-infrared image forming apparatus based on palm vena and palm print
CN107403161A (en) * 2017-07-31 2017-11-28 歌尔科技有限公司 Biological feather recognition method and device

Also Published As

Publication number Publication date
CN117542090A (en) 2024-02-09

Similar Documents

Publication Publication Date Title
CN106548182B (en) Pavement crack detection method and device based on deep learning and main cause analysis
CN111340824B (en) Image feature segmentation method based on data mining
CN109376740A (en) A kind of water gauge reading detection method based on video
CN109086714A (en) Table recognition method, identifying system and computer installation
CN108520219A (en) A kind of multiple dimensioned fast face detecting method of convolutional neural networks Fusion Features
CN107909081B (en) Method for quickly acquiring and quickly calibrating image data set in deep learning
CN110866430B (en) License plate recognition method and device
CN106875381A (en) A kind of phone housing defect inspection method based on deep learning
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
WO2017206914A1 (en) Fingerprint recognition method, fingerprint recognition system, and electronic device
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
KR20170006355A (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
CN112270310A (en) Cross-camera pedestrian multi-target tracking method and device based on deep learning
CN106203539B (en) Method and device for identifying container number
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN110659649A (en) Image processing and character recognition algorithm based on near infrared light imaging
CN112907519A (en) Metal curved surface defect analysis system and method based on deep learning
CN107066963B (en) A kind of adaptive people counting method
CN113324864A (en) Pantograph carbon slide plate abrasion detection method based on deep learning target detection
CN114897789B (en) Sinter grain size detection method and system based on image segmentation
KR20080079798A (en) Method of face detection and recognition
CN113989196B (en) Visual-sense-based method for detecting appearance defects of earphone silica gel gasket
CN109543498A (en) A kind of method for detecting lane lines based on multitask network
CN110910497A (en) Method and system for realizing augmented reality map
CN108563997B (en) Method and device for establishing face detection model and face recognition

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant