CN117423140A - Palm pulse feature extraction method for massive data based on convolutional neural network - Google Patents
- Publication number
- CN117423140A (application CN202311495731.3A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention belongs to the technical field of biometric recognition and specifically relates to a palm vein feature extraction method for massive data based on a convolutional neural network, comprising the following steps: S1: photograph the user's palm veins with an acquisition device; S2: extract image information at different levels through a convolutional neural network; S3: input the extracted palm vein image information into a segmentation model that removes interference factors affecting palm vein extraction, obtaining an initial palm vein image model; the interference factors include the background, non-palm-vein regions, and external light; S4: apply multi-layer pooling to the initial palm vein image model to obtain clear palm vein image data; S5: convert the extracted palm vein image features through a fully connected layer to obtain the final palm vein output; S6: store the processed palm vein data. The method combines a convolutional neural network with a segmentation model to doubly train the acquired palm vein information, effectively improving palm vein acquisition accuracy.
Description
Technical Field
The invention belongs to the technical field of biometric recognition, and specifically relates to a palm vein feature extraction method for massive data based on a convolutional neural network.
Background
Biometric recognition technology identifies and verifies human biological characteristics such as fingerprints, palmprints, irises, and voiceprints. Compared with traditional password- and key-based authentication, biometrics is safer, more reliable, easier to use, and more widely applicable. Fingerprint recognition, the earliest mainstream biometric modality, covers only a small sensing area and its sensitivity degrades as fingerprints wear; biometric technologies such as palmprint, face, and iris recognition are therefore gradually replacing it.
At present, palm vein features are usually captured with a camera or another scanning device, and the computer system trains them with only a single CNN or similar training algorithm to generate a palm vein model. This achieves basic acquisition, but the match between the trained palm vein template and the information captured in real time is frequently too low: when a user attempts palm vein biometric identification, the system pops up warnings or fails to identify, degrading the acquisition effect.
Disclosure of Invention
The invention aims to provide a palm vein feature extraction method for massive data based on a convolutional neural network, which combines a convolutional neural network with a segmentation model to doubly train the acquired palm vein information, effectively improving palm vein acquisition accuracy.
The technical scheme adopted by the invention is as follows:
A palm vein feature extraction method for massive data based on a convolutional neural network comprises the following steps:
S1: photograph the user's palm veins with an acquisition device;
S2: extract image information at different levels through a convolutional neural network;
S3: input the extracted palm vein image information into a segmentation model that removes interference factors affecting palm vein extraction, obtaining an initial palm vein image model;
the interference factors include the background, non-palm-vein regions, and external light;
S4: apply multi-layer pooling to the initial palm vein image model to obtain clear palm vein image data;
S5: convert the extracted palm vein image features through a fully connected layer to obtain the final palm vein output;
S6: store the processed palm vein data for later comparison.
In step S1, the acquisition device is any one of an infrared imager, a camera, a scanner, or an optical sensor, and acquisition proceeds as follows:
A. the palm is held flat in the acquisition area above the device, at a height of 10 to 20 cm from the device;
B. the palm is held still for 3-10 s while the device captures the palm vein information;
C. if the image is blurred because the palm shook during acquisition, the palm veins must be captured again.
In step S2, before the palm vein image is input into the convolutional neural network, it is preprocessed into a grayscale image and the palm vein information is preliminarily extracted;
the preprocessing includes rotating, graying, and enhancing the image, after which the convolutional neural network judges whether the palm vein image is usable.
In step S3, the segmentation model removes the interference factors in the following steps:
A1: remove the external background: the background outside the palm in the image and the gaps between the fingers;
B1: remove non-palm-vein regions of the palm: shadows along the fingers, nails, and other interference.
Also in step S3, the segmentation model identifies, locates, and separates the palm vein lines and can adjust the contrast and sharpness of the vein texture, yielding a clean and clear initial palm vein model.
In step S4, local regions of the input map are aggregated: a pooling layer reduces the size of the initial palm vein model while retaining the key vein information, yielding the final palm vein processing parameters.
In step S5, the palm vein processing parameters are flattened into a one-dimensional vector, the feature map is converted into the output result through a linear transformation and a nonlinear activation function, and the output result is stored.
The method for preliminarily extracting palm vein features uses either local binary patterns (LBP) or histograms of oriented gradients (HOG).
After the palm vein features are extracted, the user places the palm above the acquisition device again and it is re-photographed; the computer compares the trained palm vein model data with the freshly captured image data, and if the match between them is below 99%, the palm vein information is re-acquired and the model library is updated.
Technical effects of the invention:
The palm vein feature extraction method for massive data based on a convolutional neural network applies a convolutional neural network together with a segmentation model, doubly filtering the acquired palm vein image data and effectively removing the background, non-vein regions, and other interference from the palm vein image. The processed palm vein data therefore matches the data acquired in real time more closely, effectively solving the inaccurate feature extraction and failed identification of traditional methods and guaranteeing extraction precision.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The present invention is described below with reference to specific embodiments to make its objects and advantages clearer. It should be understood that the following text describes only one or more specific embodiments of the invention and does not strictly limit the scope of the invention as claimed.
As shown in fig. 1, a palm vein feature extraction method for massive data based on a convolutional neural network comprises the following steps:
S1: photograph the user's palm veins with an acquisition device;
S2: extract image information at different levels through a convolutional neural network;
S3: input the extracted palm vein image information into a segmentation model that removes interference factors affecting palm vein extraction, obtaining an initial palm vein image model;
the interference factors include the background, non-palm-vein regions, and external light;
S4: apply multi-layer pooling to the initial palm vein image model to obtain clear palm vein image data;
S5: convert the extracted palm vein image features through a fully connected layer to obtain the final palm vein output;
S6: store the processed palm vein data for later comparison.
Specifically, the convolutional neural network is used in parallel with the segmentation model so that arbitrary objects can be segmented: the convolutional neural network extracts image features, and the segmentation model maps those features to a pixel-level segmentation result. This combination is common in tasks such as image segmentation, semantic segmentation, and instance segmentation, and proceeds as follows:
a convolutional neural network extracts features of the image, producing feature representations at different levels through multiple convolutional and pooling layers;
the extracted features are input into the segmentation model. Common segmentation models include fully convolutional networks (FCN), U-Net, Mask R-CNN, and the like. These models usually consist of an encoder that extracts image features and a decoder that maps the features back to the original image size through upsampling and deconvolution operations and generates the segmentation result;
the output of the segmentation model is post-processed, e.g., by applying a threshold or another post-processing technique, to obtain the final segmentation result. This typically involves classifying pixels to assign them to different object classes or to the background.
In step S1, the acquisition device is any one of an infrared imager, a camera, a scanner, or an optical sensor, and acquisition proceeds as follows:
A. the palm is held flat in the acquisition area above the device, at a height of 10 to 20 cm from the device;
B. the palm is held still for 3-10 s while the device captures the palm vein information;
C. if the image is blurred because the palm shook during acquisition, the palm veins must be captured again.
Specifically, infrared imager: palm vein recognition requires an infrared imager to acquire palm vein images. The imager captures a thermal infrared image of the palm vein region and displays the texture and structure of the veins clearly;
camera or scanner: besides infrared imagers, an ordinary camera or scanner can acquire palm images. The camera captures the palm vein image and transmits it to a computer or other processing equipment for subsequent analysis;
optical sensor: some palm vein identification systems additionally use optical sensors and modules to supply supplementary information such as blood flow and blood oxygen levels, which can further improve the accuracy and security of identification.
Preferably, in this solution the acquisition device is an infrared imager, which generally provides high-quality palm vein images and retains some applicability in dim lighting; the richer image information improves the recognition performance of the convolutional neural network.
In step S2, before the palm vein image is input into the convolutional neural network, it is preprocessed into a grayscale image and the palm vein information is preliminarily extracted;
the preprocessing includes rotating, graying, and enhancing the image, after which the convolutional neural network judges whether the palm vein image is usable, making the image more reliable and robust.
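The three preprocessing operations named above (rotation, graying, enhancement) can be sketched with plain NumPy; the luma weights, the min-max stretch, and the random placeholder image are illustrative choices, not prescribed by the patent.

```python
import numpy as np

def to_gray(rgb):
    """Luma-weighted graying of an H x W x 3 image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def stretch_contrast(gray):
    """Min-max stretch to [0, 1], a minimal 'enhancement' step."""
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / (hi - lo) if hi > lo else gray

rgb = np.random.default_rng(0).random((4, 4, 3))   # placeholder palm image
gray = stretch_contrast(to_gray(rgb))
gray = np.rot90(gray)                              # rotate to a canonical orientation
print(gray.shape)  # (4, 4)
```

In practice these steps would typically be done with an image library (e.g. OpenCV), but the grayscale image produced here is exactly the kind of input the network expects.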
In step S3, the segmentation model removes the interference factors in the following steps:
A1: remove the external background: the background outside the palm in the image and the gaps between the fingers; background and noise in the image are identified and removed to extract a clean palm vein region.
B1: remove non-palm-vein regions of the palm: shadows along the fingers, nails, and other interference. The segmentation model helps remove borders, edges, and non-vein textures from the image.
Also in step S3, the segmentation model identifies, locates, and separates the palm vein lines and can adjust the contrast and sharpness of the vein texture, yielding a clean and clear initial palm vein model.
Specifically, the segmentation model identifies the lines in the image and accurately locates and segments the vein texture, which is important for subsequent feature extraction and alignment; it can also enhance the image, improving its contrast and sharpness so that the details and characteristics of the veins are displayed better.
In step S4, local regions of the input map are aggregated: a pooling layer reduces the size of the initial palm vein model while retaining the key vein information, yielding the final palm vein processing parameters.
The convolutional layers apply several convolution and pooling operations to the input image to extract important features. In other computer vision tasks the convolutions may be iterated as many times as necessary, but in palm vein recognition one or two convolutions are usually sufficient. This reduces the number of parameters to process, lowers computational complexity, and increases the translational and positional invariance of the network. The pooling layer also yields a more robust feature representation, since it tolerates small shifts and noise in the image.
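The pooling operation described above, shrinking the map while keeping the key responses, is easy to show concretely; this is a generic max-pooling sketch, with the 2x2 window being a typical but assumed choice.

```python
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max pooling: shrinks the map, keeps the strongest responses."""
    h, w = x.shape
    h, w = h - h % k, w - w % k                    # crop to a multiple of k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

fmap = np.arange(16.0).reshape(4, 4)               # pretend convolutional feature map
pooled = max_pool2d(fmap)
print(pooled)  # [[ 5.  7.]
               #  [13. 15.]]
```

Each output element is the maximum of one 2x2 window, which is why small shifts of the input leave the pooled map largely unchanged.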
In step S5, the palm vein processing parameters are flattened into a one-dimensional vector, the feature map is converted into the output result through a linear transformation and a nonlinear activation function, and the output result is stored.
Specifically, the fully connected layer usually follows the convolutional layers and converts the feature map they extract into the final output. Fully connected layers are typically used for classification, where the output of the last layer is passed through a Softmax function to obtain a probability distribution over the classes. Because fully connected layers have many parameters, they require more computational resources and training samples.
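The flatten-then-affine-then-Softmax path of step S5 can be sketched as below; the weight matrix, the three "enrolled identity" classes, and the random feature map are all hypothetical placeholders.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fully_connected(feat_map, W, b):
    """Flatten the 2-D feature map to a 1-D vector, then apply one affine layer."""
    return W @ feat_map.ravel() + b

rng = np.random.default_rng(1)
feat = rng.random((4, 4))          # pooled palm vein feature map (placeholder)
W = rng.random((3, 16))            # weights for 3 hypothetical enrolled identities
b = np.zeros(3)
probs = softmax(fully_connected(feat, W, b))
print(probs.shape)  # (3,); the entries are positive and sum to 1
```

The `softmax` output is exactly the per-class probability distribution the description refers to; in a trained network `W` and `b` would come from training rather than a random generator.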
The method for preliminarily extracting palm vein features uses either local binary patterns (LBP) or histograms of oriented gradients (HOG).
Specifically, the local binary pattern algorithm for image feature extraction has three main steps:
compare against the gray value of the center pixel: for each pixel in the neighborhood of a given pixel, compare its gray value with that of the center pixel; if the neighbor's gray value is greater than or equal to the center's, set that position to 1, otherwise to 0;
compute the binary pattern: concatenate the binary values in each pixel's neighborhood into one binary number, which is the local binary pattern of that pixel. For example, with 8 pixels in the neighborhood, the 8 binary values combine into an 8-bit number, giving that pixel's local binary pattern;
compute image texture features: the local binary pattern value obtained for each pixel describes the texture of the image. Typically these patterns are further summarized with statistical features such as the mean, variance, entropy, or a histogram.
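The three LBP steps above map directly onto code; this sketch uses the common 8-neighbour, clockwise bit ordering, which is one conventional choice among several.

```python
import numpy as np

def lbp_pixel(patch):
    """8-neighbour LBP code for the centre of a 3x3 patch (clockwise from top-left)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= c else 0 for n in neighbours]      # step 1: compare with centre
    return sum(b << i for i, b in enumerate(bits))       # step 2: pack into one byte

def lbp_image(gray):
    """Step 3: one LBP code per interior pixel; histogram these for a texture feature."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = lbp_pixel(gray[i:i + 3, j:j + 3])
    return out

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]])
print(lbp_image(img))  # [[255]]: every neighbour >= centre, so all 8 bits are 1
```

Library implementations (e.g. `skimage.feature.local_binary_pattern`) add rotation-invariant and uniform-pattern variants on top of this basic scheme.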
The histogram of oriented gradients is an algorithm for image feature description and object detection, with the following main steps:
image preprocessing: first convert the input image to grayscale, then normalize and smooth it to remove noise and redundant information;
compute the gradients: apply a gradient operator (such as the Sobel, Prewitt, or Roberts operator) at each pixel to obtain gradient magnitude and orientation images;
divide the image into regions: partition the image into non-overlapping cells, typically 8x8 pixels each;
compute per-cell gradient histograms: group neighboring cells into blocks (typically 2x2 cells, i.e. 16x16 pixels) and compute a histogram of gradient orientations within each cell;
compute the HOG feature vector: concatenate the gradient histograms of the cells in each block into an overall feature vector. The feature vector is usually normalized per block to improve the robustness of the feature.
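The per-cell histogram at the heart of HOG can be sketched for a single cell; the 9 unsigned-orientation bins and the simple central-difference gradients are the standard textbook choices, assumed here rather than taken from the patent.

```python
import numpy as np

def cell_hog(cell, n_bins=9):
    """Unsigned-orientation gradient histogram for one cell, L2-normalised."""
    gx = np.zeros_like(cell, dtype=float)
    gy = np.zeros_like(cell, dtype=float)
    gx[:, 1:-1] = cell[:, 2:] - cell[:, :-2]       # central differences
    gy[1:-1, :] = cell[2:, :] - cell[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned: 0..180 degrees
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                               # magnitude-weighted vote
    return hist / (np.linalg.norm(hist) + 1e-6)    # block-style normalisation

cell = np.tile(np.arange(8.0), (8, 1))             # intensity ramp along x
h = cell_hog(cell)
print(h.argmax())  # 0: the gradient points along x, so orientation bin 0 dominates
```

A full HOG descriptor repeats this per cell and concatenates the histograms of each 2x2-cell block, normalizing block by block as the last step above describes.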
After the palm vein features are extracted, the user places the palm above the acquisition device again and it is re-photographed; the computer compares the trained palm vein model data with the freshly captured image data, and if the match between them is below 99%, the palm vein information is re-acquired and the model library is updated.
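The verification-and-re-enrolment check above can be sketched as a similarity comparison; reading the 99% figure as a cosine-similarity cutoff, and the three-element feature vectors, are assumptions made for illustration.

```python
import numpy as np

def match_score(enrolled, probe):
    """Cosine similarity between an enrolled template and a live probe vector."""
    enrolled = enrolled / np.linalg.norm(enrolled)
    probe = probe / np.linalg.norm(probe)
    return float(enrolled @ probe)

THRESHOLD = 0.99   # the patent's 99% figure, read here as a similarity cutoff

template = np.array([0.2, 0.9, 0.4])     # hypothetical enrolled feature vector
live = template + 0.001                  # near-identical fresh capture
accepted = match_score(template, live) >= THRESHOLD
print(accepted)  # True: re-acquisition is triggered only below the cutoff
```

When `accepted` is false, the system would re-acquire the palm vein information and update the model library, as the passage describes.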
The foregoing is merely a preferred embodiment of the present invention. Modifications and adaptations by those skilled in the art that do not depart from the principles of the invention fall within its scope. Structures, devices, and methods of operation not specifically described and illustrated herein are, unless otherwise indicated and limited, implemented by conventional means in the art.
Claims (9)
1. A palm vein feature extraction method for massive data based on a convolutional neural network, characterized in that the extraction method comprises the following steps:
S1: photograph the user's palm veins with an acquisition device;
S2: extract image information at different levels through a convolutional neural network;
S3: input the extracted palm vein image information into a segmentation model that removes interference factors affecting palm vein extraction, obtaining an initial palm vein image model;
the interference factors include the background, non-palm-vein regions, and external light;
S4: apply multi-layer pooling to the initial palm vein image model to obtain clear palm vein image data;
S5: convert the extracted palm vein image features through a fully connected layer to obtain the final palm vein output;
S6: store the processed palm vein data for later comparison.
2. The palm vein feature extraction method according to claim 1, characterized in that: in step S1, the acquisition device is any one of an infrared imager, a camera, a scanner, or an optical sensor, and acquisition proceeds as follows:
A. the palm is held flat in the acquisition area above the device, at a height of 10 to 20 cm from the device;
B. the palm is held still for 3-10 s while the device captures the palm vein information;
C. if the image is blurred because the palm shook during acquisition, the palm veins must be captured again.
3. The palm vein feature extraction method according to claim 1, characterized in that: in step S2, before the palm vein image is input into the convolutional neural network, it is preprocessed into a grayscale image and the palm vein information is preliminarily extracted;
the preprocessing includes rotating, graying, and enhancing the image, after which the convolutional neural network judges whether the palm vein image is usable.
4. The palm vein feature extraction method according to claim 1, characterized in that: in step S3, the segmentation model removes the interference factors in the following steps:
A1: remove the external background: the background outside the palm in the image and the gaps between the fingers;
B1: remove non-palm-vein regions of the palm: shadows along the fingers, nails, and other interference.
5. The palm vein feature extraction method according to claim 1, characterized in that: in step S3, the segmentation model identifies, locates, and separates the palm vein lines and can adjust the contrast and sharpness of the vein texture, yielding a clean and clear initial palm vein model.
6. The palm vein feature extraction method according to claim 1, characterized in that: in step S4, local regions of the input map are aggregated: a pooling layer reduces the size of the initial palm vein model while retaining the key vein information, yielding the final palm vein processing parameters.
7. The palm vein feature extraction method according to claim 1, characterized in that: in step S5, the palm vein processing parameters are flattened into a one-dimensional vector, the feature map is converted into the output result through a linear transformation and a nonlinear activation function, and the output result is stored.
8. The palm vein feature extraction method according to claim 3, characterized in that: the method for preliminarily extracting palm vein features uses either local binary patterns or histograms of oriented gradients.
9. The palm vein feature extraction method according to claim 1, characterized in that: after the palm vein features are extracted, the user places the palm above the acquisition device again and it is re-photographed; the computer compares the trained palm vein model data with the freshly captured image data, and if the match between them is below 99%, the palm vein information is re-acquired and the model library is updated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311495731.3A CN117423140A (en) | 2023-11-10 | 2023-11-10 | Palm pulse feature extraction method for massive data based on convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117423140A true CN117423140A (en) | 2024-01-19 |
Family
ID=89532498
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311495731.3A Pending CN117423140A (en) | 2023-11-10 | 2023-11-10 | Palm pulse feature extraction method for massive data based on convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117423140A (en) |
- 2023-11-10: application CN202311495731.3A filed in CN; publication CN117423140A, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||