CN112149521B - Palm print ROI extraction and enhancement method based on multitasking convolutional neural network - Google Patents

Palm print ROI extraction and enhancement method based on multitasking convolutional neural network

Info

Publication number
CN112149521B
CN112149521B
Authority
CN
China
Prior art keywords
palm print
neural network
image
convolutional neural
palmprint
Prior art date
Legal status
Active
Application number
CN202010916060.3A
Other languages
Chinese (zh)
Other versions
CN112149521A (en)
Inventor
王海霞
苏立循
蒋莉
陈朋
梁荣华
张仪龙
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202010916060.3A
Publication of CN112149521A
Application granted
Publication of CN112149521B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/107 - Static hand or arm
    • G06V 40/11 - Hand-related biometrics; Hand pose recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/10 - Image enhancement or restoration using non-spatial domain filtering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A palm print ROI extraction and enhancement method based on a multi-task convolutional neural network comprises the following steps: 1) prepare the samples before training: each sample is first duplicated into two copies, A and B; BM3D denoising and Gabor wavelet filtering are applied to copy A, so that the sample undergoes image enhancement; copy B is annotated with the two inter-finger valley points and the palm print ROI (region of interest) of the training sample; finally, data augmentation is applied to the annotated training samples; 2) train a multi-task convolutional neural network with the training samples generated in step 1) to obtain a network model for palm print ROI extraction and enhancement; 3) verify the trained multi-task convolutional neural network model on the verification set, output the results, and correct them. The invention can extract an image-enhanced palm print region of interest from an ordinary palm print image with high accuracy and robustness.

Description

Palm print ROI extraction and enhancement method based on a multi-task convolutional neural network
Technical Field
The invention relates to the field of palm print enhancement and segmentation, and in particular to a palm print ROI (Region Of Interest) extraction and enhancement method based on a multi-task convolutional neural network.
Background
Existing biometric recognition technologies mainly include fingerprint recognition, face recognition, iris recognition, and palm print recognition. Because the shape of the palm print is determined by each person's genes, even if the palm print is damaged, the lines that subsequently grow back keep the same shape as the original ones, so palm print recognition is a biometric method with considerable potential. The palm print mainly consists of three kinds of lines: papillary ridges, wrinkles, and flexion creases. These main lines are innate and highly stable. Although the ridges and wrinkles change slightly during childhood growth, the changes in the palm are no longer noticeable after adulthood, and such changes do not occur over a short time but require a long period. A Hong Kong university conducted a study of human palm prints lasting four years and concluded that palm print features are stable. The features of the palm are also unique: even identical twins have different palm prints, and even the left and right palms of the same person differ. From the perspective of genetics, the palm print is a polygenic trait; it is as unique as a fingerprint, but its area is larger than that of a fingerprint, so it can present more detailed features.
At the same time, neural networks have a strong capacity for self-learning and for quickly searching for optimal solutions, and they have developed rapidly in recent years. Neural networks were inspired by the nerve cells of the human brain: the brain does not obtain information directly from the retina, but receives stimulus signals through the sensory organs and extracts the regularities of things through a complex layered structure. This well-defined hierarchical structure reduces the amount of data and optimizes processing efficiency, and deep learning arose under the inspiration of this brain structure. This makes it possible to enhance the palm print and extract the palm print ROI by means of a neural network.
Disclosure of Invention
To overcome the high time complexity and poor robustness of existing palm print enhancement and ROI extraction methods, the invention provides a palm print ROI extraction and enhancement method based on a multi-task convolutional neural network; by exploiting the neural network's ability to self-learn and to quickly search for an optimal solution, palm print ROI extraction and enhancement are realized with higher robustness and in less time. Multi-task learning is a machine learning approach in which several related tasks are learned together on the basis of a shared representation; the tasks generalize one another, which improves the learning effect.
To achieve the above purpose, the invention adopts the following technical solution:
A palm print ROI extraction and enhancement method based on a multi-task convolutional neural network, the method comprising the steps of:
1) Prepare the samples before training: each sample is duplicated into two copies, A and B; BM3D denoising and Gabor wavelet filtering are applied to copy A, so that the sample undergoes image enhancement; copy B is annotated, marking the two inter-finger valley points and the palm print ROI (region of interest) of the training sample; finally, data augmentation is applied to the annotated training samples;
2) Train a multi-task convolutional neural network with the training samples generated in step 1) to obtain a network model for palm print ROI extraction and enhancement;
3) Verify the trained multi-task convolutional neural network model on the verification set, output the results, and correct them.
Further, step 1) includes the following steps:
(11) The captured palm print images contain various kinds of noise; the purpose of denoising is to provide denoised training samples for subsequent neural network training, and the palm print images are denoised with the BM3D denoising algorithm;
(12) The denoised palm print image is filtered and enhanced with Gabor wavelets; the Gabor wavelet transform is a type of wavelet transform that orthogonalizes the Gabor transform; the palm print image is decomposed with Gabor wavelets and then inverse-transformed, and during this process the important components are selected by adjusting the corresponding parameters, thereby enhancing the image;
(13) The palm print of the training sample is marked; the marked positions are the valley points between the middle finger and the other two fingers, and the mark size is x pixels;
(14) The palm print of the training sample is marked; the marked position is the palm print region of interest, and the mark size is y pixels × z pixels, where y = 144 and z = 128;
(15) Data augmentation is applied to the m palm print images obtained after the above operations; the augmentation consists of randomly rotating and horizontally flipping the images, and the images that still retain the main palm print information are selected as training samples for training the neural network.
Still further, step 2) includes the following steps:
(21) A multi-task convolutional neural network model is constructed so that it can enhance and segment the palm print image at the same time. The network is divided into three parts. The first part is the shared part, which downsamples the palm print image: the input image passes through 2 convolution modules with 3×3 kernels and 64 feature channels, and the output of the last convolution module is stored; the result then undergoes max pooling followed by 2 convolution modules with 3×3 kernels and 128 feature channels, and the output of the last convolution module is again stored; the subsequent processing is the same, with two further rounds of pooling followed by convolution, where the corresponding convolution modules have 3×3 kernels with 256 feature channels and 3×3 kernels with 512 feature channels, respectively;
(22) The second part is an independent part, the palm print image segmentation branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolutions with 3 groups of 2 convolution modules and finally outputs a palm print segmentation map of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel;
(23) The third part is an independent part, the palm print image enhancement branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolution modules with 3 groups of 2 convolution modules and finally outputs an enhanced palm print image of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel;
(24) In the multi-task convolutional neural network constructed above, the activation functions of the convolution layers in the shared part are all ReLU; among the independent convolution layers of the two tasks, the activation function of the last layer of the palm print ROI extraction branch is sigmoid, and the activation function of the palm print image enhancement branch is ReLU;
(25) During training, the total loss function of the multi-task convolutional neural network is:
Loss_{1+2} = α*loss_1 + β*loss_2
where Loss_{1+2} is the total loss function of the multi-task neural network, α and β are the preset weight coefficients of the respective tasks, and loss_1 and loss_2 are, respectively, the palm print ROI extraction loss function and the palm print image enhancement loss function of the multi-task convolutional neural network, obtained as follows:
(251) The loss function loss_1 for palm print ROI extraction is defined as:
(252) The loss function loss_2 for palm print image enhancement is defined as:
where f_i denotes the predicted value and y_i denotes the true value.
Still further, step 3) includes the following steps:
(31) The image to be processed is input into the trained multi-task convolutional neural network, the network output is compared with the original images in the test set, the loss function is calculated and back-propagated through the network until the loss function remains stable, and the result maps are output: a palm print calibration image and a palm print enhancement image;
(32) The calibration map of the palm print region of interest is corrected; the correction steps are as follows:
(321) The two valley points are connected, a straight line and the corresponding perpendicular line are drawn, and a new coordinate system is established from these two lines, with their intersection point as the origin. First, the palm print image is rotated about the origin O by a rotation angle θ: when θ is positive, the palm print image is rotated counterclockwise; otherwise it is rotated clockwise. Then, using the two valley points as references, the distances between the valley points and the palm print region of interest automatically calibrated by the neural network are measured;
(322) According to empirical rules, the distance between the left edge of the palm print region of interest and the line connecting the valley points should be one quarter of the distance between the two valley points, and the distance between the right edge and that line should be 1.5 times the distance between the two valley points; if these conditions are met, the calibration result does not need correction, otherwise correction is required;
(323) If the conditions are not met, the palm print calibration result is corrected automatically: the missing part of the palm print region of interest is filled in, and the redundant part is removed.
Compared with the prior art, the invention has the following beneficial effects: the palm print region of interest can be extracted with an automatic correction function, palms of different sizes are accommodated, and the palm print image is enhanced at the same time; the time cost of performing the two operations is reduced, and the method has better robustness and universality.
Drawings
FIG. 1 is a schematic illustration of the palm print valley points and ROI calibration in the present invention.
Fig. 2 is a schematic diagram of the multi-task convolutional neural network structure in the present invention.
Fig. 3 is a flowchart of the training steps of the multi-task convolutional neural network of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and embodiments:
Referring to fig. 1, 2 and 3, a palm print ROI extraction and enhancement method based on a multi-task convolutional neural network includes the following steps:
1) Prepare the samples before training: each sample is duplicated into two copies, A and B; BM3D denoising and Gabor wavelet filtering are applied to copy A, so that the sample undergoes image enhancement; copy B is annotated, marking the two inter-finger valley points and the palm print ROI (region of interest) of the training sample; finally, data augmentation is applied to the annotated training samples. This step comprises the following sub-steps:
(11) The captured palm print images contain various kinds of noise; the purpose of denoising is to provide denoised training samples for subsequent neural network training, and the palm print images are denoised with the BM3D denoising algorithm;
(12) The denoised palm print image is filtered and enhanced with Gabor wavelets; the Gabor wavelet transform is a type of wavelet transform that orthogonalizes the Gabor transform; the palm print image is decomposed with Gabor wavelets and then inverse-transformed, and during this process the important components are selected by adjusting the corresponding parameters, thereby enhancing the image;
(13) The palm print of the training sample is marked; the marked positions are the valley points between the middle finger and the other two fingers, and the mark size is x pixels, for example x = 5, as shown in fig. 1;
(14) The palm print of the training sample is marked; the marked position is the palm print region of interest, and the mark size is y pixels × z pixels, for example y = 144 and z = 128, as shown in fig. 1;
(15) Data augmentation is applied to the m palm print images obtained after the above operations; the augmentation consists of randomly rotating and horizontally flipping the images, and the images that still retain the main palm print information are selected as training samples for training the neural network (a code sketch of this sample-preparation step is given directly below);
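The sample preparation described in steps (11)-(15) can be illustrated with a short listing. This is a minimal sketch rather than the reference implementation of the invention: the third-party bm3d package and a small bank of OpenCV Gabor kernels stand in for the BM3D denoising and the Gabor wavelet decomposition and inverse transform described above, and all filter and augmentation parameters (sigma_psd, kernel size, the eight orientations, the rotation range) are illustrative assumptions rather than values taken from this text.

import cv2
import numpy as np
import bm3d  # third-party package: pip install bm3d


def denoise_and_enhance(gray):
    """Copy A: BM3D denoising followed by multi-orientation Gabor filtering."""
    denoised = bm3d.bm3d(gray.astype(np.float32) / 255.0, sigma_psd=0.1)
    denoised = denoised.astype(np.float32)
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 8):  # 8 filter orientations
        # parameters: (ksize, sigma, theta, lambd, gamma, psi)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(denoised, -1, kernel))
    enhanced = np.max(responses, axis=0)  # keep the strongest line response per pixel
    return cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)


def augment(image, max_angle=15):
    """Step (15): random rotation and left-right flip for data expansion."""
    h, w = image.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rotated = cv2.warpAffine(image, m, (w, h))
    return cv2.flip(rotated, 1) if np.random.rand() < 0.5 else rotated

Augmented images that no longer retain the main palm print information would then be screened out, as required in step (15).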
2) Train a multi-task convolutional neural network with the training samples generated in step 1) to obtain a network model for palm print ROI extraction and enhancement. This step comprises the following sub-steps:
(21) A multi-task convolutional neural network model is constructed so that it can enhance and segment the palm print image at the same time. The network is divided into three parts. The first part is the shared part, which downsamples the palm print image: the input image passes through 2 convolution modules with 3×3 kernels and 64 feature channels, and the output of the last convolution module is stored; the result then undergoes max pooling followed by 2 convolution modules with 3×3 kernels and 128 feature channels, and the output of the last convolution module is again stored; the subsequent processing is the same, with two further rounds of pooling followed by convolution, where the corresponding convolution modules have 3×3 kernels with 256 feature channels and 3×3 kernels with 512 feature channels, respectively, as shown in fig. 2;
(22) The second part is an independent part, the palm print image segmentation branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolutions with 3 groups of 2 convolution modules and finally outputs a palm print segmentation map of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel, as shown in fig. 2;
(23) The third part is an independent part, the palm print image enhancement branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolution modules with 3 groups of 2 convolution modules and finally outputs an enhanced palm print image of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel, as shown in fig. 2;
(24) In the multi-task convolutional neural network constructed above, the activation functions of the convolution layers in the shared part are all ReLU; among the independent convolution layers of the two tasks, the activation function of the last layer of the palm print ROI extraction branch is sigmoid, and the activation function of the palm print image enhancement branch is ReLU (a code sketch of this architecture is given after step (252) below);
(25) During training, the total loss function of the multi-task convolutional neural network is:
Loss_{1+2} = α*loss_1 + β*loss_2
where Loss_{1+2} is the total loss function of the multi-task neural network, α and β are the preset weight coefficients of the respective tasks, and loss_1 and loss_2 are, respectively, the palm print ROI extraction loss function and the palm print image enhancement loss function of the multi-task convolutional neural network, obtained as follows:
(251) The loss function loss_1 for palm print ROI extraction is defined as:
(252) The loss function loss_2 for palm print image enhancement is defined as:
where f_i denotes the predicted value and y_i denotes the true value (a code sketch of the weighted total loss is also given below);
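Steps (21)-(24) describe a shared encoder with two task-specific decoders. The following PyTorch sketch follows the stated layer sizes (3×3 convolutions with 64/128/256/512 channels in the shared part, 256/128/64 channels in each upsampling branch, a final 1×1 convolution to one channel, sigmoid for the ROI branch and ReLU for the enhancement branch). Treating the stored encoder outputs as U-Net style skip connections, using 2×2 stride-2 deconvolutions, and assuming a single-channel grayscale input are choices of this sketch that the text does not spell out.

import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolution modules with ReLU activations."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class Decoder(nn.Module):
    """Three (deconvolution + two convolution modules) stages, then a 1x1 convolution."""
    def __init__(self, final_act):
        super().__init__()
        self.up3 = nn.ConvTranspose2d(512, 256, 2, stride=2)
        self.dec3 = conv_block(256 + 256, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(128 + 128, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(64 + 64, 64)
        self.out = nn.Sequential(nn.Conv2d(64, 1, 1), final_act)

    def forward(self, x, skips):
        s1, s2, s3 = skips
        x = self.dec3(torch.cat([self.up3(x), s3], dim=1))
        x = self.dec2(torch.cat([self.up2(x), s2], dim=1))
        x = self.dec1(torch.cat([self.up1(x), s1], dim=1))
        return self.out(x)


class MultiTaskPalmNet(nn.Module):
    """Shared downsampling part plus independent segmentation and enhancement branches."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 64)     # shared part: 3x3 kernels, 64 feature channels
        self.enc2 = conv_block(64, 128)   # 128 feature channels after the first pooling
        self.enc3 = conv_block(128, 256)  # 256 feature channels
        self.enc4 = conv_block(256, 512)  # 512 feature channels at the bottom
        self.pool = nn.MaxPool2d(2)
        self.seg_head = Decoder(nn.Sigmoid())  # palm print ROI segmentation branch
        self.enh_head = Decoder(nn.ReLU())     # palm print enhancement branch

    def forward(self, x):
        s1 = self.enc1(x)
        s2 = self.enc2(self.pool(s1))
        s3 = self.enc3(self.pool(s2))
        bottom = self.enc4(self.pool(s3))
        skips = (s1, s2, s3)
        return self.seg_head(bottom, skips), self.enh_head(bottom, skips)

With a 1×H×W grayscale input whose height and width are divisible by 8, MultiTaskPalmNet() returns a segmentation map and an enhanced image of the same spatial size as the input.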
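The weighted total loss of step (25) can be written down directly. Because the individual formulas for loss_1 and loss_2 are not reproduced in this text, binary cross-entropy for the ROI branch and mean squared error for the enhancement branch are assumptions chosen here to match the sigmoid and ReLU output layers, and the default alpha = beta = 0.5 is an arbitrary illustrative weighting.

import torch.nn.functional as F


def multitask_loss(seg_pred, seg_gt, enh_pred, enh_gt, alpha=0.5, beta=0.5):
    """Total loss Loss_{1+2} = alpha * loss_1 + beta * loss_2 of step (25)."""
    loss1 = F.binary_cross_entropy(seg_pred, seg_gt)  # assumed palm print ROI extraction loss
    loss2 = F.mse_loss(enh_pred, enh_gt)              # assumed palm print enhancement loss
    return alpha * loss1 + beta * loss2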
3) Verify the trained multi-task convolutional neural network model on the verification set, output the results, and correct them. This step comprises the following sub-steps:
(31) The image to be processed is input into the trained multi-task convolutional neural network, the network output is compared with the original images in the test set, the loss function is calculated and back-propagated through the network until the loss function remains stable, and the result maps are output: a palm print calibration image and a palm print enhancement image;
(32) The calibration map of the palm print region of interest is corrected; the correction steps are as follows:
(321) The two valley points are connected so that a straight line and the corresponding perpendicular line can be drawn. A new coordinate system is established from these two lines, with their intersection point as the origin. First, the palm print image is rotated about the origin O by a rotation angle θ: when θ is positive, the palm print image is rotated counterclockwise; otherwise it is rotated clockwise. Then, using the two valley points as references, the distances between the valley points and the palm print region of interest automatically calibrated by the neural network are measured;
(322) According to empirical rules, the distance between the left edge of the palm print region of interest and the line connecting the valley points should be one quarter of the distance between the two valley points, and the distance between the right edge and that line should be 1.5 times the distance between the two valley points; if these conditions are met, the calibration result does not need correction, otherwise correction is required;
(323) If the conditions are not met, the palm print calibration result is corrected automatically: the missing part of the palm print region of interest is filled in, and the redundant part is removed (code sketches of the training loop of step (31) and of this geometric correction are given below).
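Step (31) amounts to a standard optimisation loop over the network and loss sketched above. The outline below assumes MultiTaskPalmNet and multitask_loss from those sketches; the Adam optimiser, the learning rate, and the simple "stop once the loss stops changing" test are illustrative choices, not taken from this text.

import torch


def train_until_stable(model, loader, epochs=100, lr=1e-3, tol=1e-4):
    """Back-propagate the total loss until it remains stable, as in step (31)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    previous = float("inf")
    for epoch in range(epochs):
        total = 0.0
        for image, roi_gt, enhanced_gt in loader:  # annotated copy B and enhanced copy A
            seg_pred, enh_pred = model(image)
            loss = multitask_loss(seg_pred, roi_gt, enh_pred, enhanced_gt)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            total += loss.item()
        if abs(previous - total) < tol:  # loss has stopped changing between epochs
            break
        previous = total
    return model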
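The geometric correction of steps (321)-(323) can be summarised as follows. In this sketch the rotation angle θ is taken as the angle of the line through the two valley points relative to the image's horizontal axis, the rotation centre is taken as the midpoint of that line, and roi_left / roi_right are the perpendicular distances from the valley-point line to the near and far ROI edges; these conventions, and the 5% tolerance, are illustrative assumptions. How the missing part of the ROI is filled in and the redundant part removed is not detailed in the text, so the sketch only decides whether a correction is needed.

import cv2
import numpy as np


def align_to_valley_points(image, p1, p2):
    """Rotate the palm print image so that the valley-point line becomes horizontal."""
    (x1, y1), (x2, y2) = p1, p2
    theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # angle of the valley-point line
    origin = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)       # assumed rotation centre O
    m = cv2.getRotationMatrix2D(origin, theta, 1.0)   # positive angle rotates counter-clockwise
    h, w = image.shape[:2]
    return cv2.warpAffine(image, m, (w, h)), m


def roi_needs_correction(roi_left, roi_right, p1, p2, tol=0.05):
    """Check the empirical placement rules of step (322) in the rotated frame."""
    d = float(np.hypot(p2[0] - p1[0], p2[1] - p1[1]))  # distance between the valley points
    ok_left = abs(roi_left - 0.25 * d) <= tol * d      # left edge about d/4 from the line
    ok_right = abs(roi_right - 1.5 * d) <= tol * d     # right edge about 1.5*d from the line
    return not (ok_left and ok_right)

When roi_needs_correction returns True, the calibration result would be adjusted as in step (323): the missing part of the region of interest is filled in and the redundant part removed.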

Claims (3)

1. A palm print ROI extraction and enhancement method based on a multi-task convolutional neural network, the method comprising the steps of:
1) Preparing the samples before training: each sample is duplicated into two copies, A and B; BM3D denoising and Gabor wavelet filtering are applied to copy A, so that the sample undergoes image enhancement; copy B is annotated, marking the two inter-finger valley points and the palm print ROI (region of interest) of the training sample; finally, data augmentation is applied to the annotated training samples;
2) Training a multi-task convolutional neural network with the training samples generated in step 1) to obtain a network model for palm print ROI extraction and enhancement;
wherein step 2) comprises the following steps:
(21) A multi-task convolutional neural network model is constructed so that it can enhance and segment the palm print image at the same time. The network is divided into three parts. The first part is the shared part, which downsamples the palm print image: the input image passes through 2 convolution modules with 3×3 kernels and 64 feature channels, and the output of the last convolution module is stored; the result then undergoes max pooling followed by 2 convolution modules with 3×3 kernels and 128 feature channels, and the output of the last convolution module is again stored; the subsequent processing is the same, with two further rounds of pooling followed by convolution, where the corresponding convolution modules have 3×3 kernels with 256 feature channels and 3×3 kernels with 512 feature channels, respectively;
(22) The second part is an independent part, the palm print image segmentation branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolutions with 3 groups of 2 convolution modules and finally outputs a palm print segmentation map of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel;
(23) The third part is an independent part, the palm print image enhancement branch. It upsamples the output of the first part; the upsampling process interleaves 3 deconvolution modules with 3 groups of 2 convolution modules and finally outputs an enhanced palm print image of the same size as the original image. The convolution layers of this part use 3×3 kernels, with feature channels of 256, 128 and 64 in sequence; the final convolution uses a 1×1 kernel with 1 feature channel;
(24) In the multi-task convolutional neural network constructed above, the activation functions of the convolution layers in the shared part are all ReLU; among the independent convolution layers of the two tasks, the activation function of the last layer of the palm print ROI extraction branch is sigmoid, and the activation function of the palm print image enhancement branch is ReLU;
(25) During training, the total loss function of the multi-task convolutional neural network is:
Loss_{1+2} = α*loss_1 + β*loss_2
where Loss_{1+2} is the total loss function of the multi-task neural network, α and β are the preset weight coefficients of the respective tasks, and loss_1 and loss_2 are, respectively, the palm print ROI extraction loss function and the palm print image enhancement loss function of the multi-task convolutional neural network, obtained as follows:
(251) The loss function loss_1 for palm print ROI extraction is defined as:
(252) The loss function loss_2 for palm print image enhancement is defined as:
where f_i denotes the predicted value and y_i denotes the true value;
3) Verifying the trained multi-task convolutional neural network model on the verification set, outputting the results, and correcting them.
2. The palm print ROI extraction and enhancement method based on a multi-task convolutional neural network according to claim 1, wherein step 1) comprises the following steps:
(11) The captured palm print images contain various kinds of noise; the purpose of denoising is to provide denoised training samples for subsequent neural network training, and the palm print images are denoised with the BM3D denoising algorithm;
(12) The denoised palm print image is filtered and enhanced with Gabor wavelets; the Gabor wavelet transform is a type of wavelet transform that orthogonalizes the Gabor transform; the palm print image is decomposed with Gabor wavelets and then inverse-transformed, and during this process the important components are selected by adjusting the corresponding parameters, thereby enhancing the image;
(13) The palm print of the training sample is marked; the marked positions are the valley points between the middle finger and the other two fingers, and the mark size is x pixels;
(14) The palm print of the training sample is marked; the marked position is the palm print region of interest, and the mark size is y pixels × z pixels, where y = 144 and z = 128;
(15) Data augmentation is applied to the m palm print images obtained after the above operations; the augmentation consists of randomly rotating and horizontally flipping the images, and the images that still retain the main palm print information are selected as training samples for training the neural network.
3. The palm print ROI extraction and enhancement method based on a multi-task convolutional neural network according to claim 1 or 2, wherein step 3) comprises the following steps:
(31) The image to be processed is input into the trained multi-task convolutional neural network, the network output is compared with the original images in the test set, the loss function is calculated and back-propagated through the network until the loss function remains stable, and the result maps are output: a palm print calibration image and a palm print enhancement image;
(32) The calibration map of the palm print region of interest is corrected; the correction steps are as follows:
(321) The two valley points are connected, a straight line and the corresponding perpendicular line are drawn, and a new coordinate system is established from these two lines, with their intersection point as the origin. First, the palm print image is rotated about the origin O by a rotation angle θ: when θ is positive, the palm print image is rotated counterclockwise; otherwise it is rotated clockwise. Then, using the two valley points as references, the distances between the valley points and the palm print region of interest automatically calibrated by the neural network are measured;
(322) According to empirical rules, the distance between the left edge of the palm print region of interest and the line connecting the valley points should be one quarter of the distance between the two valley points, and the distance between the right edge and that line should be 1.5 times the distance between the two valley points; if these conditions are met, the calibration result does not need correction, otherwise correction is required;
(323) If the conditions are not met, the palm print calibration result is corrected automatically: the missing part of the palm print region of interest is filled in, and the redundant part is removed.
CN202010916060.3A 2020-09-03 2020-09-03 Palm print ROI extraction and enhancement method based on multitasking convolutional neural network Active CN112149521B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010916060.3A CN112149521B (en) 2020-09-03 2020-09-03 Palm print ROI extraction and enhancement method based on multitasking convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010916060.3A CN112149521B (en) 2020-09-03 2020-09-03 Palm print ROI extraction and enhancement method based on multitasking convolutional neural network

Publications (2)

Publication Number Publication Date
CN112149521A CN112149521A (en) 2020-12-29
CN112149521B 2024-05-07

Family

ID=73889268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010916060.3A Active CN112149521B (en) 2020-09-03 2020-09-03 Palm print ROI extraction and enhancement method based on multitasking convolutional neural network

Country Status (1)

Country Link
CN (1) CN112149521B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191987A (en) * 2021-05-31 2021-07-30 齐鲁工业大学 Palm print image enhancement method based on PCNN and Otsu
CN114140424B (en) * 2021-11-29 2023-07-18 佳都科技集团股份有限公司 Palm vein data enhancement method, palm vein data enhancement device, electronic equipment and medium
CN115527079B (en) * 2022-02-28 2023-07-14 腾讯科技(深圳)有限公司 Palm print sample generation method, device, equipment, medium and program product


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701513A (en) * 2016-01-14 2016-06-22 深圳市未来媒体技术研究院 Method of rapidly extracting area of interest of palm print
CN107977671A (en) * 2017-10-27 2018-05-01 浙江工业大学 A kind of tongue picture sorting technique based on multitask convolutional neural networks
CN108564097A (en) * 2017-12-05 2018-09-21 华南理工大学 A kind of multiscale target detection method based on depth convolutional neural networks
KR20200046169A (en) * 2018-10-17 2020-05-07 엔에이치엔 주식회사 Neural network system for detecting Palmprint and method for providing Palmprint-based fortune forecasting service

Also Published As

Publication number Publication date
CN112149521A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112149521B (en) Palm print ROI extraction and enhancement method based on multitasking convolutional neural network
Li et al. Residual u-net for retinal vessel segmentation
CN110348330A (en) Human face posture virtual view generation method based on VAE-ACGAN
CN110543822A (en) finger vein identification method based on convolutional neural network and supervised discrete hash algorithm
CN106709450A (en) Recognition method and system for fingerprint images
CN110245621B (en) Face recognition device, image processing method, feature extraction model, and storage medium
CN106991380A (en) A kind of preprocess method based on vena metacarpea image
CN112288645B (en) Skull face restoration model construction method and restoration method and system
CN113658040A (en) Face super-resolution method based on prior information and attention fusion mechanism
CN104091145A (en) Human palm vein feature image acquisition method
CN111914616A (en) Finger vein identification and anti-counterfeiting integrated method, device, storage medium and equipment
CN107481224A (en) Method for registering images and device, storage medium and equipment based on structure of mitochondria
CN106709431A (en) Iris recognition method and device
CN110232390A (en) Image characteristic extracting method under a kind of variation illumination
Guo et al. Multifeature extracting CNN with concatenation for image denoising
CN117196963A (en) Point cloud denoising method based on noise reduction self-encoder
CN109829857B (en) Method and device for correcting inclined image based on generation countermeasure network
CN117409030B (en) OCTA image blood vessel segmentation method and system based on dynamic tubular convolution
CN109559296B (en) Medical image registration method and system based on full convolution neural network and mutual information
Shen et al. CNN-based high-resolution fingerprint image enhancement for pore detection and matching
CN111310820A (en) Foundation meteorological cloud chart classification method based on cross validation depth CNN feature integration
Chen et al. A finger vein recognition algorithm based on deep learning
Aithal Two Dimensional Clipping Based Segmentation Algorithm for Grayscale Fingerprint Images
CN112435179B (en) Fuzzy pollen particle picture processing method and device and electronic equipment
CN111612083B (en) Finger vein recognition method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant