CN117133022B - Color image palm print recognition method and device, equipment and storage medium - Google Patents
- Publication number
- CN117133022B · Application CN202310607087.8A
- Authority
- CN
- China
- Prior art keywords
- training
- palm print
- pixel point
- graph
- palmprint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image processing and discloses a color image palm print recognition method. The method calculates 12 feature values of each pixel point in a color palm print image by using a Clifford algebraic function, and inputs the 12 feature values into a trained target palm print line detection model. Linear feature extraction is performed on the palm print edge binary image output by the model to obtain paired geometric feature vectors; the similarity between the paired geometric feature vectors and candidate feature vectors is calculated, and when the similarity reaches a preset similarity, the candidate feature vector is determined as the matching result. Because the Clifford algebraic function can calculate higher-dimensional feature values in the color palm print image, the extracted palm print lines are more refined and the recognition accuracy is improved. Meanwhile, the trained model judges edge points from the 12-dimensional feature values to extract the palm print edge lines, so no convolutional neural network is needed for the extraction and the computation running time is greatly reduced.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method, a device, equipment and a storage medium for identifying palmprints of color images.
Background
As a technical means for identifying the true identity of a user in the field of information security, human biometric recognition has become an important research topic in academia. Palm print recognition is one such biometric technology. Compared with biometric features such as fingerprints and faces, the palm print covers a relatively large area and contains more personal information; it also offers rotation invariance, uniqueness, low-cost acquisition equipment, and other advantages, so it is widely applied.
As the core step of palm print recognition, palm print feature extraction is the process of analyzing the preprocessed image, and it directly determines the recognition result. Common palm print feature extraction methods fall into the following categories: structure-based, statistics-based, subspace-based, time-frequency-analysis-based, encoding-based, template-based, and spectrum-based feature extraction.
The most common palm print extraction methods are based on gray-scale palm print images. However, a color palm print image carries richer information than a gray-scale one, so extracting features directly from the color palm print image yields finer personal information and is therefore more beneficial to biometric recognition.
To cope with the high-dimensional information content of color palm print images, current methods mostly train a convolutional neural network model to extract palm print features from them. In practice, however, the convolutional neural network model is found to require long training time and long computation running time, and it places high configuration requirements on the running platform.
Disclosure of Invention
The invention aims to provide a color image palm print recognition method, device, equipment, and storage medium that can improve recognition accuracy and reduce computation running time.
The first aspect of the invention discloses a color image palmprint recognition method, which comprises the following steps:
calculating 12 target feature values of each pixel point in the color palm print image by using the Clifford algebraic function; the 12 target feature values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
inputting the 12 target feature values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image;
performing linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors;
calculating the similarity between the paired geometric feature vectors and candidate feature vectors;
and when the similarity reaches the preset similarity, determining the candidate feature vector as a matching result.
The second aspect of the present invention discloses a color image palm print recognition device, comprising:
the first calculation unit is used for calculating 12 target feature values of each pixel point in the color palm print image by utilizing the Clifford algebraic function; the 12 target feature values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
the prediction unit is used for inputting the 12 target feature values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image;
the extraction unit is used for carrying out linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors;
a second calculation unit for calculating the similarity between the paired geometric feature vectors and the candidate feature vectors;
and the matching unit is used for determining the candidate feature vector as a matching result when the similarity reaches a preset similarity.
A third aspect of the invention discloses an electronic device comprising a memory storing executable program code and a processor coupled to the memory; the processor invokes the executable program code stored in the memory for performing the color image palmprint recognition method disclosed in the first aspect.
A fourth aspect of the present invention discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the color image palm print recognition method disclosed in the first aspect.
The method has the advantage that 12 target feature values of each pixel point in the color palm print image are calculated using the Clifford algebraic function, and the 12 target feature values are input into a trained target palm print line detection model. Linear feature extraction is performed on the palm print edge binary image output by the model to obtain paired geometric feature vectors; the similarity between the paired geometric feature vectors and candidate feature vectors is calculated, and when the similarity reaches a preset similarity, the candidate feature vector is determined as the matching result. Because the Clifford algebraic function is used to calculate higher-dimensional feature values in the color palm print image, the extracted palm print lines are more refined and the recognition accuracy is improved. At the same time, edge points are judged by the generalization capability of the trained model in order to extract the palm print edge lines, so no convolutional neural network is needed for the extraction and the computation running time is greatly reduced. In addition, during matching and identification of the extracted palm prints, matching is performed on the basis of paired geometric feature vectors whose geometric characteristics are invariant; the influence of rotation changes, noise interference and the like is therefore small, and robustness is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles and effects of the invention.
Unless specifically stated or otherwise defined, the same reference numerals in different drawings denote the same or similar technical features, and different reference numerals may be used for the same or similar technical features.
FIG. 1 is a flow chart of a method for identifying palmprint of a color image according to an embodiment of the present invention;
FIG. 2 is a three-dimensional template of pixel Clifford feature vectors according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of paired geometric feature vectors disclosed in an embodiment of the present invention;
FIG. 4 is a schematic structural view of a color image palm print recognition device according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Reference numerals illustrate:
401. a first calculation unit; 402. a prediction unit; 403. an extraction unit; 404. a second calculation unit; 405. a matching unit; 501. a memory; 502. a processor.
Detailed Description
So that the invention may be readily understood, a more particular description follows by reference to specific embodiments illustrated in the appended drawings.
Unless defined otherwise or otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. In the context of a realistic scenario in connection with the technical solution of the invention, all technical and scientific terms used herein may also have meanings corresponding to the purpose of the technical solution of the invention. The terms "first and second …" are used herein merely for distinguishing between names and not for describing a particular number or order. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "fixed" to another element, it can be directly fixed to the other element or intervening elements may also be present; when an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present; when an element is referred to as being "mounted to" another element, it can be directly mounted to the other element or intervening elements may also be present. When an element is referred to as being "disposed on" another element, it can be directly on the other element or intervening elements may also be present.
As used herein, unless specifically stated or otherwise defined, "the/said" indicates that the feature or technical content referred to at the corresponding position may be the same as or similar to a feature or technical content mentioned earlier. Furthermore, the terms "comprising," "including," and "having," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in fig. 1, an embodiment of the present invention discloses a color image palm print recognition method. The method may be executed by an electronic device such as a computer, a notebook computer, or a tablet computer, or by a color image palm print recognition device embedded in the electronic device; the invention is not limited in this respect. The method comprises the following steps 110-150:
110. Calculate 12 target feature values of each pixel point in the color palm print image by using the Clifford algebraic function. The 12 target feature values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.).
Prior to step 110, a target palm print line detection model may be obtained by training based on a back propagation (BP) neural network. The training process comprises the following steps 101-104:
101. Acquire a training sample graph.
In this step, edges of the palm print sample image may be extracted separately with multiple edge detection operators to obtain multiple first palm print line graphs in one-to-one correspondence with the operators. The edge detection operators include, but are not limited to, one or more of Canny, Roberts, Sobel, Laplacian, Prewitt, and Kirsch.
The plurality of first palm print line graphs are then fused to obtain a second palm print line graph. Specifically, the first palm print line graphs may be fused by an OR operation to form a preliminary palm print line image, i.e., the second palm print line graph. Finally, the training sample graph is determined according to the second palm print line graph. By fusing the first palm print line graphs detected independently by the respective operators, a more accurate second palm print line graph is obtained, improving the accuracy of edge localization in the training sample graph.
In some embodiments, determining the training sample graph from the second palm print line graph may comprise directly determining the second palm print line graph as the training sample graph. Alternatively, it may comprise performing a diamond dilation with a specified radius on the second palm print line graph to obtain a third palm print line graph, and determining the third palm print line graph as the training sample graph. Specifically, the palm print edge picture (i.e., the third palm print line graph) obtained by applying a diamond dilation of radius 1 to the second palm print line graph is used as the training sample picture for the subsequent BP neural network, as shown in the sketch below.
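The following is a minimal sketch of this training-sample construction under stated assumptions: the operator set, the binarization thresholds, and the hand-built diamond kernel are illustrative choices, not values fixed by this disclosure.

```python
import cv2
import numpy as np

def build_training_sample(gray, radius=1):
    """Build a training sample graph: fuse several first palm print line
    maps by an OR operation, then apply a diamond dilation of the given
    radius. `gray` is a uint8 grayscale palm print sample image."""
    # First palm print line maps from individual edge detectors.
    canny = cv2.Canny(gray, 50, 150) > 0
    grad = np.hypot(cv2.Sobel(gray, cv2.CV_64F, 1, 0),
                    cv2.Sobel(gray, cv2.CV_64F, 0, 1))
    sobel = grad > 0.5 * grad.max()
    lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
    laplace = lap > 0.5 * lap.max()

    # Second palm print line map: OR-fusion of the individual maps.
    fused = (canny | sobel | laplace).astype(np.uint8)

    # Diamond structuring element (radius 1 -> 3x3 cross shape).
    size = 2 * radius + 1
    yy, xx = np.mgrid[:size, :size]
    diamond = (np.abs(yy - radius) + np.abs(xx - radius) <= radius).astype(np.uint8)

    # Third palm print line map: diamond dilation of the fused edges.
    return cv2.dilate(fused, diamond) > 0
```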
102. Calculate 12 training feature values of each pixel point in the training sample graph by using the Clifford algebraic function.
In step 102, a Clifford algebraic function is constructed for each pixel point in the training sample graph, and the 12 training feature values of each pixel point are then calculated according to the analyticity of the Clifford algebraic function.
In constructing the Clifford algebraic function for each pixel point in the training sample graph, assume \((e_1, e_2, \ldots, e_n)\) is a set of orthogonal bases of a linear space over the real field \(R\), and let the Clifford algebra \(A_n\) be the associative algebra spanned by \((e_1, e_2, \ldots, e_n)\), satisfying

\(e_0 e_i = e_i e_0 = e_i,\ i = 1, 2, \ldots, n; \qquad e_i e_j = -e_j e_i,\ 1 \le i \ne j \le n.\)

The elements of \(A_n\) are called Clifford numbers, and any element \(x \in A_n\) has the form

\(x = \lambda_0 + \sum_A \lambda_A e_A; \quad A = (h_1, \ldots, h_p),\ 1 \le h_1 < \cdots < h_p \le n,\ 1 \le p \le n,\ \lambda_A \in R.\)

Clearly, \(A_n\) is a \(2^n\)-dimensional associative but non-commutative algebra.

If a Clifford number has the form \(x = \sum_{i=1}^{n} x_i e_i\), then \(x\) is called a Clifford vector. For any \(x \in A_n\), the Clifford modulus of \(x\) is defined as \(|x| = \big(\sum_A \lambda_A^2\big)^{1/2}\); in particular, the modulus of a Clifford vector \(x\) is \(|x| = \big(\sum_{i=1}^{n} x_i^2\big)^{1/2}\).

Let \(\Omega\) be an open connected set in \(R^n\) and define: let \(f \in C^{\infty}(\Omega, A_n)\); if

\(Df = \sum_{i=1}^{n} e_i \frac{\partial f}{\partial x_i} = 0,\)

then \(f\) is called a left Clifford analytic function on \(\Omega\), where \(D = \sum_{i=1}^{n} e_i \frac{\partial}{\partial x_i}\), \(n\) denotes the dimension, \(e_i\) denotes each element of the Clifford orthogonal basis, and \(x_i\) denotes the variables of the function \(f\).

Thus the Clifford function of a pixel point can be defined as the vector function of a twelve-dimensional vector space:

\(f(x) = f_1 e_1 + f_2 e_2 + f_3 e_3 + f_4 e_4 + f_5 e_5 + f_6 e_6 + f_7 e_7 + f_8 e_8 + f_9 e_9 + f_{10} e_{10} + f_{11} e_{11} + f_{12} e_{12}.\)

Here \(f_1, f_2, \ldots, f_{12}\) are the values associated with the imaginary units \(e_1, e_2, \ldots, e_{12}\) of the vector function, taken from the R, G, B component values of the four neighborhood points in front of, behind, to the left of, and to the right of the corresponding pixel point; and \(x_1, x_2, \ldots, x_{12}\) are the coordinate values of the neighborhood points in twelve directions around the pixel point (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.). The twelve neighbors of a pixel are shown in FIG. 2.

Expanding the defining equation of the left Clifford analytic function, \(Df = \sum_{i=1}^{12} e_i \frac{\partial f}{\partial x_i} = 0\), yields a system of equations in the partial derivatives \(\partial f / \partial x_1, \ldots, \partial f / \partial x_{12}\) of the vector function \(f(x)\) with respect to the coordinate components \((x_1, x_2, \ldots, x_{12})\). Rewriting this system and expanding gives 12 feature values \(a_1, a_2, \ldots, a_{12}\); their explicit expressions are given as display equations in the original publication.

Thus, in step 102, the 12 training feature values of each pixel point of the training sample graph are calculated exactly according to the expressions for \(a_1\) to \(a_{12}\) above.
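As an illustration of the per-pixel construction, the sketch below assembles the twelve components f_1 to f_12 from the R, G, B values of the four neighborhood points as defined above; the component ordering and the wrap-around border handling are assumptions, and the closed-form expressions a_1 to a_12 (given as display equations in the original) are not reproduced here.

```python
import numpy as np

def clifford_components(rgb):
    """Assemble the twelve components f_1..f_12 of the pixel Clifford
    vector function from the R, G, B values of the four neighbourhood
    points (front/behind/left/right of each pixel).

    rgb: (H, W, 3) float array. Returns an (H, W, 12) array. Border
    pixels wrap around via np.roll; real code would pad instead."""
    front = np.roll(rgb,  1, axis=0)   # neighbour one row above
    back  = np.roll(rgb, -1, axis=0)   # neighbour one row below
    left  = np.roll(rgb,  1, axis=1)   # neighbour one column to the left
    right = np.roll(rgb, -1, axis=1)   # neighbour one column to the right
    return np.concatenate([front, back, left, right], axis=-1)
```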
103. Determine the palm print line label graph corresponding to the training sample graph according to the training feature values.
As an alternative embodiment, step 103 may include the following steps 1031-1032:
1031. Judge whether each pixel point in the training sample graph is a training edge point according to the training feature values.
Considering that not all pixels of an actual image strictly satisfy the above analyticity condition, according to the analyticity theorem of Clifford algebra an appropriate specified threshold T can be used to judge whether a pixel point satisfies analyticity; a pixel point that does not satisfy analyticity is judged to be an edge point.
Based on this, judging whether each pixel point in the training sample graph is a training edge point specifically comprises: judging whether each of the 12 training feature values of the pixel point is smaller than the specified threshold T. If all 12 training feature values of the pixel point are smaller than the specified threshold, the pixel point is judged not to be a training edge point; if the 12 training feature values of the pixel point are not all smaller than the specified threshold, the pixel point is judged to be a training edge point. Here, "not all smaller than the specified threshold" means that at least one training feature value is not smaller than the specified threshold.

The specified threshold may be set to T = 0.4: if all 12 training feature values of a pixel point are smaller than T, the pixel point is considered a non-edge point; otherwise it is considered an edge point.
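A minimal sketch of this decision rule, assuming the 12 training feature values of all pixels are stored in an (H, W, 12) array:

```python
import numpy as np

def training_edge_points(features, T=0.4):
    """features: (H, W, 12) per-pixel training feature values.

    A pixel is not a training edge point only when all 12 values are
    smaller than the specified threshold T; otherwise it is one.
    Comparing absolute values is an assumption in case the a_i can be
    negative."""
    return np.any(np.abs(features) >= T, axis=-1)  # True -> training edge point
```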
1032. Determine the palm print line label graph corresponding to the training sample graph according to the training edge points.
Finally, all pixel points of the training sample graph are traversed, and the pixel points judged to be training edge points are combined to obtain the palm print line label graph corresponding to the training sample graph.
104. Input the 12 training feature values of each pixel point in the training sample graph and the corresponding palm print line label graph into an error back propagation neural network for training to obtain the target palm print line detection model.
The 12 training feature values of each pixel point in the training sample graph are taken as the input, and the corresponding palm print line label graph as the label; both are fed into the BP neural network, and classification memory training is performed on the data through the self-learning capability of the BP neural network to learn the model parameters (including the weights and thresholds), thereby obtaining the target palm print line detection model. Specifically, during training of the BP neural network, the error value between the prediction graph output by the network and the corresponding palm print line label graph is calculated; if the error value converges to a set threshold, the convergence condition is judged to be satisfied. Preferably, the set threshold may be 0.001.
120. Inputting the 12 target characteristic values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image.
The trained target palm print line detection model (i.e., the BP neural network) comprises an input layer, a hidden layer and an output layer connected in sequence; its overall structure is 12 × 20 × 2.
In the practical application of predicting a color palm print image to be identified, i.e., in step 110, the 12 target feature values a_1, a_2, …, a_12 of each pixel point of the color palm print image are calculated according to the expressions for a_1 to a_12 above. These 12 target feature values are then fed to the input layer of the BP neural network, so the input layer has 12 nodes. The output palm print extraction result is a binary image, so the output layer of the BP neural network has 2 nodes.
The number of units in the hidden layer is set to 20. The transfer function of the input layer adopts the Tan-Sigmoid function, the transfer function of the output layer adopts the Log-Sigmoid function, the activation function of the hidden layer adopts the S-shaped Sigmoid function, and the BP neural network is trained with the Levenberg-Marquardt (L-M) optimization algorithm.
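As a sketch, the 12 × 20 × 2 network can be written in PyTorch as follows; PyTorch has no built-in Levenberg-Marquardt trainer, so Adam is substituted here, with tanh and sigmoid standing in for the Tan-Sigmoid and Log-Sigmoid transfer functions.

```python
import torch
import torch.nn as nn

# 12 x 20 x 2 BP network: 12 input nodes (feature values a_1..a_12),
# 20 hidden units, 2 output nodes (binary edge / non-edge map).
model = nn.Sequential(
    nn.Linear(12, 20), nn.Tanh(),     # hidden layer, Tan-Sigmoid transfer
    nn.Linear(20, 2), nn.Sigmoid(),   # output layer, Log-Sigmoid transfer
)

def train(model, x, y, epochs=500, tol=1e-3):
    """x: (N, 12) per-pixel feature values; y: (N, 2) edge labels."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)  # stand-in for L-M
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        if loss.item() < tol:  # error converged to the 0.001 threshold
            break
    return model
```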
130. Perform linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors.
After the palm print extraction result, namely the palm print edge binary image, is obtained with the BP neural network, palm print identification is further performed on it. Specifically, the Hough transform algorithm is used to extract linear features from the palm print edge binary image; paired geometric features, namely the directed relative angle and the directed relative position, are then introduced to construct the feature vectors of the palm print edges, giving the paired geometric feature vectors shown in FIG. 3. The paired geometric feature vectors include a directed relative angle feature vector and a directed relative position feature vector.
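A sketch of the line-segment extraction step using OpenCV's probabilistic Hough transform is shown below; all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_segments(edge_binary):
    """Extract line segments from the palm print edge binary image with
    the probabilistic Hough transform. Returns (x1, y1, x2, y2) tuples."""
    segs = cv2.HoughLinesP(edge_binary.astype(np.uint8) * 255,
                           rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=20, maxLineGap=5)
    return [] if segs is None else [tuple(s[0]) for s in segs]
```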
Represent any two line segments as vectors \(\vec{u}\) and \(\vec{v}\), with directions pointing away from their intersection point. The directed relative angle feature vector is computed from the angle between the two segment vectors (its expression is given as a display equation in the original publication); if the direction of the included angle between the two segments is clockwise, the sign of the relative angle is positive, otherwise it is negative. As shown in FIG. 3, the directed relative position feature vector is computed analogously from the relative placement of the two segments (its expression is likewise given as a display equation in the original publication).
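Because the exact expressions appear as display equations in the original publication, the sketch below is only a plausible realization of the directed relative angle; the sign convention is an assumption consistent with the clockwise rule stated above.

```python
import numpy as np

def directed_relative_angle(u, v):
    """Signed angle between segment vectors u and v, both directed away
    from their intersection point. Positive when the turn from u to v is
    clockwise (image y-axis pointing down), negative otherwise."""
    u = np.asarray(u, float); v = np.asarray(v, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
    cross = u[0] * v[1] - u[1] * v[0]  # z-component of the 2-D cross product
    return angle if cross > 0 else -angle
```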
140. Calculate the similarity between the paired geometric feature vectors and the candidate feature vectors.
As an alternative embodiment, step 140 may include the following steps 1401-1404:
1401. Determine a target two-dimensional histogram from the paired geometric feature vectors.
After the paired geometric feature vectors are calculated, they are counted with a two-dimensional feature histogram for convenience of matching. The target two-dimensional histogram is computed over every pair of line segments i and j extracted from the palm print edge binary image, where E denotes the edge set (the histogram formula is given as a display equation in the original publication).
1402. Perform dimension reduction on the target two-dimensional histogram to obtain the target one-dimensional histogram.
To facilitate matching, the target two-dimensional histogram calculated in step 1401 is scanned row by row, reducing its dimension to obtain the target one-dimensional histogram A.
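As a sketch, the row-by-row scan amounts to a C-order ravel, assuming the histogram is stored as a NumPy array:

```python
import numpy as np

def reduce_histogram(H2):
    """Row-by-row scan of the target two-dimensional histogram into the
    target one-dimensional histogram A (C-order ravel scans rows first)."""
    return np.asarray(H2).ravel()
```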
1403. Acquire the candidate one-dimensional histogram corresponding to the candidate feature vector.
A plurality of candidate feature vectors stored in the database are then acquired and matched one by one. Each candidate feature vector corresponds to a specific user; when the candidate feature vector matching a specific user is identified, the palm print of that user is recognized. Each candidate feature vector has a correspondingly stored candidate one-dimensional histogram B for matching against the target one-dimensional histogram A to be identified.
1404. Calculate the similarity between the target one-dimensional histogram and the candidate one-dimensional histogram.
Specifically, the target one-dimensional histogram A and the candidate one-dimensional histogram B corresponding to the two palm print images are normalized into n bins, and the distance d between the two histograms is computed (the distance formula is given as a display equation in the original publication).

The distance d lies in the interval [0, 1], and its size determines the similarity of the two images: the smaller d is, the more similar they are. The similarity between the target one-dimensional histogram and each candidate one-dimensional histogram is therefore determined from the value of d; similarity and d are negatively correlated, i.e., the smaller d is, the higher the similarity.
150. When the similarity reaches the preset similarity, determine the candidate feature vector as the matching result.
After the similarity between the target one-dimensional histogram and a candidate one-dimensional histogram is calculated, it is compared with the preset similarity; if it reaches the preset similarity, histograms A and B are considered to come from the same person, and the candidate feature vector is determined as the matching result. For example, when d < 0.1, the similarity may be judged to reach the preset similarity, and histograms A and B are considered to come from the same person.
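Since the distance formula itself appears as a display equation in the original, the sketch below assumes a normalized L1 distance, which likewise lies in [0, 1] and decreases as similarity increases; the 0.1 decision threshold follows the text.

```python
import numpy as np

def histogram_distance(A, B):
    """Normalized L1 distance between two one-dimensional histograms —
    an assumed form of the patent's distance d: it lies in [0, 1] for
    normalized histograms and is smaller for more similar images."""
    A = np.asarray(A, float); B = np.asarray(B, float)
    A, B = A / A.sum(), B / B.sum()
    return 0.5 * np.abs(A - B).sum()

def is_match(A, B, threshold=0.1):
    # d < 0.1 -> the two palm prints are judged to come from the same person
    return histogram_distance(A, B) < threshold
```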
Therefore, by implementing the embodiment of the invention, higher-dimensional feature values in the color palm print image can be calculated with the Clifford algebraic function, so the extracted palm print lines are more refined and the recognition accuracy is improved. At the same time, the generalization capability of the trained model is used to judge edge points from the 12-dimensional feature values and extract the palm print edge lines, so no convolutional neural network is needed for the extraction and the computation running time is greatly reduced. In addition, during matching and identification of the extracted palm prints, matching is performed on the basis of paired geometric feature vectors whose geometric characteristics are invariant; the influence of rotation changes, noise interference and the like is therefore small, and robustness is improved.
To verify the effect of the target palm print line detection model in the embodiment of the invention, the experimental results are analyzed below.
(1) Overall performance test of identification method
The color palm print recognition method based on Clifford algebra (the present method) and the color palm print recognition method based on octonions were each used to perform matching recognition against their respectively established feature libraries with the same sample pictures. Meanwhile, the convolutional neural network recognition method popular in recent years was used to recognize the palm print lines (learning rate 0.03, 50 convolution kernels). The final recognition performance comparison of the three methods is shown in Table 1 below.
Table 1 Recognition performance comparison of the three methods

Method | Recognition success rate (%) | Average run time (s)
---|---|---
Octonion-based color palm print recognition method | 96 | 7.604
Convolutional neural network recognition method | 99.3 | 204
Recognition method provided by the invention | 98.6 | 5.631
As can be seen from the data in Table 1, compared with the octonion-based color palm print feature extraction and recognition method, the palm print recognition method provided by the invention achieves a higher recognition success rate and higher operation efficiency. Because the Clifford algebra used in the invention can describe higher-dimensional data features, its resolving power is better than that of the octonion representation; and since more feature-value vectors are input, the extracted edge information is richer.
Although the recognition rate of the convolutional neural network method is slightly higher than that of the proposed method, its overall running time is much longer because the convolutional network must be trained; considering recognition rate and running time together, the palm print recognition method provided by the invention is therefore superior to the convolutional neural network recognition method. The comparison of experimental recognition performance shows that the proposed palm print recognition model achieves both a high recognition rate and high operation efficiency.
(2) Robustness testing of identification methods
In the palm print matching and identification process, the invention uses paired geometric features with geometric invariance for identification, which gives high robustness to rotation changes and noise interference; this is verified by the following experiments. First, the original test images are rotated counterclockwise by 30°, 45°, and 60°.
The palm print feature extraction and recognition method provided by the invention, the matching method based on the octonion vector-product representation, and the convolutional neural network palm print recognition method were each used to perform matching recognition against their respectively established feature libraries with the sample pictures. The final recognition performance comparison of the three methods is shown in Table 2 below.
Table 2 Recognition performance comparison of the three methods under rotation (table body given as an image in the original publication)
As can be seen from the data in Table 2, compared with the matching method based on the octonion vector-product representation, both the palm print recognition method provided by the invention and the convolutional neural network method are robust to rotation, and their recognition success rates are far greater than that of the octonion-based matching method. However, although the recognition rate of the convolutional neural network method is high, Table 1 has already shown that it requires a large amount of training time.
Next, Gaussian noise with two different parameter settings, μ = 0, σ = 0.001 and μ = 0, σ = 0.005, is added to the original test images. The palm print feature extraction and recognition method provided by the invention, the matching method based on the octonion vector-product representation, and the convolutional neural network palm print recognition method were each used to perform matching recognition against their respectively established feature libraries with the noise-corrupted sample pictures. The final recognition performance comparison of the three methods is shown in Table 3 below.
Table 3 Recognition performance comparison of the three methods under noise interference (table body given as an image in the original publication)
As can be seen from the data in Table 3, compared with the matching method based on the octonion vector-product representation, both the palm print recognition method provided by the invention and the convolutional neural network method are robust to noise interference, and their recognition success rates are also greater than that of the octonion-based matching method. However, although the recognition rate of the convolutional neural network method is high, Table 1 has already shown that it requires a large amount of training time.
As shown in fig. 4, an embodiment of the present invention discloses a color image palm print recognition device, which includes a first calculating unit 401, a predicting unit 402, an extracting unit 403, a second calculating unit 404, and a matching unit 405, wherein,
a first calculating unit 401, configured to calculate 12 target feature values of each pixel point in the color palm print image by using a Clifford algebraic function; the 12 target feature values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
a prediction unit 402, configured to input the 12 target feature values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image;
an extracting unit 403, configured to perform linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors;
a second calculating unit 404, configured to calculate a similarity between the pair of geometric feature vectors and the candidate feature vector;
and the matching unit 405 is configured to determine the candidate feature vector as a matching result when the similarity reaches a preset similarity.
Optionally, the color image palm print recognition device may further include the following units not shown:
an obtaining unit 406, configured to obtain a training sample graph before the predicting unit 402 inputs the 12 target feature values into the trained target palm print line detection model to obtain the palm print edge binary image of the color palm print image;
a third calculation unit 407, configured to calculate 12 training feature values of each pixel point in the training sample graph by using a Clifford algebraic function;
the labeling unit 408 is configured to determine a palmprint line label graph corresponding to the training sample graph according to the training feature value;
the training unit 409 is configured to input the 12 training feature values of each pixel point in the training sample map and the corresponding palmprint line label map into the error back propagation neural network for training, so as to obtain a target palmprint line detection model.
As an alternative embodiment, the acquisition unit 406 may include the following sub-units, not shown:
the extraction sub-unit is used for respectively extracting edges of the palm print sample image by utilizing a plurality of edge detection operators to obtain a plurality of first palm print line graphs which are in one-to-one correspondence with the plurality of edge detection operators;
the fusion sub-unit is used for carrying out fusion processing on the plurality of first palmprint line graphs to obtain a second palmprint line graph;
and the determining subunit is used for determining a training sample graph according to the second palmprint line graph.
Further optionally, the determining subunit is specifically configured to perform diamond expansion processing with a specified radius on the second palm print line graph to obtain a third palm print line graph; and determining the third palmprint line map as a training sample map.
As an optional implementation manner, the second calculating unit 404 is specifically configured to determine a target two-dimensional histogram according to the paired geometric feature vectors, and perform dimension reduction on the target two-dimensional histogram to obtain a target one-dimensional histogram; and obtaining a candidate one-dimensional histogram corresponding to the candidate feature vector, and calculating the similarity between the target one-dimensional histogram and the candidate one-dimensional histogram.
As shown in fig. 5, an embodiment of the present invention discloses an electronic device comprising a memory 501 storing executable program code and a processor 502 coupled to the memory 501;
the processor 502 calls executable program codes stored in the memory 501, and executes the color image palm print recognition method described in the above embodiments.
The embodiments of the present invention also disclose a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the color image palm print recognition method described in the above embodiments.
The foregoing embodiments are provided for the purpose of exemplary reproduction and deduction of the technical solution of the present invention, and are used for fully describing the technical solution, the purpose and the effects of the present invention, and are used for enabling the public to understand the disclosure of the present invention more thoroughly and comprehensively, and are not used for limiting the protection scope of the present invention.
The above examples are also not an exhaustive list based on the invention, and there may be a number of other embodiments not listed. Any substitutions and modifications made without departing from the spirit of the invention are within the scope of the invention.
Claims (7)
1. The color image palmprint recognition method is characterized by comprising the following steps:
calculating 12 target characteristic values of each pixel point in the color palm print image by using the Clifford algebraic function; the 12 target characteristic values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
inputting the 12 target characteristic values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image;
performing linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors;
calculating the similarity between the paired geometric feature vectors and candidate feature vectors;
when the similarity reaches a preset similarity, determining the candidate feature vector as a matching result;
the method further comprises the steps of:
acquiring a training sample graph;
calculating 12 training characteristic values of each pixel point in the training sample diagram by using the Clifford algebraic function;
determining a palmprint line label graph corresponding to the training sample graph according to the training characteristic value;
inputting 12 training characteristic values of each pixel point in the training sample diagram and the corresponding palm print line label diagram into an error back propagation neural network for training to obtain a target palm print line detection model;
the method for calculating the 12 training characteristic values of each pixel point in the training sample graph by utilizing the Clifford algebraic function comprises the following steps:
constructing a Clifford algebraic function for each pixel point in the training sample graph, and calculating the 12 training characteristic values of each pixel point according to the analyticity of the Clifford algebraic function; wherein the Clifford function of the pixel point is defined as the vector function of a twelve-dimensional vector space, \(f(x) = f_1 e_1 + f_2 e_2 + \cdots + f_{12} e_{12}\);
wherein \(f_1, f_2, \ldots, f_{12}\) are the values corresponding to the imaginary units \(e_1, e_2, \ldots, e_{12}\) of the vector function, taken as the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point; \(x_1, x_2, \ldots, x_{12}\) are the coordinate values of the neighborhood points in twelve directions around the pixel point (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
the method for determining the palmprint line label graph corresponding to the training sample graph according to the training characteristic value comprises the following steps:
judging whether each pixel point in the training sample graph is a training edge point or not according to the training characteristic value;
determining a palmprint line label graph corresponding to the training sample graph according to the training edge points;
the method for judging whether each pixel point in the training sample graph is a training edge point specifically comprises: judging whether each training characteristic value of each pixel point is smaller than a specified threshold value; if all 12 training characteristic values of the pixel point are smaller than the specified threshold value, judging that the pixel point is not a training edge point; if the 12 training characteristic values of the pixel point are not all smaller than the specified threshold value, judging the pixel point to be a training edge point.
2. The color image palmprint recognition method of claim 1, wherein the acquiring a training sample map includes:
respectively extracting edges of the palm print sample image by utilizing a plurality of edge detection operators to obtain a plurality of first palm print line graphs corresponding to the plurality of edge detection operators one by one;
performing fusion processing on the plurality of first palmprint line graphs to obtain a second palmprint line graph;
and determining a training sample graph according to the second palmprint line graph.
3. The color image palmprint recognition method of claim 2, wherein determining a training sample map from the second palmprint line map includes:
performing diamond type expansion treatment on the second palmprint line graph with a specified radius to obtain a third palmprint line graph;
and determining the third palmprint line graph as a training sample graph.
4. A color image palm print recognition method according to any one of claims 1 to 3, wherein calculating the similarity of the paired geometric feature vectors to candidate feature vectors comprises:
determining a target two-dimensional histogram according to the paired geometric feature vectors;
performing dimension reduction on the target two-dimensional histogram to obtain a target one-dimensional histogram;
acquiring a candidate one-dimensional histogram corresponding to the candidate feature vector;
and calculating the similarity between the target one-dimensional histogram and the candidate one-dimensional histogram.
5. A color image palm print recognition device, comprising:
the first calculation unit is used for calculating 12 target characteristic values of each pixel point in the color palm print image by utilizing the Clifford algebraic function; the 12 target characteristic values are calculated according to the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point and the coordinate values of the neighborhood points in twelve directions around it (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
the prediction unit is used for inputting the 12 target characteristic values into a trained target palm print line detection model to obtain a palm print edge binary image of the color palm print image;
the extraction unit is used for carrying out linear feature extraction on the palm print edge binary image to obtain paired geometric feature vectors;
a second calculation unit for calculating the similarity between the paired geometric feature vectors and the candidate feature vectors;
the matching unit is used for determining the candidate feature vector as a matching result when the similarity reaches a preset similarity;
wherein, the color image palm print recognition device further comprises:
the obtaining unit is used for obtaining a training sample graph before the predicting unit inputs the 12 target characteristic values into the trained target palm print line detection model to obtain the palm print edge binary image of the color palm print image;
the third calculation unit is used for calculating 12 training characteristic values of each pixel point in the training sample graph by utilizing the Clifford algebraic function;
the marking unit is used for determining a palmprint line label graph corresponding to the training sample graph according to the training characteristic value;
the training unit is used for inputting the 12 training characteristic values of each pixel point in the training sample graph and the corresponding palm print line label graph into the error back propagation neural network for training to obtain a target palm print line detection model;
the third calculation unit is specifically configured to construct a Clifford algebraic function for each pixel point in the training sample graph, and to calculate the 12 training characteristic values of each pixel point according to the analyticity of the Clifford algebraic function; wherein the Clifford function of the pixel point is defined as the vector function of a twelve-dimensional vector space, \(f(x) = f_1 e_1 + f_2 e_2 + \cdots + f_{12} e_{12}\); wherein \(f_1, f_2, \ldots, f_{12}\) are the values corresponding to the imaginary units \(e_1, e_2, \ldots, e_{12}\) of the vector function, taken as the R, G, B component values of the four neighborhood points in front of, behind, to the left of and to the right of the corresponding pixel point; \(x_1, x_2, \ldots, x_{12}\) are the coordinate values of the neighborhood points in twelve directions around the pixel point (front, rear, left, right, up, down, upper left, lower right, upper right, lower left, etc.);
the labeling unit is specifically configured to determine whether each pixel point in the training sample graph is a training edge point according to the training feature value; determining a palmprint line label graph corresponding to the training sample graph according to the training edge points;
the labeling unit is used for judging whether each pixel point in the training sample graph is a training edge point in a manner that specifically comprises: judging whether each training characteristic value of each pixel point is smaller than a specified threshold value; if all 12 training characteristic values of the pixel point are smaller than the specified threshold value, judging that the pixel point is not a training edge point; if the 12 training characteristic values of the pixel point are not all smaller than the specified threshold value, judging the pixel point to be a training edge point.
6. An electronic device comprising a memory storing executable program code and a processor coupled to the memory; the processor invokes the executable program code stored in the memory for performing the color image palm print identification method of any one of claims 1 to 4.
7. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the color image palm print recognition method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310607087.8A CN117133022B (en) | 2023-05-26 | 2023-05-26 | Color image palm print recognition method and device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310607087.8A CN117133022B (en) | 2023-05-26 | 2023-05-26 | Color image palm print recognition method and device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117133022A CN117133022A (en) | 2023-11-28 |
CN117133022B true CN117133022B (en) | 2024-02-27 |
Family
ID=88857172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310607087.8A Active CN117133022B (en) | 2023-05-26 | 2023-05-26 | Color image palm print recognition method and device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117133022B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123537A (en) * | 2014-07-04 | 2014-10-29 | 西安理工大学 | Rapid authentication method based on handshape and palmprint recognition |
CN104182764A (en) * | 2014-08-19 | 2014-12-03 | 田文胜 | Pattern recognition system |
CN110852216A (en) * | 2019-10-30 | 2020-02-28 | 平安科技(深圳)有限公司 | Palm print verification method and device, computer equipment and readable storage medium |
CN114596639A (en) * | 2022-05-10 | 2022-06-07 | 富算科技(上海)有限公司 | Biological feature recognition method and device, electronic equipment and storage medium |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104123537A (en) * | 2014-07-04 | 2014-10-29 | 西安理工大学 | Rapid authentication method based on handshape and palmprint recognition |
CN104182764A (en) * | 2014-08-19 | 2014-12-03 | 田文胜 | Pattern recognition system |
CN104820817A (en) * | 2014-08-19 | 2015-08-05 | 崔明 | Four-dimensional code, image identification system and method as well as retrieval system and method based on four-dimensional code |
CN110852216A (en) * | 2019-10-30 | 2020-02-28 | 平安科技(深圳)有限公司 | Palm print verification method and device, computer equipment and readable storage medium |
CN114596639A (en) * | 2022-05-10 | 2022-06-07 | 富算科技(上海)有限公司 | Biological feature recognition method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN117133022A (en) | 2023-11-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |