CN113158887A - Electronic signature authentication method and equipment for improving identification accuracy of electronic signature - Google Patents
- Publication number: CN113158887A
- Application number: CN202110420441.7A
- Authority
- CN
- China
- Legal status: Granted (status as listed by Google Patents; not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/30—Writer recognition; Reading and verifying signatures
- G06V40/37—Writer recognition; Reading and verifying signatures based only on signature signals such as velocity or pressure, e.g. dynamic signature recognition
- G06V40/382—Preprocessing; Feature extraction
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention relates to an electronic signature authentication method for improving the identification accuracy of an electronic signature, which comprises the following steps: collecting signature information; preprocessing the signature information: creating a signature track layer and restoring the signature track on it according to the coordinate information of the track; creating a signature pressure layer, obtaining pressure values from the pressure information, and setting each pressure value on the pixel point corresponding to the track's coordinate information on the signature pressure layer; merging the signature track layer and the signature pressure layer to generate a feature map; inputting the feature map into a trained convolutional neural network, which compares it with the signature template map of the signature and outputs an authentication result. The invention also provides an electronic signature authentication device for improving the identification accuracy of an electronic signature. The invention has the advantages that both the signature handwriting features and the handwriting pressure features captured during signing are extracted and recognized by a convolutional neural network, which greatly improves the recognition accuracy.
Description
Technical Field
The invention relates to an electronic signature authentication method and equipment for improving the identification accuracy of an electronic signature, and belongs to the field of information security.
Background
At present, electronic signature handwriting authentication methods use DTW (Dynamic Time Warping) to align signature strokes and calculate stroke similarity. This requires manually constructed features, such as signing duration, number of strokes, and signature size, followed by classification with a classifier. Many experiments are therefore needed to find a good feature combination, and if the constructed features are not ideal, the final accuracy suffers.
Existing electronic signature handwriting authentication also uses a convolutional network to extract features of the signature handwriting and calculates feature similarity, which improves identification accuracy, but it still struggles to distinguish forged signatures that closely imitate the genuine handwriting.
Disclosure of Invention
In order to solve the technical problem, the invention provides an electronic signature authentication method for improving the identification accuracy of an electronic signature.
The technical scheme of the invention is as follows:
an electronic signature authentication method for improving the identification accuracy of an electronic signature comprises the following steps: acquiring signature information through a terminal capable of capturing an electronic signature, the signature information comprising a timestamp, coordinate information, and pressure information for each pixel point of the electronic signature track; preprocessing the signature information: creating a signature track layer and restoring the signature track on it according to the coordinate information of the track; creating a signature pressure layer, obtaining pressure values from the pressure information, and setting each pressure value on the pixel point corresponding to the track's coordinate information on the signature pressure layer; merging the signature track layer and the signature pressure layer to generate a feature map; inputting the feature map into a trained convolutional neural network, which compares it with the signature template map of the signature and outputs an authentication result.
Preferably, restoring the signature track on the signature track layer according to the coordinate information of the track specifically comprises: first, presetting the gray value of the signature track as A and the gray value of the non-track region as B; then, according to the coordinate information of the track, setting the gray value of each pixel point corresponding to the coordinate information to A and the gray value of every pixel point in the non-track region to B on the signature track layer. Obtaining pressure values from the pressure information and setting them on the pixel points corresponding to the track's coordinate information on the signature pressure layer specifically comprises: scaling each pressure value proportionally so that it falls within the gray-value range; then, according to the coordinate information of the track, setting the gray value of each corresponding pixel point on the signature pressure layer to the scaled pressure value, and setting the gray value of every pixel point in the non-track region of the signature pressure layer to B.
Preferably, merging the signature track layer and the signature pressure layer to generate a feature map specifically comprises: according to the coordinate information of the track, taking the gray values of each corresponding pixel point on the signature track layer and the signature pressure layer as the two attribute values of that pixel point on the feature map, while the attribute values of every pixel point in the non-track region are (B, B).
Preferably, the training process of the convolutional neural network is as follows:
step 1, creating a signature information data set, traversing each signature information, and executing the preprocessing step;
step 2, randomly selecting the feature maps corresponding to two pieces of signature information from the signature information data set to form a data pair, then labeling the data pair, the label indicating whether both pieces of signature information in the pair are correct signatures, and finally dividing the signature information data set into a training set, a verification set, and a test set;
step 3, building a convolutional neural network model that takes each data pair as input and outputs a feature vector for each piece of signature information; the convolutional neural network model uses a contrastive loss as its loss function:

L = (1/(2N)) · Σᵢ [ yᵢ·dᵢ² + (1 − yᵢ)·max(margin − dᵢ, 0)² ]

wherein y represents the label, margin is a boundary parameter among the hyper-parameters, d represents the feature distance, N represents the total amount of training data, and L represents the loss value;
step 4, setting the hyper-parameters of the convolutional neural network model, the hyper-parameters comprising the value set of margin and the initial learning rate, where the value set of margin comprises a plurality of margin values;
step 5, configuring the data in the training set according to the hyper-parameters to perform model training, selecting a margin value from the value set of margin for calculating the contrastive loss function;
step 6, testing the identification accuracy of the convolutional neural network by using the verification set;
step 7, testing the identification accuracy of the convolutional neural network by using the test set;
step 8, judging whether the value set of margin has been fully traversed: if so, the identification accuracy corresponding to each margin value has been obtained, and step 9 is executed; if not, reselecting a margin value from the value set and returning to step 5 to train again;
step 9, judging whether the precision of the margin values in the current value set of margin meets the expected requirement: if so, determining the highest identification accuracy among the obtained identification accuracies, recording the optimal comparison threshold corresponding to the margin value with the highest identification accuracy, and finishing training; if not, updating the value set of margin around the margin value with the highest identification accuracy, narrowing the value range of the updated set while keeping that margin value inside it, and then returning to step 5 to restart training.
Preferably, in step 6 and step 7, testing the identification accuracy of the convolutional neural network comprises: first, traversing the values between 0 and the margin value with a preset step length to obtain a group of comparison thresholds {T1, T2, T3, …, Tm}; second, calculating the feature vector of each piece of signature information in every data pair through the convolutional neural network model and computing the Euclidean distance between the two feature vectors of each pair; then comparing that Euclidean distance with a comparison threshold Ti: if the distance is less than Ti, the two samples are judged to be the same, and if it is greater than Ti, they are judged to be different; next, comparing each judgment result with the labeling result, marking it correct if they are consistent and incorrect otherwise, and calculating the identification accuracy for the threshold Ti; finally, repeating this procedure for T1 through Tm to obtain m identification accuracies, recording the best of them as the identification accuracy of the current margin value, and taking the comparison threshold that produced it as the optimal comparison threshold for the current margin value.
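The threshold sweep described above can be sketched in Python. This is an illustrative sketch, not the patented implementation; the function name, the example distances, and the labels are all assumptions:

```python
def sweep_thresholds(distances, labels, margin, step):
    """Try comparison thresholds T1..Tm between 0 and margin, in increments of
    `step`; a pair is judged 'same' (1) when its distance is below the threshold.
    Returns the best identification accuracy and the threshold that achieved it."""
    best_acc, best_t = 0.0, None
    t = step
    while t <= margin + 1e-9:
        correct = sum(1 for d, y in zip(distances, labels)
                      if (1 if d < t else 0) == y)
        acc = correct / len(labels)
        if acc > best_acc:   # keep the first threshold reaching the best accuracy
            best_acc, best_t = acc, t
        t += step
    return best_acc, best_t

# Four toy data pairs: label 1 = same signer, 0 = different/forged.
best_acc, best_t = sweep_thresholds([0.2, 0.9, 0.3, 1.4], [1, 0, 1, 0],
                                    margin=1.0, step=0.25)
```

Here the sweep settles on the threshold 0.5, which separates the toy matching distances (0.2, 0.3) from the non-matching ones (0.9, 1.4).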
The invention also provides electronic signature authentication equipment for improving the identification accuracy of the electronic signature.
The second technical scheme of the invention is as follows:
an electronic signature authentication device for improving the identification accuracy of electronic signatures comprises a processor and a memory, the memory storing instructions adapted to be loaded by the processor to perform the following steps: acquiring signature information through a terminal capable of capturing an electronic signature, the signature information comprising a timestamp, coordinate information, and pressure information for each pixel point of the electronic signature track; preprocessing the signature information: creating a signature track layer and restoring the signature track on it according to the coordinate information of the track; creating a signature pressure layer, obtaining pressure values from the pressure information, and setting each pressure value on the pixel point corresponding to the track's coordinate information on the signature pressure layer; merging the signature track layer and the signature pressure layer to generate a feature map; inputting the feature map into a trained convolutional neural network, which compares it with the signature template map of the signature and outputs an authentication result.
Preferably, restoring the signature track on the signature track layer according to the coordinate information of the track specifically comprises: first, presetting the gray value of the signature track as A and the gray value of the non-track region as B; then, according to the coordinate information of the track, setting the gray value of each pixel point corresponding to the coordinate information to A and the gray value of every pixel point in the non-track region to B on the signature track layer. Obtaining pressure values from the pressure information and setting them on the pixel points corresponding to the track's coordinate information on the signature pressure layer specifically comprises: scaling each pressure value proportionally so that it falls within the gray-value range; then, according to the coordinate information of the track, setting the gray value of each corresponding pixel point on the signature pressure layer to the scaled pressure value, and setting the gray value of every pixel point in the non-track region of the signature pressure layer to B.
Preferably, merging the signature track layer and the signature pressure layer to generate a feature map specifically comprises: according to the coordinate information of the track, taking the gray values of each corresponding pixel point on the signature track layer and the signature pressure layer as the two attribute values of that pixel point on the feature map, while the attribute values of every pixel point in the non-track region are (B, B).
Preferably, the training process of the convolutional neural network is as follows:
step 1, creating a signature information data set, traversing each signature information, and executing the preprocessing step;
step 2, randomly selecting the feature maps corresponding to two pieces of signature information from the signature information data set to form a data pair, then labeling the data pair, the label indicating whether both pieces of signature information in the pair are correct signatures, and finally dividing the signature information data set into a training set, a verification set, and a test set;
step 3, building a convolutional neural network model that takes each data pair as input and outputs a feature vector for each piece of signature information; the convolutional neural network model uses a contrastive loss as its loss function:

L = (1/(2N)) · Σᵢ [ yᵢ·dᵢ² + (1 − yᵢ)·max(margin − dᵢ, 0)² ]

wherein y represents the label, margin is a boundary parameter among the hyper-parameters, d represents the feature distance, N represents the total amount of training data, and L represents the loss value;
step 4, setting the hyper-parameters of the convolutional neural network model, the hyper-parameters comprising the value set of margin and the initial learning rate, where the value set of margin comprises a plurality of margin values;
step 5, configuring the data in the training set according to the hyper-parameters to perform model training, selecting a margin value from the value set of margin for calculating the contrastive loss function;
step 6, testing the identification accuracy of the convolutional neural network by using the verification set;
step 7, testing the identification accuracy of the convolutional neural network by using the test set;
step 8, judging whether the value set of margin has been fully traversed: if so, the identification accuracy corresponding to each margin value has been obtained, and step 9 is executed; if not, reselecting a margin value from the value set and returning to step 5 to train again;
step 9, judging whether the precision of the margin values in the current value set of margin meets the expected requirement: if so, determining the highest identification accuracy among the obtained identification accuracies, recording the optimal comparison threshold corresponding to the margin value with the highest identification accuracy, and finishing training; if not, updating the value set of margin around the margin value with the highest identification accuracy, narrowing the value range of the updated set while keeping that margin value inside it, and then returning to step 5 to restart training.
Preferably, in step 6 and step 7, testing the identification accuracy of the convolutional neural network comprises: first, traversing the values between 0 and the margin value with a preset step length to obtain a group of comparison thresholds {T1, T2, T3, …, Tm}; second, calculating the feature vector of each piece of signature information in every data pair through the convolutional neural network model and computing the Euclidean distance between the two feature vectors of each pair; then comparing that Euclidean distance with a comparison threshold Ti: if the distance is less than Ti, the two samples are judged to be the same, and if it is greater than Ti, they are judged to be different; next, comparing each judgment result with the labeling result, marking it correct if they are consistent and incorrect otherwise, and calculating the identification accuracy for the threshold Ti; finally, repeating this procedure for T1 through Tm to obtain m identification accuracies, recording the best of them as the identification accuracy of the current margin value, and taking the comparison threshold that produced it as the optimal comparison threshold for the current margin value.
The invention has the following beneficial effects:
1. The electronic signature authentication method and device extract both the signature handwriting features and the handwriting pressure features captured during signing, so that not only stroke-similarity information but also the pressure values corresponding to the strokes are considered. This markedly improves recognition of attacks in which the forged handwriting closely imitates the original, and thereby improves the final identification accuracy.
2. The method and device use a convolutional neural network to extract features. Compared with traditional feature engineering, the convolutional neural network removes the need for a tedious manual feature-design process and generalizes better, so the final identification accuracy is higher.
3. A contrastive loss is used as the loss function: the contrastive loss calculation makes the distance between similar samples smaller and the distance between different samples larger, and a hyper-parameter margin added to the loss function further improves the training effect of the convolutional neural network model.
Drawings
FIG. 1 is an authentication flow diagram of the present invention;
FIG. 2 is a flow chart of convolutional neural network training in accordance with the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Example one
Referring to fig. 1, an electronic signature authentication method for improving the identification accuracy of an electronic signature includes the following steps. Signature information is collected through a terminal capable of capturing an electronic signature; the signature information comprises a timestamp, coordinate information, and pressure information for each pixel point of the electronic signature track. Typically, the signature information is saved as an XML file during collection. The signature information is then preprocessed: the coordinate information and pressure information are first parsed from the file; a signature track layer is created, and the signature track is restored on it according to the coordinate information of the track; a signature pressure layer is created, pressure values are obtained from the pressure information, and each pressure value is set on the pixel point corresponding to the track's coordinate information on the signature pressure layer; the signature track layer and the signature pressure layer are merged to generate a feature map; finally, the feature map is input into a trained convolutional neural network, which compares it with the signature template map of the signature and outputs an authentication result.
Preferably, restoring the signature track on the signature track layer according to the coordinate information of the track proceeds as follows. First, the gray value of the signature track is preset as A and the gray value of the non-track region as B; for example, setting A = 255 and B = 0 increases the contrast between the signature track and the background, making the track easier to identify. Then, according to the coordinate information of the track, the gray value of each pixel point corresponding to the coordinate information is set to A and the gray value of every pixel point in the non-track region is set to B on the signature track layer. Obtaining the pressure values and setting them on the signature pressure layer proceeds as follows. Because the range of the collected pressure values is generally large, for example 0 to 1023, each pressure value must be scaled proportionally so that it falls within the gray-value range, i.e., between 0 and 255 inclusive. Then, according to the coordinate information of the track, the gray value of each corresponding pixel point on the signature pressure layer is set to the scaled pressure value, and the gray value of every pixel point in the non-track region of the signature pressure layer is set to B.
Merging the signature track layer and the signature pressure layer to generate a feature map proceeds as follows: according to the coordinate information of the track, the gray values of each corresponding pixel point on the signature track layer and the signature pressure layer are taken as the two attribute values of that pixel point on the feature map. That is, the attribute values of pixel points on the signature track are (A, pressure value), and the attribute values of every pixel point in the non-track region are (B, B).
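As a concrete illustration of the two-layer merge described above, the following Python sketch builds the (track, pressure) attribute pairs on a small canvas. The gray values A = 255 and B = 0 and the 0–1023 pressure range are taken from the examples in this description; the canvas size, the function name, and the variable names are illustrative assumptions:

```python
A, B = 255, 0      # gray value of the signature track (A) and of the background (B)
P_MAX = 1023       # raw pressure range 0..1023, as in the example above

def build_feature_map(points, size=96):
    """points: (x, y, pressure) samples along the signature track.
    Returns a size x size grid whose cells are (track_gray, pressure_gray) pairs:
    (A, scaled pressure) on the track and (B, B) everywhere else."""
    fmap = [[(B, B) for _ in range(size)] for _ in range(size)]
    for x, y, p in points:
        p_gray = round(p * 255 / P_MAX)   # scale pressure into the gray-value range
        fmap[y][x] = (A, p_gray)
    return fmap

fmap = build_feature_map([(10, 20, 1023), (11, 20, 512)])
```

The two-channel grid produced here is exactly the (track layer, pressure layer) pairing the text describes, ready to be fed to a convolutional network as a two-channel image.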
Referring to fig. 2, the training process of the convolutional neural network is as follows:
Step 1: create a signature information data set containing a plurality of signature information samples for training, traverse each sample, and execute the preprocessing step to obtain the feature map corresponding to each sample. Then, to improve the diversity of the training data and help the model converge better, the training data is augmented; augmentation includes image rotation and mirroring.
Step 2: randomly select the feature maps corresponding to two pieces of signature information from the signature information data set to form a data pair, then label the data pair, the label indicating whether both pieces of signature information in the pair are correct signatures. For example, suppose signer A's genuine name is "Zhang San" and signer B's is "Li Si": a label of 1 indicates that both signatures in the data pair are correct signatures, and 0 indicates that the pair contains an incorrect signature:

[data A; data B; labeling result]
["Zhang San" written by A; "Zhang San" written by A; 1]
["Zhang San" written by A; "Zhang Wang" written by A; 0]
["Zhang San" written by A; "Zhang San" written by B; 0]
["Zhang San" written by B; "Zhang Wang" written by B; 0]
Finally, the labeled signature information data set with its feature maps is divided into a training set, a verification set, and a test set according to a preset ratio: the training set trains the convolutional neural network model, the verification set verifies the identification accuracy of the model during training, and the test set tests the identification accuracy of the trained model. For example, the ratio of training set, verification set, and test set may be 9 : 0.5 : 0.5.
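A minimal Python sketch of the pair-labeling rule and the 9 : 0.5 : 0.5 split might look as follows; the name-ownership map and the function names are assumptions made for illustration:

```python
OWNER = {"Zhang San": "A", "Li Si": "B"}   # assumed genuine owner of each name

def label_pair(s1, s2):
    """Label 1 only when both samples are genuine signatures of the same name;
    any forged sample (writer is not the owner of the name) yields 0."""
    (w1, n1), (w2, n2) = s1, s2
    both_genuine = OWNER.get(n1) == w1 and OWNER.get(n2) == w2
    return 1 if both_genuine and n1 == n2 else 0

def split_dataset(pairs, ratios=(9, 0.5, 0.5)):
    """Split labeled pairs into training / verification / test sets by ratio."""
    total = sum(ratios)
    n_train = int(len(pairs) * ratios[0] / total)
    n_val = int(len(pairs) * ratios[1] / total)
    return (pairs[:n_train],
            pairs[n_train:n_train + n_val],
            pairs[n_train + n_val:])
```

With 100 pairs and the default ratios, this yields 90 training, 5 verification, and 5 test pairs, matching the 9 : 0.5 : 0.5 example in the text.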
Step 3: build a convolutional neural network model using a contrastive loss as the loss function:

L = (1/(2N)) · Σᵢ [ yᵢ·dᵢ² + (1 − yᵢ)·max(margin − dᵢ, 0)² ]

where y represents the label, margin is a boundary parameter among the hyper-parameters, d represents the feature distance, N represents the total amount of training data, and L represents the loss value. During training of the convolutional neural network model, the loss value L calculated by the contrastive loss function is back-propagated to update the weight parameters in the convolutional layers and the dense layer. When the feature distance d of a non-matching pair is larger than margin, the loss value of that data pair is 0, indicating that the current model can already identify the pair correctly. By setting the hyper-parameter margin, therefore, pairs the model can already identify contribute no loss and have no influence on the weight updates, which improves the accuracy of the loss value L over the training samples and thus the training effect of the convolutional neural network model.
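The contrastive loss can be written directly in Python; this is a plain sketch of the standard formula, not the patent's own code:

```python
def contrastive_loss(labels, distances, margin):
    """L = (1/2N) * sum( y*d^2 + (1-y)*max(margin - d, 0)^2 ).
    labels: 1 for matching pairs, 0 for non-matching pairs;
    distances: the corresponding feature distances d."""
    n = len(labels)
    total = 0.0
    for y, d in zip(labels, distances):
        total += y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2
    return total / (2 * n)
```

Note that a non-matching pair whose distance already exceeds margin contributes zero loss, which is exactly the property the text attributes to the margin hyper-parameter.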
The convolutional neural network model comprises an input layer, a hidden layer and an output layer, and the convolutional neural network model has the specific structure that:
(INPUT1,INPUT2)->CONV1->MAXPOOL->CONV2->MAXPOOL->CONV3->MAXPOOL->CONV4->FLATTEN->DENSE->L2_DIST->OUTPUT
an example application of the model is as follows:
(INPUT1, INPUT2) is INPUT layer data, that is, INPUT1 is a feature map of INPUT "zhangsan", INPUT2 is a feature map of INPUT "listeta", and the image size of the feature map is 96 × 2;
CONV1 is the first convolutional layer, kernel size 10 × 10 with 64 filters, stride 1;
MAXPOOL is a max-pooling layer with stride 2;
CONV2 is the second convolutional layer, kernel size 7 × 7 with 128 filters, stride 1;
CONV3 is the third convolutional layer, kernel size 4 × 4 with 128 filters, stride 1;
CONV4 is the fourth convolutional layer, kernel size 4 × 4 with 64 filters, stride 1;
FLATTEN is a flattening layer, used to flatten the high-dimensional feature map into a one-dimensional feature vector;
DENSE is a dense (fully connected) layer with output size 1 × 128. Its function is to reduce the dimensionality of the flatten layer's output: the flatten layer outputs a 1 × 1024 vector (1024 = 4 × 4 × 64), and compressing it to 1 × 128 in the dense layer reduces the computation of the Euclidean-distance layer that follows.
L2_DIST is a Euclidean-distance layer. During model training it computes the feature distance between the two signatures in a data pair; after training, it is applied to signature authentication, i.e. it computes the feature distance between the feature map generated from the collected signature information and the signature template of that signature.
OUTPUT is the output layer: the feature vector extracted by the convolutional neural network for each feature map.
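Assuming the layer sizes quoted above (a 4 × 4 × 64 map out of CONV4 and a 128-dimensional dense output), the FLATTEN → DENSE → L2_DIST tail of one branch can be sketched with NumPy; the random weights merely stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# CONV4 output per the text: a 4 x 4 spatial map with 64 channels.
conv4_out = rng.standard_normal((4, 4, 64))
flat = conv4_out.reshape(-1)   # FLATTEN -> 1 x 1024 (= 4 * 4 * 64)

# DENSE: compress 1024 -> 128 (weights here are random placeholders;
# the trained model learns them).
w = rng.standard_normal((1024, 128))
feat = flat @ w                # 1 x 128 feature vector for this branch

# L2_DIST: Euclidean distance to the other branch's feature vector.
feat_other = rng.standard_normal(128)
d = float(np.linalg.norm(feat - feat_other))

print(flat.shape, feat.shape)  # (1024,) (128,)
```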
Step 4, setting the hyper-parameters of the convolutional neural network model, which include: the value set of margin, the initial learning rate, the batch size of the training data and the number of training iterations. The value set of margin contains several margin values within a preset range. The initial learning rate is set at the start of training; during training, a dynamic-adjustment strategy varies the learning rate according to conditions such as the number of training steps, so only the initial learning rate needs to be chosen in advance. The learning rate controls the magnitude of weight-parameter adjustment of the convolutional neural network during training; that magnitude is determined jointly by the loss value, the gradient-descent algorithm and the learning rate. Once the gradient-descent algorithm is fixed, the loss value shrinks as training proceeds; with a fixed learning rate the weight parameters would oscillate around the optimum and fail to converge further, so the learning rate must be reduced over time.
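One possible dynamic learning-rate strategy consistent with the description above — the step-decay form, decay factor and schedule length are illustrative assumptions, not taken from the patent — is:

```python
def step_decay_lr(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Halve the learning rate every `epochs_per_drop` epochs so the
    weight updates shrink as the loss value shrinks, avoiding
    oscillation around the optimum."""
    return initial_lr * drop ** (epoch // epochs_per_drop)

print(step_decay_lr(0.01, 0))   # 0.01   (initial learning rate)
print(step_decay_lr(0.01, 25))  # 0.0025 (after two decay steps)
```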
Step 5, configuring data in the training set according to the hyper-parameters to perform model training, and selecting a margin value from a margin value set for calculating the contrast loss function; in the training process, the amplitude of the weight parameter adjustment of the convolutional neural network model is determined by a loss value L, a gradient descent algorithm and a learning rate;
Step 6, testing the recognition accuracy of the convolutional neural network using the verification set; the test proceeds as follows. First, traverse the values between 0 and the current margin value with a preset step to obtain a group of candidate comparison thresholds {T1, T2, T3, ..., Tm}, where m is a natural number; for example, when the margin value is 2, traversing [0, 2] with a step of 0.1 gives the candidate thresholds {0.1, 0.2, 0.3, ..., 1.7, 1.8, 1.9, 2.0}. Second, compute the feature vector of each signature in each data pair of the input data through the convolutional neural network model, then compute the Euclidean distance between the two feature vectors in each pair and compare that distance with a candidate threshold Ti: if the distance is smaller than Ti, the pair is judged to be the same sample; if it is larger, the pair is judged to be different samples. Each judgment is then compared with the labeling result: if they are consistent the comparison is marked correct, otherwise incorrect, which yields the recognition accuracy for threshold Ti. Repeating this for T1 through Tm gives m recognition accuracies; the best of these is recorded as the recognition accuracy of the current margin value, and the threshold that achieved it is recorded as the optimal comparison threshold for the current margin value;
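The threshold sweep of step 6 can be sketched as follows (function and variable names are illustrative, and the toy distances are invented purely for the example):

```python
import numpy as np

def best_threshold(distances, labels, margin, step=0.1):
    """Sweep candidate thresholds T1..Tm in (0, margin] and return the
    threshold achieving the best pair-classification accuracy.

    A pair is judged "same sample" when its Euclidean feature distance
    is below the threshold; the judgment is compared with the 0/1 label.
    """
    distances = np.asarray(distances, dtype=float)
    labels = np.asarray(labels)
    candidates = np.arange(step, margin + step / 2, step)
    best_t, best_acc = None, -1.0
    for t in candidates:
        pred = (distances < t).astype(int)   # 1 = judged same sample
        acc = float((pred == labels).mean())
        if acc > best_acc:
            best_t, best_acc = float(t), acc
    return best_t, best_acc

# genuine pairs cluster near 0, forged pairs near the margin
t, acc = best_threshold([0.2, 0.3, 1.6, 1.9], [1, 1, 0, 0], margin=2.0)
print(acc)  # 1.0 — some mid-range threshold separates all four pairs
```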
Step 7, testing the recognition accuracy of the trained convolutional neural network with the test-set data by executing the same accuracy-test procedure; this determines the variation range of the network's recognition accuracy and verifies its generalization ability.
Step 8, judging whether the value set of margin is traversed or not, if so, obtaining the identification accuracy corresponding to each margin value, executing step 9, if not, reselecting the margin value from the value set, and returning to the step 5 to start training;
step 9, judging whether the value precision of the margin value in the current value set of margin meets the expected requirement or not:
If yes, determine the highest recognition accuracy among the accuracies obtained, record the optimal comparison threshold corresponding to the margin value that achieved it, and finish training. "Value precision" means the difference between two adjacent margin values in the value set is less than or equal to an expected precision. For example, if the expected precision is 1, the value set is [4, 5, 6, 7, 8] and margin = 5 gives the highest recognition accuracy, the set meets the expected precision and training can stop; if the expected precision is 2, the value set is [0, 2, 4, 6, 8] and margin = 2 gives the highest recognition accuracy, the set likewise meets the requirement and training can stop. When the trained convolutional neural network is applied to field authentication, it computes the feature distance between the collected signature information and the signature template and compares that distance with the optimal comparison threshold determined in this step, thereby authenticating whether the collected signature is genuine;
If not, update the value set of margin around the margin value corresponding to the highest recognition accuracy: narrow the value range of the updated set while keeping that margin value inside it, then return to step 5 and restart training. In the early stage of margin adjustment the value range should be wide, with sparse candidate points and few iterations; once the best value within the current set is found, the range is narrowed and the density of candidate points and the number of iterations are increased. For example, [1, 10, 100, 1000] is used for the first round of training, and accuracy is found to be highest at margin = 10; the range is refined to [1, 5, 10, 20, 40, 60, 80, 100], where accuracy is highest at margin = 5; the range is refined again to [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], where accuracy is highest at margin = 2. The final margin value is therefore 2.
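The coarse-to-fine margin search of step 9 can be sketched as below; the toy accuracy curve is invented purely to drive the example (it peaks at margin = 2, as in the walk-through above), and the fixed refinement lists mirror the text's example rounds:

```python
def coarse_to_fine_search(evaluate, initial_candidates, rounds):
    """Coarse-to-fine search over margin values: start with a wide,
    sparse candidate set, then densify around the best value found in
    each round. `evaluate(m)` returns the recognition accuracy for a
    given margin value (supplied by the caller)."""
    candidates = list(initial_candidates)
    for refine in rounds:
        best = max(candidates, key=evaluate)
        candidates = refine(best)          # denser set around `best`
    return max(candidates, key=evaluate)

# toy accuracy curve peaking at margin = 2 (illustrative only)
acc = lambda m: 1.0 / (1.0 + abs(m - 2))
best = coarse_to_fine_search(
    acc,
    [1, 10, 100, 1000],
    rounds=[
        lambda b: [1, 5, 10, 20, 40, 60, 80, 100],
        lambda b: list(range(1, 11)),
    ],
)
print(best)  # 2
```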
In this embodiment, the contrastive loss function used by the convolutional neural network pulls similar samples closer together and pushes different samples further apart, and the hyper-parameter margin added to the loss function means that, during training, a data pair whose feature distance d already exceeds margin has loss L = 0 and no longer influences the update of the model's weight parameters, improving the training effect of the convolutional neural network model. At the same time, the features extracted by the convolutional neural network include not only the signature-handwriting features but also the pen-pressure features recorded during signing, which markedly improves recognition of forgery attacks that closely copy the handwriting and greatly raises the final recognition accuracy.
Example two
Referring to fig. 1, an electronic signature authentication device for improving the recognition accuracy of an electronic signature comprises a processor and a memory, the memory storing instructions adapted to be loaded by the processor to execute the following steps. Signature information is collected through a terminal capable of collecting electronic signatures; the signature information comprises the timestamp, coordinate information and pressure information of each pixel point of the electronic signature track. Typically, the signature information is saved as an xml file during collection. The signature information is then preprocessed: first, the coordinate information and pressure information are parsed from the file; next, a signature track layer is created and the signature track is restored on it according to the coordinate information of the track; a signature pressure layer is created, the pressure value is obtained from the pressure information and set on the pixel points corresponding to the track coordinates on the pressure layer; and the signature track layer and signature pressure layer are merged to generate a feature map. The feature map is input into the trained convolutional neural network, which compares it with the signature template map of that signature and outputs the authentication result.
Preferably, restoring the signature track on the signature track layer according to the coordinate information of the signature track is specifically as follows: first, the gray value of the signature track is preset to A and the gray value of the non-track region to B; for example, setting A = 255 and B = 0 maximizes the contrast between the signature track and the background, making the track easier to recognize. Then, according to the coordinate information of the signature track, the gray value of each pixel corresponding to the coordinate information is set to A on the signature track layer, and the gray value of every pixel in the non-track region is set to B. Obtaining the pressure value from the pressure information and setting it on the pixels corresponding to the track coordinates on the signature pressure layer is specifically as follows: because the range of raw pressure values is generally large (for example, 0 to 1023), each pressure value must be rescaled proportionally so that it falls within the gray-value range, i.e. greater than or equal to 0 and less than or equal to 255. Then, according to the coordinate information of the signature track, the gray value of each corresponding pixel on the signature pressure layer is set to the rescaled pressure value, and the gray value of every non-track pixel on the pressure layer is set to B.
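The proportional rescaling of pressure values into the gray range can be sketched as follows (the 0–1023 input range is the text's example; the rounding choice is an assumption):

```python
def pressure_to_gray(p, p_max=1023, gray_max=255):
    """Proportionally rescale a raw pressure reading (0..p_max) into
    the gray-value range 0..gray_max."""
    if not 0 <= p <= p_max:
        raise ValueError("pressure out of range")
    return round(p * gray_max / p_max)

print(pressure_to_gray(0))     # 0
print(pressure_to_gray(1023))  # 255
print(pressure_to_gray(512))   # 128
```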
Merging the signature track layer and the signature pressure layer to generate the feature map is specifically as follows: according to the coordinate information of the signature track, the gray values of each pixel on the track layer and the pressure layer are taken as the two attribute values of the corresponding pixel on the feature map; that is, the attribute values of a pixel on the signature track are (A, pressure value), and the attribute values of every pixel in the non-track region are (B, B).
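A minimal sketch of merging the two layers into a two-channel feature map (the 96 × 96 size and the (x, y) coordinate convention are assumptions for illustration):

```python
import numpy as np

def build_feature_map(size, track_coords, pressures, a=255, b=0):
    """Merge a track layer and a pressure layer into one H x W x 2 map.

    Channel 0 holds the binarized track (gray A on the track, B
    elsewhere); channel 1 holds the rescaled pressure value on track
    pixels and B elsewhere, so each track pixel carries (A, pressure)
    and each background pixel carries (B, B).
    """
    fmap = np.full((size, size, 2), b, dtype=np.uint8)
    for (x, y), p in zip(track_coords, pressures):
        fmap[y, x, 0] = a   # signature-track layer
        fmap[y, x, 1] = p   # signature-pressure layer
    return fmap

fm = build_feature_map(96, [(10, 20), (11, 20)], [128, 200])
print(fm.shape)             # (96, 96, 2)
print(fm[20, 10].tolist())  # [255, 128] -> (A, pressure value)
print(fm[0, 0].tolist())    # [0, 0]     -> (B, B)
```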
Referring to fig. 2, the training process of the convolutional neural network is as follows:
step 1, creating a signature information data set which comprises a plurality of signature information used for training, traversing each signature information, and executing the preprocessing step to obtain a feature map corresponding to each signature information; then, in order to improve the diversity of the training data, the model achieves a better convergence effect, and the training data is augmented. The augmentation mode comprises image angle rotation and mirror image processing.
Step 2, randomly selecting the feature maps corresponding to two pieces of signature information from the data set to form a data pair, then labeling the pair to indicate whether both signatures in the pair are correct (genuine) signatures. For example, with writer A being Zhang San and writer B being Li Si, the label 1 indicates that both signatures in the pair are correct, and 0 indicates that the pair contains an incorrect signature:
[ data A; data B; labeling result ]
[ "Zhang San" written by A; "Zhang San" written by A; 1 ]
[ "Zhang San" written by A; "Zhang Wang" written by A; 0 ]
[ "Zhang San" written by A; "Zhang San" written by B; 0 ]
[ "Zhang San" written by B; "Zhang Wang" written by B; 0 ]
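The labeling rule behind the table can be sketched as follows; modeling a signature as a (name, is_genuine) tuple — genuine meaning it was written by the name's true owner — is an illustrative assumption:

```python
def label_pair(s1, s2):
    """Return 1 only when both signatures in the pair are correct,
    i.e. each is a genuine signature of the same name; otherwise 0.
    A signature is modeled as (name, is_genuine)."""
    return int(s1[1] and s2[1] and s1[0] == s2[0])

a_genuine = ("Zhang San", True)    # A signing his own name
a_forged  = ("Zhang Wang", False)  # A writing another person's name
b_copied  = ("Zhang San", False)   # B copying A's signature

print(label_pair(a_genuine, a_genuine))  # 1
print(label_pair(a_genuine, a_forged))   # 0
print(label_pair(a_genuine, b_copied))   # 0
```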
The subsequent training steps — dividing the labeled data set into training, verification and test sets, building the convolutional neural network model with the contrastive loss function, setting the hyper-parameters, and optimizing the margin value and comparison threshold — are identical to steps 2 through 9 described above for the method embodiment.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.
Claims (10)
1. An electronic signature authentication method for improving the identification accuracy of an electronic signature, characterized in that the method comprises the following steps:
acquiring signature information through a terminal capable of acquiring an electronic signature, wherein the signature information comprises a timestamp, coordinate information and pressure information of each pixel point of an electronic signature track;
preprocessing the signature information: a signature track layer is newly built, and a signature track is restored on the signature track layer according to the coordinate information of the signature track; newly building a signature pressure layer, acquiring a pressure value from the pressure information, and setting the pressure value on a pixel point corresponding to the coordinate information of the signature track on the signature pressure layer; combining the signature track layer and the signature pressure layer to generate a feature map;
inputting the characteristic graph into a trained convolutional neural network, comparing the characteristic graph with the signature template graph of the signature through the convolutional neural network, and outputting an authentication result.
2. The electronic signature authentication method for improving the recognition accuracy of the electronic signature according to claim 1, wherein:
the restoring the signature track on the signature track layer according to the coordinate information of the signature track specifically comprises the following steps: firstly, presetting a gray value of a signature track as A and a gray value of a non-signature track region as B, then setting a gray value of a pixel point corresponding to coordinate information as A and setting a gray value of each pixel point of the non-signature track region as B on a signature track layer according to the coordinate information of the signature track;
the method comprises the steps of obtaining a pressure value from pressure information, and setting the pressure value on a pixel point corresponding to coordinate information of a signature track on a signature pressure layer, wherein the pressure value is specifically as follows: and carrying out equal-scale scaling on the information of each pressure value, calculating the pressure value, wherein the size of the pressure value is within a gray value range, then setting the gray value of a pixel point corresponding to the coordinate information of the signature track on the signature pressure layer according to the coordinate information of the signature track, setting the gray value as a pressure value, and setting the gray value of each pixel point in a non-signature track area on the signature pressure layer as B.
3. The electronic signature authentication method for improving the recognition accuracy of the electronic signature according to claim 2, wherein: the merging of the signature track layer and the signature pressure layer generates a feature map, which specifically comprises the following steps: and according to the coordinate information of the signature track, taking the gray values of all pixel points corresponding to the coordinate information on the signature track layer and the signature pressure layer as two attribute values of the pixel points corresponding to the coordinate information on the characteristic diagram, wherein the attribute values of all the pixel points in the non-signature track area are (B, B).
4. The electronic signature authentication method for improving the recognition accuracy of the electronic signature according to claim 1, wherein: the training process of the convolutional neural network is as follows:
step 1, creating a signature information data set, traversing each signature information, and executing the preprocessing step;
step 2, randomly selecting feature graphs corresponding to two signature information from the signature information data set to form a data pair, then marking the data pair, distinguishing whether the two signature information in the data pair are both correct signatures through marking, and finally, dividing the signature information data set into a training set, a verification set and a test set;
step 3, building a convolutional neural network model whose input is each group of data pairs and whose output is the feature vector corresponding to each piece of signature information; the convolutional neural network model uses a contrastive loss as its loss function:

L = (1/(2N)) · Σ [ y·d² + (1 − y)·max(margin − d, 0)² ]

wherein y represents the label, margin is a boundary parameter among the hyper-parameters, d represents the feature distance, N represents the total amount of training data, and L represents the loss value;
step 4, setting hyper-parameters of the convolutional neural network model, wherein the hyper-parameters comprise: value set and initial learning rate of margin; the value set of margin comprises a plurality of margin values;
step 5, configuring the data in the training set according to the hyper-parameters to perform model training, and selecting a margin value from the value set of margin for calculating the contrastive loss function;
step 6, testing the identification accuracy of the convolutional neural network by using the verification set;
step 7, testing the identification accuracy of the convolutional neural network by using the test set;
step 8, judging whether the value set of margin has been fully traversed: if so, the recognition accuracy corresponding to each margin value has been obtained, and step 9 is executed; if not, reselecting a margin value from the value set and returning to step 5 to resume training;
step 9, judging whether the precision of the margin values in the current value set of margin meets the expected requirement: if so, determining the highest recognition accuracy among the obtained recognition accuracies, recording the optimal comparison threshold corresponding to the margin value that achieved the highest recognition accuracy, and finishing the training; if not, updating the value set of margin according to the margin value corresponding to the highest recognition accuracy, such that the updated value set has a narrower value range while still containing that margin value, and then returning to step 5 to restart the training.
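The contrastive loss defined in step 3 above is, in its standard formulation, straightforward to compute over a batch of pre-computed feature distances; a minimal sketch (function and argument names are assumptions, with y = 1 marking a matching pair):

```python
def contrastive_loss(distances, labels, margin):
    """L = (1/2N) * sum_i [ y_i * d_i^2 + (1 - y_i) * max(margin - d_i, 0)^2 ]
    where d_i is the feature distance of pair i and y_i its label."""
    n = len(distances)
    total = sum(y * d ** 2 + (1 - y) * max(margin - d, 0.0) ** 2
                for d, y in zip(distances, labels))
    return total / (2 * n)
```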
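Steps 4 through 9 describe a coarse-to-fine search over the margin hyper-parameter. A simplified sketch, assuming a hypothetical `evaluate(margin)` callback that stands in for steps 5–7 (training with the given margin and returning the recognition accuracy and optimal comparison threshold); the refinement rule and names are illustrative:

```python
def search_margin(margins, evaluate, precision_goal, shrink=0.5):
    """Coarse-to-fine search over the margin value set (steps 4-9).
    `evaluate(margin)` returns (accuracy, best_threshold) for one margin."""
    while True:
        # Steps 5-8: train/evaluate with every margin in the current set.
        results = {m: evaluate(m) for m in margins}
        best = max(results, key=lambda m: results[m][0])
        step = margins[1] - margins[0] if len(margins) > 1 else 0
        if step <= precision_goal:  # step 9: value precision is sufficient
            accuracy, threshold = results[best]
            return best, threshold, accuracy
        # Otherwise narrow the value set around the best margin and refine.
        new_step = step * shrink
        margins = [best - new_step, best, best + new_step]
```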
5. The electronic signature authentication method for improving the recognition accuracy of the electronic signature according to claim 4, wherein: in step 6 and step 7, testing the recognition accuracy of the convolutional neural network comprises the following steps: firstly, traversing the values between 0 and the current margin value with a preset step length to obtain a group of comparison thresholds {T1, T2, T3, …, Tm}; secondly, for each group of data pairs, calculating the feature vector of each piece of signature information through the convolutional neural network model and calculating the Euclidean distance between the two feature vectors in the data pair; then, comparing the Euclidean distance with a comparison threshold Ti: if the Euclidean distance is less than Ti, judging the two samples to be the same, and if the Euclidean distance is greater than Ti, judging the two samples to be different; comparing each judgment result with the labeling result, marking the judgment result as correct if it is consistent with the labeling result and as incorrect if it is not, and calculating the recognition accuracy corresponding to the comparison threshold Ti; finally, traversing T1 to Tm in this manner to obtain m recognition accuracies, recording the best of them as the recognition accuracy of the current margin value, and taking the comparison threshold corresponding to the best recognition accuracy as the optimal comparison threshold corresponding to the current margin value.
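The threshold sweep of claim 5 can be sketched as follows, assuming pre-computed Euclidean distances and pair labels (1 = same signer); the function name and step size are illustrative assumptions:

```python
def best_threshold(distances, labels, margin, step=0.01):
    """Sweep comparison thresholds from 0 to margin: for each threshold T,
    a pair with distance < T is judged 'same'; each judgment is compared
    with its label, and the T with the highest recognition accuracy wins."""
    n = len(distances)
    best_t, best_acc = 0.0, -1.0
    t = step  # start one step above 0, end at the margin value
    while t <= margin + 1e-12:
        correct = sum(1 for d, y in zip(distances, labels)
                      if (d < t) == (y == 1))
        acc = correct / n
        if acc > best_acc:
            best_t, best_acc = t, acc
        t += step
    return best_t, best_acc
```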
6. An electronic signature authentication device for improving the identification accuracy of an electronic signature is characterized in that: comprising a processor and a memory, said memory storing instructions adapted to be loaded by the processor and to perform the steps of:
acquiring signature information through a terminal capable of acquiring an electronic signature, wherein the signature information comprises a timestamp, coordinate information and pressure information of each pixel point of an electronic signature track;
preprocessing the signature information: a signature track layer is newly built, and a signature track is restored on the signature track layer according to the coordinate information of the signature track; newly building a signature pressure layer, acquiring a pressure value from the pressure information, and setting the pressure value on a pixel point corresponding to the coordinate information of the signature track on the signature pressure layer; combining the signature track layer and the signature pressure layer to generate a feature map;
inputting the feature map into a trained convolutional neural network, comparing, through the convolutional neural network, the feature map with the signature template map of the signature, and outputting an authentication result.
7. An electronic signature authentication apparatus for improving the recognition accuracy of an electronic signature according to claim 6, wherein: the restoring the signature track on the signature track layer according to the coordinate information of the signature track specifically comprises the following steps: firstly, presetting a gray value of a signature track as A and a gray value of a non-signature track region as B, then setting a gray value of a pixel point corresponding to coordinate information as A and setting a gray value of each pixel point of the non-signature track region as B on a signature track layer according to the coordinate information of the signature track;
the obtaining of a pressure value from the pressure information and the setting of the pressure value on the pixel points corresponding to the coordinate information of the signature track on the signature pressure layer specifically comprise: scaling each pressure value proportionally so that the scaled pressure values fall within the gray-value range; then, according to the coordinate information of the signature track, setting the gray value of each pixel point corresponding to the coordinate information on the signature pressure layer to the scaled pressure value, and setting the gray value of each pixel point in the non-signature-track area on the signature pressure layer to B.
8. An electronic signature authentication apparatus for improving the recognition accuracy of an electronic signature according to claim 7, wherein: the merging of the signature track layer and the signature pressure layer to generate a feature map specifically comprises: according to the coordinate information of the signature track, taking the gray values of each pixel point corresponding to the coordinate information on the signature track layer and on the signature pressure layer as the two attribute values of the corresponding pixel point on the feature map, wherein the attribute values of each pixel point in the non-signature-track area are (B, B).
9. An electronic signature authentication apparatus for improving the recognition accuracy of an electronic signature according to claim 6, wherein: the training process of the convolutional neural network is as follows:
step 1, creating a signature information data set, traversing each signature information, and executing the preprocessing step;
step 2, randomly selecting the feature maps corresponding to two pieces of signature information from the signature information data set to form a data pair, then labeling the data pair, the label indicating whether both pieces of signature information in the data pair are genuine signatures, and finally dividing the signature information data set into a training set, a verification set and a test set;
step 3, building a convolutional neural network model which takes each group of data pairs as input and outputs a feature vector corresponding to each piece of signature information; the convolutional neural network model uses a contrastive loss as its loss function, the contrastive loss function being: L = (1/2N) Σᵢ [ yᵢ·dᵢ² + (1 − yᵢ)·max(margin − dᵢ, 0)² ], wherein y represents a label, margin is a boundary parameter among the hyper-parameters, d represents a feature distance, N represents the total amount of training data, and L represents a loss value;
step 4, setting hyper-parameters of the convolutional neural network model, wherein the hyper-parameters comprise: value set and initial learning rate of margin; the value set of margin comprises a plurality of margin values;
step 5, configuring the data in the training set according to the hyper-parameters to perform model training, and selecting a margin value from the value set of margin for calculating the contrastive loss function;
step 6, testing the identification accuracy of the convolutional neural network by using the verification set;
step 7, testing the identification accuracy of the convolutional neural network by using the test set;
step 8, judging whether the value set of margin has been fully traversed: if so, the recognition accuracy corresponding to each margin value has been obtained, and step 9 is executed; if not, reselecting a margin value from the value set and returning to step 5 to resume training;
step 9, judging whether the precision of the margin values in the current value set of margin meets the expected requirement: if so, determining the highest recognition accuracy among the obtained recognition accuracies, recording the optimal comparison threshold corresponding to the margin value that achieved the highest recognition accuracy, and finishing the training; if not, updating the value set of margin according to the margin value corresponding to the highest recognition accuracy, such that the updated value set has a narrower value range while still containing that margin value, and then returning to step 5 to restart the training.
10. An electronic signature authentication apparatus for improving the recognition accuracy of an electronic signature according to claim 9, wherein: in step 6 and step 7, testing the recognition accuracy of the convolutional neural network comprises the following steps: firstly, traversing the values between 0 and the current margin value with a preset step length to obtain a group of comparison thresholds {T1, T2, T3, …, Tm}; secondly, for each group of data pairs, calculating the feature vector of each piece of signature information through the convolutional neural network model and calculating the Euclidean distance between the two feature vectors in the data pair; then, comparing the Euclidean distance with a comparison threshold Ti: if the Euclidean distance is less than Ti, judging the two samples to be the same, and if the Euclidean distance is greater than Ti, judging the two samples to be different; comparing each judgment result with the labeling result, marking the judgment result as correct if it is consistent with the labeling result and as incorrect if it is not, and calculating the recognition accuracy corresponding to the comparison threshold Ti; finally, traversing T1 to Tm in this manner to obtain m recognition accuracies, recording the best of them as the recognition accuracy of the current margin value, and taking the comparison threshold corresponding to the best recognition accuracy as the optimal comparison threshold corresponding to the current margin value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110420441.7A CN113158887B (en) | 2021-04-19 | 2021-04-19 | Electronic signature authentication method and equipment for improving electronic signature recognition accuracy |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113158887A true CN113158887A (en) | 2021-07-23 |
CN113158887B CN113158887B (en) | 2024-08-16 |
Family
ID=76868736
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110420441.7A Active CN113158887B (en) | 2021-04-19 | 2021-04-19 | Electronic signature authentication method and equipment for improving electronic signature recognition accuracy |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113158887B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105718918A (en) * | 2016-01-28 | 2016-06-29 | 华南理工大学 | Video based authentication method for signature in air |
CN108154136A (en) * | 2018-01-15 | 2018-06-12 | 众安信息技术服务有限公司 | For identifying the method, apparatus of writing and computer-readable medium |
CN110399815A (en) * | 2019-07-12 | 2019-11-01 | 淮阴工学院 | A kind of CNN-SVM Handwritten Signature Recognition Method based on VGG16 |
CN111046774A (en) * | 2019-12-06 | 2020-04-21 | 国网湖北省电力有限公司电力科学研究院 | Chinese signature handwriting identification method based on convolutional neural network |
CN111178290A (en) * | 2019-12-31 | 2020-05-19 | 上海眼控科技股份有限公司 | Signature verification method and device |
WO2021027336A1 (en) * | 2019-08-14 | 2021-02-18 | 深圳壹账通智能科技有限公司 | Authentication method and apparatus based on seal and signature, and computer device |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115223178A (en) * | 2022-07-13 | 2022-10-21 | 厦门国际银行股份有限公司 | Signature authenticity verification method, system, terminal device and storage medium |
CN117235813A (en) * | 2023-11-16 | 2023-12-15 | 中国标准化研究院 | Electronic signature data quality detection method and system |
CN117235813B (en) * | 2023-11-16 | 2024-01-23 | 中国标准化研究院 | Electronic signature data quality detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN113158887B (en) | 2024-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108647583B (en) | Face recognition algorithm training method based on multi-target learning | |
Kashi et al. | A Hidden Markov Model approach to online handwritten signature verification | |
CN103605972B (en) | Non-restricted environment face verification method based on block depth neural network | |
CN108460649A (en) | A kind of image-recognizing method and device | |
CN102163281B (en) | Real-time human body detection method based on AdaBoost frame and colour of head | |
CN113158887B (en) | Electronic signature authentication method and equipment for improving electronic signature recognition accuracy | |
CN103927532B (en) | Person's handwriting method for registering based on stroke feature | |
CN105447441A (en) | Face authentication method and device | |
CN106980809B (en) | Human face characteristic point detection method based on ASM | |
CN101369309B (en) | Human ear image normalization method based on active apparent model and outer ear long axis | |
CN103714340B (en) | Self-adaptation feature extracting method based on image partitioning | |
CN109934114A (en) | A kind of finger vena template generation and more new algorithm and system | |
CN105654056A (en) | Human face identifying method and device | |
CN111275070A (en) | Signature verification method and device based on local feature matching | |
CN113361666B (en) | Handwritten character recognition method, system and medium | |
CN106919884A (en) | Human facial expression recognition method and device | |
CN114220178A (en) | Signature identification system and method based on channel attention mechanism | |
CN111626246A (en) | Face alignment method under mask shielding | |
CN104732247B (en) | A kind of human face characteristic positioning method | |
CN114022914B (en) | Palmprint recognition method based on fusion depth network | |
CN109886091A (en) | Three-dimensional face expression recognition methods based on Weight part curl mode | |
CN111310548B (en) | Method for identifying stroke types in online handwriting | |
CN110298159B (en) | Smart phone dynamic gesture identity authentication method | |
CN111062338A (en) | Certificate portrait consistency comparison method and system | |
US20220036129A1 (en) | Method, device, and computer program product for model updating |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |