CN114360071A - Method for realizing off-line handwritten signature verification based on artificial intelligence

Info

Publication number: CN114360071A
Application number: CN202210028558.5A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 王玉龙, 魏文虎, 王晶, 赵海秀, 徐童, 张乐剑
Assignee (original and current): Beijing University of Posts and Telecommunications
Priority date / filing date: 2022-01-11
Publication date: 2022-04-15

Abstract

The method for realizing off-line handwritten signature verification based on artificial intelligence addresses the problem of handwritten signature verification in an offline scene: it performs feature extraction and result classification with an SVM (support vector machine) and a twin (Siamese) neural network framework and, following the idea of an inverse verification network, performs pixel inversion on the input signature pictures to obtain several groups of data that are verified simultaneously. The method uses not only deep learning but also machine learning, giving it higher reliability and accuracy.

Description

Method for realizing off-line handwritten signature verification based on artificial intelligence
Technical Field
The invention relates to a method for realizing off-line handwritten signature verification based on artificial intelligence, and belongs to the field of information technology, in particular to the field of handwritten signature verification.
Background
With the development of network technology, people pay ever more attention to the security of their private information. In many fields of work, identity authentication is an indispensable step. Signature verification is one form of identity verification; it can often assist in confirming personal identity and has the advantages of being simple and quick. Compared with biometric features such as fingerprints, signatures are also easier to acquire.
Although many signature verification schemes exist today, many problems remain. For example, Chinese patent application No. CN201680028530.4 implements a signature authentication method, terminal, stylus and system. When a user signs on the terminal's touch panel with a stylus, the terminal detects first signature data, comprising the coordinates of the contact point between the stylus and the touch panel and the pressure the stylus exerts on the panel at that point; the terminal also receives second signature data sent by the stylus, comprising the stylus's motion-trajectory information and the pressure the user exerts on the stylus, both detected by the stylus while the user signs. The first and second signature data are matched against pre-stored signature template data to obtain a matching rate, and when the matching rate is not lower than a preset threshold the signature passes authentication. By extracting more signature feature data through the stylus, that scheme increases the difficulty of an attacker imitating the user's signature and improves the security of the authentication system. However, the patent has the following disadvantages: 1. it can only process online handwritten signatures, not offline ones; 2. it only collects pressure, coordinate and motion-trajectory information on the touch panel and does not extract features such as handwriting shape, so its accuracy is limited; 3. signatures cannot be collected in bulk first and processed later, so its application scenarios are not rich enough.
As another example, Chinese patent application No. CN202011436507.3 discloses a handwritten signature comparison method and system based on image recognition. The method runs on a comparison system that has a basic database, a graph database, a background management module and a display interface. First, the handwritten signature data of all customer managers and customers are collected, preprocessed and imported into the graph database through the background management system; electronic signatures are passed into the graph database directly. Customer signature features are then extracted with existing image recognition technology and compared with the managers' signature features in the database to obtain a similarity ranking; results exceeding a threshold trigger an early warning and are handled manually in the background. However, this patent has the following disadvantages: 1. its main aim is to find signatures that may belong to managers for further processing, so its application scenario is too narrow; 2. its precision is limited: it can only find a manager's signature within a certain error range and cannot attribute it to a specific person; 3. it cannot process incoming user signature data in real time.
Existing signature verification schemes therefore neither solve the problem of offline handwritten signature authentication well nor handle writer-independent tasks, so they are unsuited to verifying the authenticity of an arbitrary person's signature. How to realize offline handwritten signature authentication reliably and accurately has thus become an urgent technical problem in the field of handwritten signature verification.
Disclosure of Invention
In view of the above, the present invention aims to provide a method that realizes reliable and accurate off-line handwritten signature authentication.
To achieve the above object, the present invention provides a method for realizing off-line handwritten signature verification based on artificial intelligence, comprising the following operation steps:
(1) preprocessing the signature picture M1 to be verified according to a first feature extraction method to obtain the texture feature F1 of M1; preprocessing the real signature picture M2 according to the first feature extraction method to obtain the texture feature F2 of M2; the first feature extraction method performs the following operations on the signature picture in sequence: graying, background removal, histogram shift, size adjustment, interpolation and quantization, gray-level co-occurrence matrix calculation, and texture-feature calculation;
(2) inputting the texture features F1 and F2 into a first decider to obtain a first decision result D0; D0 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are; the first decider is an SVM classifier;
(3) preprocessing the signature picture M1 to be verified according to a second feature extraction method to obtain the depth feature P1 of M1; preprocessing IM1, the pixel-inverted picture of M1, according to the second feature extraction method to obtain its depth feature IP1;
preprocessing the real signature picture M2 according to the second feature extraction method to obtain the depth feature P2 of M2; preprocessing IM2, the pixel-inverted picture of M2, according to the second feature extraction method to obtain its depth feature IP2;
the second feature extraction method extracts the depth features of a picture through a backbone network and an attention network;
(4) combining the depth features P1 and P2 into a fusion feature FP1 and inputting FP1 into a second decider to obtain a second decision result D1;
combining the depth features P1 and IP2 into a fusion feature FP2 and inputting FP2 into a third decider to obtain a third decision result D2;
combining the depth features IP1 and P2 into a fusion feature FP3 and inputting FP3 into a fourth decider to obtain a fourth decision result D3;
combining the depth features IP1 and IP2 into a fusion feature FP4 and inputting FP4 into a fifth decider to obtain a fifth decision result D4;
each of the decision results D1 to D4 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are;
the second, third, fourth and fifth deciders are twin neural networks with identical structures and parameters;
(5) the final verification result D is calculated according to the following formula:
D=D0+D1+D2+D3+D4
if D ≥ 3, the signature is genuine; otherwise it is forged.
The specific content of the second feature extraction method comprises the following operation steps:
(31) firstly, adjusting the size of a signature picture to a set size;
(32) inputting the resized signature picture into a feature pyramid network and processing it through a series of 3 × 3 convolution layers, 3 × 3 pooling layers and activation layers to obtain three feature maps with sizes 110 × 77, 55 × 38 and 27 × 19, named fe1, fe2 and fe3 respectively;
(33) upsampling feature map fe3 once to resize it to 55 × 38 (named fe4); concatenating fe2 and fe4 to obtain cfe2; upsampling fe4 once to resize it to 110 × 77 (named fe5); concatenating fe1 and fe5 to obtain cfe1; and finally concatenating cfe1, cfe2 and fe3 to obtain the combined feature fe;
(34) contrast-enhancing the signature picture and then downsampling it to obtain the feature Rfe;
(35) inputting the obtained feature fe into the attention network as its Q feature, and the obtained feature Rfe as both its K feature and its V feature;
(36) the attention network outputs the depth features of the signature picture.
The twin neural network operates as follows: a 3 × 3 convolution layer, a 3 × 3 pooling layer and a ReLU activation function produce two 110 × 77 × 128 feature vectors; these are concatenated into a 110 × 77 × 256 feature vector; further convolution and pooling operations reduce it to a 110 × 77 × 32 feature; a Flatten layer flattens this into a one-dimensional feature vector of length 271040; and this vector is input into a 271040 × 1 fully connected layer to obtain the decision result.
The twin neural network is trained in advance by using a binary cross entropy loss function.
The histogram shift operation moves the histogram of the signature picture's pixel gray values toward zero while keeping the background white; specifically, the minimum gray value in the signature picture is subtracted from every pixel.
The invention has the following beneficial effects: for the problem of handwritten signature verification in an offline scene, the method performs feature extraction and result classification with frameworks such as an SVM (support vector machine) and a twin neural network, and, using the idea of an inverse verification network, performs pixel inversion on the input picture to obtain several groups of data that are verified simultaneously; because the invention combines a deep-learning method with a machine-learning method, its reliability and accuracy are higher.
Drawings
Fig. 1 is a flowchart of a method for implementing off-line handwritten signature verification based on artificial intelligence according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, the method for realizing off-line handwritten signature verification based on artificial intelligence proposed by the present invention comprises the following operation steps:
(1) preprocessing the signature picture M1 to be verified according to a first feature extraction method to obtain the texture feature F1 of M1; preprocessing the real signature picture M2 according to the first feature extraction method to obtain the texture feature F2 of M2; the first feature extraction method performs the following operations on the signature picture in sequence: graying, background removal, histogram shift, size adjustment, interpolation and quantization, gray-level co-occurrence matrix calculation, and texture-feature calculation;
In an embodiment, the signature picture is resized to a standard size of 220 × 155. The interpolation and quantization operation means that, after resizing, the signature picture is quantized with a set of preset gray values, i.e., every gray pixel value in the picture is converted to one of these specific values.
In an embodiment, gray values at fixed intervals, e.g., 0, 10, 20, …, 250, are specified, and the gray value of each pixel of the signature picture is converted to the nearest specified value; for example, 7.3 becomes 10 and 147 becomes 150.
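As an illustration, this quantization step reduces to a few lines of NumPy; the level set 0, 10, …, 250 and the function name are assumptions of this sketch, since the text only requires preset gray values at fixed intervals.

```python
import numpy as np

def quantize_gray(img: np.ndarray, step: int = 10) -> np.ndarray:
    """Snap every gray value to the nearest preset level (assumed 0, 10, ..., 250)."""
    levels = np.arange(0, 256, step)                      # preset gray values
    idx = np.abs(img[..., None] - levels).argmin(axis=-1) # index of nearest level
    return levels[idx].astype(np.uint8)

# quantize_gray(np.array([[7.3, 147.0]]))  ->  [[10, 150]]
```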
The gray-level co-occurrence matrix tabulates how often the various combinations of pixel gray values occur in an image. It analyzes two pixels at a time, called the reference pixel and the neighboring pixel, and records the occurrence of each pixel-pair combination in the matrix. The co-occurrence matrix is defined by the joint probability density of the pixels at two positions; it reflects not only the distribution of brightness but also the positional distribution of pixels with the same or similar brightness, and is thus a second-order statistic of the image's brightness variation. It is the basis for defining a set of texture features: the gray-level co-occurrence matrix of an image reflects comprehensive information about the image's gray levels with respect to direction, adjacent interval and variation amplitude, and is the basis for analyzing the image's local patterns and their arrangement rules.
The texture-feature calculation uses the gray-level co-occurrence matrix (GLCM) to compute detail texture features of the signature picture such as homogeneity, contrast and correlation, and then takes their average; the resulting average is the texture feature of the signature picture.
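A compact sketch of this texture computation, using scikit-image's GLCM helpers, follows; the one-pixel offset and the four directions are assumptions, as the patent does not fix them.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_feature(gray_img: np.ndarray) -> float:
    # gray_img: quantized picture as a uint8 array (see the quantization step above)
    glcm = graycomatrix(gray_img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # detail texture features named in the text: homogeneity, contrast, correlation
    details = [graycoprops(glcm, prop).mean()
               for prop in ("homogeneity", "contrast", "correlation")]
    return float(np.mean(details))  # their average is the texture feature
```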
(2) inputting the texture features F1 and F2 into the first decider to obtain the first decision result D0; D0 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are; the first decider is an SVM classifier;
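The patent does not state how F1 and F2 are combined before the SVM; the sketch below feeds their element-wise absolute difference to a scikit-learn SVC, which is one plausible reading rather than the patent's prescription.

```python
import numpy as np
from sklearn.svm import SVC

svm = SVC(kernel="rbf")  # the first decider

def decide_d0(f1: np.ndarray, f2: np.ndarray) -> int:
    x = np.abs(np.atleast_1d(f1) - np.atleast_1d(f2)).reshape(1, -1)
    return int(svm.predict(x)[0])  # 1: same person, 0: not the same person
```

The classifier must first be fitted on labeled pairs of texture features from genuine and forged signatures.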
(3) preprocessing the signature picture M1 to be verified according to the second feature extraction method to obtain the depth feature P1 of M1; preprocessing IM1, the pixel-inverted picture of M1, according to the second feature extraction method to obtain its depth feature IP1;
according to the idea of the inverse verification network, a color change of an image does not affect whose signature it is; therefore, pixel inversion is performed on each of the two input images M1 (the signature picture to be verified) and M2 (the real signature picture) to obtain the images IM1 (M1 after pixel inversion) and IM2 (M2 after pixel inversion); in principle, the comparison result between M1 and M2 should be consistent with the comparison results of the three pairs M1 and IM2, IM1 and M2, and IM1 and IM2.
Pixel inversion inverts the value of every pixel in the image. In the embodiment the pixel values range from 0 to 255.0; if a pixel has value a, pixel inversion converts a into 255.0 − a. Converting the values of all pixels of the whole image completes the pixel inversion.
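In code, pixel inversion is a single expression:

```python
import numpy as np

def pixel_invert(img: np.ndarray) -> np.ndarray:
    """Convert every pixel value a (in 0..255.0) to 255.0 - a."""
    return 255.0 - img
```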
Preprocessing the real signature picture M2 according to the second feature extraction method gives the depth feature P2 of M2; preprocessing IM2, the pixel-inverted picture of M2, according to the second feature extraction method gives its depth feature IP2.
The second feature extraction method extracts the depth features of the picture through a backbone network and an attention network; in an embodiment, a ReLU layer serves as the activation function after each convolution layer of the backbone, and a BatchNormalization layer follows each convolution layer for normalization.
The backbone adopts an FPN (feature pyramid network), a classical network that exploits multi-scale image features: feature maps of several sizes are obtained through successive downsampling operations, then adjusted to the same size by upsampling and fused by concatenation.
For details of the attention network, see: Vaswani A, Shazeer N, Parmar N, et al. Attention Is All You Need [J]. arXiv, 2017.
(4) combining the depth features P1 and P2 into a fusion feature FP1 and inputting FP1 into the second decider to obtain the second decision result D1;
combining the depth features P1 and IP2 into a fusion feature FP2 and inputting FP2 into the third decider to obtain the third decision result D2;
combining the depth features IP1 and P2 into a fusion feature FP3 and inputting FP3 into the fourth decider to obtain the fourth decision result D3;
combining the depth features IP1 and IP2 into a fusion feature FP4 and inputting FP4 into the fifth decider to obtain the fifth decision result D4;
each of the decision results D1 to D4 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are;
the second, third, fourth and fifth deciders are twin neural networks with identical structures and parameters;
(5) the final verification result D is calculated according to the following formula:
D=D0+D1+D2+D3+D4
if D ≥ 3, the signature is genuine; otherwise it is forged.
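The fusion rule is a plain majority vote over the five 0/1 decisions, for example:

```python
def final_verification(d0: int, d1: int, d2: int, d3: int, d4: int) -> bool:
    d = d0 + d1 + d2 + d3 + d4  # D = D0 + D1 + D2 + D3 + D4
    return d >= 3               # True: genuine, False: forged
```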
The specific content of the second feature extraction method comprises the following operation steps:
(31) first, resizing the signature picture to a set size; in the embodiment, to 220 × 155;
(32) inputting the resized signature picture into the feature pyramid network and processing it through a series of 3 × 3 convolution layers, 3 × 3 pooling layers and activation layers to obtain three feature maps with sizes 110 × 77, 55 × 38 and 27 × 19, named fe1, fe2 and fe3 respectively;
(33) upsampling feature map fe3 once to resize it to 55 × 38 (named fe4); concatenating fe2 and fe4 to obtain cfe2; upsampling fe4 once to resize it to 110 × 77 (named fe5); concatenating fe1 and fe5 to obtain cfe1; and finally concatenating cfe1, cfe2 and fe3 to obtain the combined feature fe, whose size is 110 × 77 × 128 (width × height × channels); a code sketch of this backbone follows the step list below;
(34) the most important part of a signature picture is the strokes, while the background is unimportant; the signature picture is therefore contrast-enhanced and then downsampled to obtain the feature Rfe; in an embodiment, the contrast-enhanced signature is downsampled to 110 × 77 × 3 and then passed through a convolution layer that adjusts the number of channels to 128, i.e., the feature Rfe has size 110 × 77 × 128 (width × height × channels);
(35) the attention network uses self-attention, which requires three groups of features Q, K and V; the obtained feature fe is input into the attention network as its Q feature, and the obtained feature Rfe as both its K feature and its V feature; the attention network therefore attends more to stroke information and ignores background information as far as possible (most of the background is white/black and occupies a large proportion of the picture), yielding more discriminative features; a sketch of this stage also follows the step list below;
(36) The attention network outputs the depth features of the signature picture.
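The following PyTorch sketch illustrates steps (32)-(33). The patent fixes the 3 × 3 convolutions and poolings, the three spatial sizes and the 128-channel combined feature; the channel width c, the ceil-mode pooling (chosen so that 220 × 155 maps exactly onto 110 × 77, 55 × 38 and 27 × 19), the upsampling of cfe2 and fe3 before the final concatenation, and the 1 × 1 projection to 128 channels are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignatureFPN(nn.Module):
    """Backbone sketch: three conv/BN/ReLU/pool stages plus FPN-style fusion."""

    def __init__(self, c: int = 64):
        super().__init__()

        def stage(cin: int, cout: int) -> nn.Sequential:
            # 3x3 convolution, BatchNorm, ReLU, then a stride-2 3x3 pooling
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(3, stride=2, ceil_mode=True),
            )

        self.s1, self.s2, self.s3 = stage(3, c), stage(c, c), stage(c, c)
        self.proj = nn.Conv2d(5 * c, 128, 1)  # cfe1 (2c) + cfe2 (2c) + fe3 (c) -> 128

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 3, 155, 220), i.e. a 220 x 155 signature picture
        fe1 = self.s1(x)    # (N, c, 77, 110)  ->  110 x 77
        fe2 = self.s2(fe1)  # (N, c, 38, 55)   ->  55 x 38
        fe3 = self.s3(fe2)  # (N, c, 19, 27)   ->  27 x 19
        fe4 = F.interpolate(fe3, size=fe2.shape[-2:])  # fe3 upsampled to 55 x 38
        cfe2 = torch.cat([fe2, fe4], dim=1)
        fe5 = F.interpolate(fe4, size=fe1.shape[-2:])  # fe4 upsampled to 110 x 77
        cfe1 = torch.cat([fe1, fe5], dim=1)
        # cfe2 and fe3 are brought up to 110 x 77 so all three maps concatenate
        cfe2_up = F.interpolate(cfe2, size=fe1.shape[-2:])
        fe3_up = F.interpolate(fe3, size=fe1.shape[-2:])
        fe = torch.cat([cfe1, cfe2_up, fe3_up], dim=1)
        return self.proj(fe)  # (N, 128, 77, 110): the combined feature fe
```

Steps (34)-(36) can be sketched in the same vein. The Q = fe, K = V = Rfe wiring is exactly as step (35) states; the min-max contrast stretch, the bilinear downsampling and the single-head scaled dot-product formulation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

to_128 = nn.Conv2d(3, 128, 1)  # adjusts the downsampled picture to 128 channels

def make_rfe(img: torch.Tensor) -> torch.Tensor:
    # img: (N, 3, 155, 220). Min-max contrast stretch (an assumption), then
    # downsampling to 110 x 77 x 3 and projection to 128 channels.
    x = (img - img.amin()) / (img.amax() - img.amin() + 1e-6)
    x = F.interpolate(x, size=(77, 110), mode="bilinear", align_corners=False)
    return to_128(x)  # (N, 128, 77, 110)

def attention_depth_feature(fe: torch.Tensor, rfe: torch.Tensor) -> torch.Tensor:
    # Q comes from fe, K and V from Rfe; the 77 x 110 positions act as tokens.
    n, c, h, w = fe.shape
    q = fe.flatten(2).transpose(1, 2)   # (N, h*w, 128)
    k = rfe.flatten(2).transpose(1, 2)  # (N, h*w, 128)
    v = k
    # Scaled dot-product attention; memory-hungry at full resolution,
    # shown here for clarity rather than efficiency.
    attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
    out = attn @ v  # attention-weighted stroke features, (N, h*w, 128)
    return out.transpose(1, 2).reshape(n, c, h, w)  # the depth feature
```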
The twin neural network operates as follows: a 3 × 3 convolution layer, a 3 × 3 pooling layer and a ReLU activation function produce two 110 × 77 × 128 feature vectors; these are concatenated into a 110 × 77 × 256 feature vector; further convolution and pooling operations reduce it to a 110 × 77 × 32 feature; a Flatten layer flattens this into a one-dimensional feature vector of length 271040; and this vector is input into a 271040 × 1 fully connected layer to obtain the decision result.
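A sketch of one decider head follows; only the stated tensor shapes (two 110 × 77 × 128 inputs → 110 × 77 × 256 → 110 × 77 × 32 → a length-271040 vector → 1) come from the text, while the single reduction convolution and the sigmoid output are assumptions.

```python
import torch
import torch.nn as nn

class TwinDecider(nn.Module):
    """One of the four identical deciders operating on a pair of depth features."""

    def __init__(self):
        super().__init__()
        self.reduce = nn.Sequential(           # the "convolution and pooling" stage
            nn.Conv2d(256, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.fc = nn.Linear(110 * 77 * 32, 1)  # the 271040 x 1 fully connected layer

    def forward(self, pa: torch.Tensor, pb: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pa, pb], dim=1)               # (N, 256, 77, 110)
        x = self.reduce(x)                           # (N, 32, 77, 110)
        x = x.flatten(1)                             # (N, 271040)
        return torch.sigmoid(self.fc(x)).squeeze(1)  # probability of "same person"
```

Since the text requires identical structures and parameters, a single instance can serve all four feature pairs.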
The twin neural network is trained in advance with a binary cross-entropy loss function. In an embodiment, the loss functions of the four twin neural networks are added together as the total loss function for network training.
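In code form, the embodiment's total loss is simply the sum of the four binary cross-entropy terms; the tensor shapes here are assumptions.

```python
import torch
import torch.nn.functional as F

def total_loss(preds: list[torch.Tensor], label: torch.Tensor) -> torch.Tensor:
    # preds: four (N,) probabilities from the four twin deciders;
    # label: (N,) tensor of 0/1 ground truth (same person or not)
    return sum(F.binary_cross_entropy(p, label.float()) for p in preds)
```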
The histogram shift operation moves the histogram of the signature picture's pixel gray values toward zero while keeping the background white (in the embodiment, the white value is 255); specifically, the minimum gray value in the signature picture is subtracted from every pixel.
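In code, the histogram shift is a one-line operation:

```python
import numpy as np

def histogram_shift(img: np.ndarray) -> np.ndarray:
    """Subtract the minimum gray value so the histogram moves toward zero."""
    return img - img.min()
```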
The inventors have conducted many experiments with the method of the present invention and obtained good experimental results, indicating that the method is effective and feasible.

Claims (5)

1. A method for realizing off-line handwritten signature verification based on artificial intelligence, characterized in that the method comprises the following operation steps:
(1) preprocessing the signature picture M1 to be verified according to a first feature extraction method to obtain the texture feature F1 of M1; preprocessing the real signature picture M2 according to the first feature extraction method to obtain the texture feature F2 of M2; the first feature extraction method performs the following operations on the signature picture in sequence: graying, background removal, histogram shift, size adjustment, interpolation and quantization, gray-level co-occurrence matrix calculation, and texture-feature calculation;
(2) inputting the texture features F1 and F2 into a first decider to obtain a first decision result D0; D0 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are; the first decider is an SVM classifier;
(3) preprocessing the signature picture M1 to be verified according to a second feature extraction method to obtain the depth feature P1 of M1; preprocessing IM1, the pixel-inverted picture of M1, according to the second feature extraction method to obtain its depth feature IP1;
preprocessing the real signature picture M2 according to the second feature extraction method to obtain the depth feature P2 of M2; preprocessing IM2, the pixel-inverted picture of M2, according to the second feature extraction method to obtain its depth feature IP2;
the second feature extraction method extracts the depth features of a picture through a backbone network and an attention network;
(4) combining the depth features P1 and P2 into a fusion feature FP1 and inputting FP1 into a second decider to obtain a second decision result D1;
combining the depth features P1 and IP2 into a fusion feature FP2 and inputting FP2 into a third decider to obtain a third decision result D2;
combining the depth features IP1 and P2 into a fusion feature FP3 and inputting FP3 into a fourth decider to obtain a fourth decision result D3;
combining the depth features IP1 and IP2 into a fusion feature FP4 and inputting FP4 into a fifth decider to obtain a fifth decision result D4;
each of the decision results D1 to D4 takes only the value 0 or 1, where 0 indicates the two signatures are not from the same person and 1 indicates they are;
the second, third, fourth and fifth deciders are twin neural networks with identical structures and parameters;
(5) the final verification result D is calculated according to the following formula:
D=D0+D1+D2+D3+D4
if D ≥ 3, the signature is genuine; otherwise it is forged.
2. The method for realizing off-line handwritten signature verification based on artificial intelligence as claimed in claim 1, wherein: the specific content of the second feature extraction method comprises the following operation steps:
(31) firstly, adjusting the size of a signature picture to a set size;
(32) inputting the resized signature picture into a feature pyramid network and processing it through a series of 3 × 3 convolution layers, 3 × 3 pooling layers and activation layers to obtain three feature maps with sizes 110 × 77, 55 × 38 and 27 × 19, named fe1, fe2 and fe3 respectively;
(33) upsampling feature map fe3 once to resize it to 55 × 38 (named fe4); concatenating fe2 and fe4 to obtain cfe2; upsampling fe4 once to resize it to 110 × 77 (named fe5); concatenating fe1 and fe5 to obtain cfe1; and finally concatenating cfe1, cfe2 and fe3 to obtain the combined feature fe;
(34) contrast-enhancing the signature picture and then downsampling it to obtain the feature Rfe;
(35) inputting the obtained feature fe into the attention network as its Q feature, and the obtained feature Rfe as both its K feature and its V feature;
(36) the attention network outputs the depth features of the signature picture.
3. The method for realizing off-line handwritten signature verification based on artificial intelligence as claimed in claim 1, wherein the twin neural network operates as follows: a 3 × 3 convolution layer, a 3 × 3 pooling layer and a ReLU activation function produce two 110 × 77 × 128 feature vectors; these are concatenated into a 110 × 77 × 256 feature vector; further convolution and pooling operations reduce it to a 110 × 77 × 32 feature; a Flatten layer flattens this into a one-dimensional feature vector of length 271040; and this vector is input into a 271040 × 1 fully connected layer to obtain the decision result.
4. The method for realizing off-line handwritten signature verification based on artificial intelligence as claimed in claim 1, wherein: the twin neural network is trained in advance by using a binary cross entropy loss function.
5. The method for realizing off-line handwritten signature verification based on artificial intelligence as claimed in claim 1, wherein the histogram shift operation moves the histogram of the signature picture's pixel gray values toward zero while keeping the background white; specifically, the minimum gray value in the signature picture is subtracted from every pixel.
CN202210028558.5A 2022-01-11 Method for realizing off-line handwritten signature verification based on artificial intelligence (Pending)

Priority Applications (1)

Application number: CN202210028558.5A
Priority date / filing date: 2022-01-11
Title: Method for realizing off-line handwritten signature verification based on artificial intelligence

Publications (1)

Publication number: CN114360071A
Publication date: 2022-04-15

Family

ID=81109196

Country Status (1)

CN: CN114360071A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393966A (en) * 2022-10-27 2022-11-25 中鑫融信(北京)科技有限公司 Dispute mediation data processing method and system based on credit supervision
CN115966029A (en) * 2023-03-09 2023-04-14 珠海金智维信息科技有限公司 Offline signature authentication method and system based on attention mechanism
CN115966029B (en) * 2023-03-09 2023-11-07 珠海金智维信息科技有限公司 Offline signature authentication method and system based on attention mechanism
CN117475519A (en) * 2023-12-26 2024-01-30 厦门理工学院 Off-line handwriting identification method based on integration of twin network and multiple channels
CN117475519B (en) * 2023-12-26 2024-03-12 厦门理工学院 Off-line handwriting identification method based on integration of twin network and multiple channels

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination