CN109190579B - Signature handwriting authentication method based on a dual-learning generative adversarial network SIGAN


Info

Publication number
CN109190579B
Authority
CN
China
Prior art keywords
picture
signature
handwriting
sigan
network
Prior art date
Legal status
Active
Application number
CN201811076494.6A
Other languages
Chinese (zh)
Other versions
CN109190579A
Inventor
贾世杰
王思越
Current Assignee
Dalian Jiaotong University
Original Assignee
Dalian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Dalian Jiaotong University
Priority to CN201811076494.6A
Publication of CN109190579A
Application granted
Publication of CN109190579B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/30: Writer recognition; Reading and verifying signatures
    • G06V40/33: Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Abstract

The invention provides a signature handwriting authentication method based on a dual-learning generative adversarial network, SIGAN (Signature Identification GAN). The method is the first to apply generative adversarial network technology to the signature handwriting authentication problem: drawing on the dual learning idea, a dedicated SIGAN network is designed to carry out the authentication. The loss value of the trained discriminator is taken as the authentication threshold and compared with the loss value produced when the signature picture under test is passed through the network, thereby deciding whether the signature is genuine. An experimental data set covering five hard-tipped pen types was constructed, containing both the genuine signatures of the signer and deliberate imitations by other people. Experimental results show that the average accuracy of the SIGAN-based signature authentication model reaches 91.2%, which is 3.6% higher than traditional image authentication methods and far above the 72.3% achieved in a subjective human-eye test.

Description

Signature handwriting authentication method based on a dual-learning generative adversarial network SIGAN
Technical Field
The invention relates to the technical fields of image processing and pattern recognition, and in particular to a signature handwriting authentication method based on a dual-learning generative adversarial network SIGAN.
Background
Handwriting authentication is a specialized technique that identifies a writer from the writing-skill habits reflected in his or her handwriting and drawings. Signature authentication is an important branch of handwriting authentication with wide application in social life, such as contract signing and document confirmation; in criminal investigation the result of signature authentication can serve as an important clue for solving a case, and it can also be used as forensic evidence.
This patent designs a dual-learning-based generative adversarial network, SIGAN. SIGAN has two generators and two discriminators: one generator converts an X-domain picture (in this example, a Song-typeface signature picture) into a Y-domain picture (in this example, a handwritten signature picture), while the other generator does the opposite, so the X-domain and Y-domain pictures reconstruct each other; meanwhile the two discriminators try to distinguish real from generated pictures in the two domains, and the goal is to minimize the reconstruction error.
There is an important difference between adversarial training and a traditional neural network. A neural network needs a cost function that evaluates how well it performs; this function is the basis of what and how the network learns. In a traditional neural network the cost function must be carefully designed by human experts, but for a process as complex as a generative model it is not easy to construct a good cost function. This is where adversarial networks shine: an adversarial network learns its own cost function, i.e. its own complex error rules, with no need for careful manual construction.
In practical applications, signature handwriting is usually collected as the signer's handwritten draft: the collected handwriting is usually a colour picture and the signature is not necessarily centred, so this patent first preprocesses the signature pictures. Because the pen type of the signature to be authenticated is not always known, when it is unknown, genuine signatures written with several pen types must be collected; when the pen type is known, only genuine signatures of that pen type need to be collected, which improves the authentication accuracy.
Disclosure of Invention
In view of the technical problems that the generator and discriminator networks confront and promote each other, that many training samples are needed, and that the model is otherwise too free and uncontrollable, a signature handwriting authentication method based on a dual-learning generative adversarial network SIGAN is provided. The invention verifies signature handwriting mainly by computer image recognition technology, using the following technical means:
A signature handwriting authentication method based on a dual-learning generative adversarial network SIGAN comprises the following steps:
S1: collect genuine signature handwriting pictures of the signer in advance, preprocess them, and apply data enhancement to the preprocessed signature pictures to obtain the handwritten signature pictures; generate a Song-typeface signature picture from each collected genuine signature handwriting picture, and splice the Song-typeface picture with the handwritten signature picture to obtain a spliced signature picture;
S2: take the spliced signature pictures as training samples of the adversarial network, where the adversarial network SIGAN comprises two structurally identical signature handwriting generators G_A, G_B and two structurally identical signature handwriting discriminators D_A, D_B;
S3: train the generative adversarial network SIGAN model by minimizing the loss function L(u, v), and optimize the model based on the minimum-reconstruction-error criterion; the loss function L(u, v) is:
L(u, v) = αL_pixel(u, v) + βL_adv(u, v)
where α and β are normalized weighting coefficients, v is a handwritten signature picture, and u is a standard Song-typeface signature picture;
the loss function of the SIGAN model consists of two parts, a pixel loss and an adversarial loss;
the pixel loss is:
L_pixel(u, v) = ||u - G_B(G_A(u; θ))||_1 + ||v - G_A(G_B(v; θ))||_1
where G_A(u; θ) is the generated handwritten signature picture, G_B(v; θ) is the generated Song-typeface signature picture, and θ denotes the parameters of the generator networks;
the adversarial loss is:
L_Aadv = -log D_A(G_A(u; θ))
L_Badv = -log D_B(G_B(v; θ))
L_adv(u, v) = L_Aadv + L_Badv
where D_A(·) gives the similarity of the generated handwritten signature to a real handwritten signature and D_B(·) gives the similarity of the generated Song-typeface signature to a real Song-typeface signature;
the process of optimizing the training model comprises:
S301: generator G_A translates the standard Song-typeface Chinese-character picture into a handwritten signature picture, and the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or a picture translated by G_A;
S302: generator G_B translates the handwritten signature picture into a standard Song-typeface Chinese-character picture, and the corresponding discriminator D_B judges whether a standard Song-typeface Chinese-character picture is a real picture or a picture translated by G_B;
S303: feeding the translation result of generator G_A to generator G_B yields a reconstruction of the standard Song-typeface Chinese-character picture; feeding the translation result of generator G_B to generator G_A yields a reconstruction of the handwritten signature picture;
S304: by continuously iterating and minimizing the reconstruction error, the generative adversarial network SIGAN obtains the optimized model.
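Steps S301 to S304 can be illustrated with a deliberately tiny numerical sketch: two scalar "generators" are driven by gradient descent on the reconstruction error until translating X → Y → X returns the input. Everything here (scalar weights, learning rate, iteration count) is a toy stand-in for the real convolutional networks:

```python
# Toy scalar "generators": G_A maps X -> Y as wa*x, G_B maps Y -> X as wb*y.
# Gradient descent on the squared reconstruction error drives the round trip
# G_B(G_A(x)) back toward x, i.e. the product wa*wb toward 1.
wa, wb = 0.2, 0.3      # illustrative initial weights
lr = 0.1               # illustrative learning rate
x = 1.0                # stand-in X-domain "picture"
for _ in range(500):
    err = wb * (wa * x) - x          # reconstruction error of the X domain
    wa -= lr * err * wb * x          # d(0.5*err^2)/d(wa)
    wb -= lr * err * wa * x          # d(0.5*err^2)/d(wb), using updated wa
print(wa * wb)                       # approaches 1: reconstruction error minimized
```

The same idea, applied symmetrically in both domains and with discriminators added, is what the iterative minimization of S304 performs.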
S4: and calling the optimized model to identify the signature handwriting to be identified. After the training is finished, the loss function value of the discriminator is stabilized, and D is obtained at the momentALoss value L ofAadvAs an authentication threshold; comparing the threshold value with the loss value loss obtained by the signature handwriting picture to be tested through the network during testing, and if the loss value loss is less<LAadvAnd if not, identifying the handwriting as false.
Further, the step S4 includes a step of preprocessing the signature handwriting to be authenticated.
Further, the processing process of collecting the person's true signature handwriting picture includes:
a. converting the person real signature handwriting picture into a binary picture;
b. removing redundant blank parts of the binary image boundary;
c. and (5) supplementing the edge-removed binary picture into a square picture, and then adjusting the resolution to be a fixed resolution.
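A minimal sketch of steps a to c using NumPy only. The threshold value, the nearest-neighbour resize, and the helper name are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def preprocess(img, thresh=128, size=256):
    """a) binarize, b) trim blank margins, c) pad to square and resize."""
    binary = (img < thresh).astype(np.uint8)            # ink = 1, paper = 0
    rows = np.flatnonzero(binary.any(axis=1))           # rows containing ink
    cols = np.flatnonzero(binary.any(axis=0))           # columns containing ink
    cropped = binary[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    h, w = cropped.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=np.uint8)     # blank square canvas
    square[(side - h) // 2:(side - h) // 2 + h,
           (side - w) // 2:(side - w) // 2 + w] = cropped
    idx = np.arange(size) * side // size                # nearest-neighbour resize
    return square[idx][:, idx]

# 10x20 grey-level picture with a dark stroke off-centre
img = np.full((10, 20), 255, dtype=np.uint8)
img[2:5, 3:9] = 0
out = preprocess(img)
print(out.shape)   # (256, 256)
```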
Further, the data enhancement process includes: respectively cutting the existing signature handwriting picture into pictures with the resolution less than the fixed resolution in an up-down symmetrical mode, and then readjusting the resolution of the cut pictures to be the fixed resolution of 256 multiplied by 256.
Further, the generator GAConverting X-domain pictures into Y-domain pictures, the generator GBAnd converting the Y-domain picture into an X-domain picture, wherein the X-domain picture is a handwritten signature handwriting picture, and the Y-domain picture is a standard Song-style Chinese character picture generated according to the handwritten signature handwriting.
Further, the picture stitching includes:
and transversely splicing the processed signature handwriting picture and the processed standard Chinese character picture to obtain a spliced signature picture.
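A minimal sketch of the transverse splice; the left/right order of the two pictures is an assumption:

```python
import numpy as np

# transverse (side-by-side) splice of the two 256x256 pictures
handwritten = np.zeros((256, 256), dtype=np.uint8)   # stand-in handwritten picture
song_style = np.ones((256, 256), dtype=np.uint8)     # stand-in Song-typeface picture
pair = np.concatenate([song_style, handwritten], axis=1)
print(pair.shape)   # (256, 512)
```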
Compared with the prior art, the invention has the following advantages:
1. Compared with manual signature verification, the invention performs handwriting verification by computer image recognition, which avoids the subjectivity of manual verification and is fast.
2. Compared with traditional image authentication methods, the generative adversarial network does not suffer from the lack of a standard basis for feature selection. And since handwriting authentication is one of the biometric recognition technologies, it is easy to acquire and to popularize.
In conclusion, traditional image authentication methods still suffer from a lack of standard basis for feature selection and from low authentication accuracy. The technical scheme of the invention applies generative adversarial network technology to the signature handwriting authentication problem for the first time: drawing on the dual learning idea, a dedicated SIGAN network is designed to accomplish the authentication task. It improves considerably on traditional image authentication methods, far exceeds subjective human-eye testing, and does not require a large number of training samples.
For the above reasons, the method can be popularized in fields such as handwriting authentication.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a block diagram of the algorithm of the present invention.
Fig. 2 is a network framework and data flow diagram of the present invention.
FIG. 3 is a diagram of the signature handwriting generator structure of the generative adversarial network SIGAN according to the present invention.
FIG. 4 is a diagram of the signature handwriting discriminator structure of the generative adversarial network SIGAN according to the present invention.
FIG. 5 is a training flow diagram of the method of the present invention.
FIG. 6 is a flow chart of an optimized training model of the method of the present invention.
FIG. 7 is a test flow chart of the method of the present invention.
FIG. 8 is a diagram illustrating an example of a library according to an embodiment of the present invention.
FIG. 9 is a graph comparing the average accuracy of handwriting authentication under the mixed pen type training in example 1 of the present invention.
FIG. 10 is a graph showing the comparison of time taken for the mixed pen type training in example 1 of the present invention.
FIG. 11 shows the simulated signature erroneously determined to be true in example 1 of the present invention.
Fig. 12 shows the true signature determined to be false by mistake in embodiment 1 of the present invention.
FIG. 13 is a graph comparing the results of human eye identification and SIGAN identification in example 3 of the present invention.
Detailed Description
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
As shown in FIG. 1, the invention provides a signature handwriting authentication method based on a dual-learning generative adversarial network, comprising the following steps:
S1: collect genuine signature handwriting pictures of the signer in advance, preprocess them, and apply data enhancement to the preprocessed signature pictures to obtain the handwritten signature pictures; generate a Song-typeface signature picture from each collected genuine signature handwriting picture, and splice the Song-typeface picture with the handwritten signature picture to obtain a spliced signature picture;
S2: take the spliced signature pictures as training samples of the adversarial network, where the generative adversarial network SIGAN comprises two structurally identical signature handwriting generators G_A, G_B and two structurally identical signature handwriting discriminators D_A, D_B;
S3: train the generative adversarial network SIGAN model by minimizing the loss function L(u, v), and optimize it based on the minimum-reconstruction-error criterion;
S4: call the optimized model to authenticate the signature handwriting to be authenticated.
The invention provides a signature handwriting authentication method based on the dual-learning generative adversarial network SIGAN. FIG. 2 shows the overall framework and data flow of SIGAN. Generator G_A translates a standard Song-typeface Chinese-character picture into a handwritten signature picture, and the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or a picture translated by G_A. Generator G_B translates a handwritten signature picture into a standard Song-typeface Chinese-character picture, and the corresponding discriminator D_B judges whether a standard Song-typeface picture is a real picture or a picture translated by G_B. Feeding the translation result of G_A to G_B yields a reconstruction of the standard Song-typeface picture; feeding the translation result of G_B to G_A yields a reconstruction of the handwritten signature picture. D_A gives the similarity of the generated handwritten signature to a real handwritten signature, and D_B gives the similarity of the generated Song-typeface signature to a real Song-typeface signature.
As shown in fig. 3, the generators G_A and G_B share the same network structure: the fully connected layers of a traditional convolutional neural network are removed, and feature extraction and generation are realized with convolution and deconvolution only. Each generator consists of a 16-layer convolutional network: the first 8 layers convolve and downsample, and the last 8 layers deconvolve and upsample.
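A quick arithmetic check of this 8-down/8-up layout (pure Python; it tracks only the feature-map side length, not channels or weights, and assumes every layer has stride 2): a 256 × 256 input shrinks to 1 × 1 after the 8 convolutional layers and is restored to 256 × 256 after the 8 deconvolutional layers.

```python
# Track only the feature-map side length through the 16 layers.
size = 256
down = []
for _ in range(8):          # eight stride-2 convolutions
    size //= 2
    down.append(size)
up = []
for _ in range(8):          # eight stride-2 deconvolutions
    size *= 2
    up.append(size)
print(down)   # [128, 64, 32, 16, 8, 4, 2, 1]
print(up)     # [2, 4, 8, 16, 32, 64, 128, 256]
```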
As shown in FIG. 4, the discriminators D_A and D_B share the same network structure, which is based on VGG16 with the last two fully connected layers removed.
As shown in fig. 5, the training process includes:
(1) set the network parameters: epochs = 30, batch size = 1, learning rate = 0.0005;
(2) feed the processed signature handwriting pictures as input into the dual-learning generative adversarial network SIGAN and begin training to obtain the training model; as shown in fig. 6, the optimization process is as follows:
S301: generator G_A translates the standard Song-typeface Chinese-character picture into a handwritten signature picture, and the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or a picture translated by G_A;
S302: generator G_B translates the handwritten signature picture into a standard Song-typeface Chinese-character picture, and the corresponding discriminator D_B judges whether a standard Song-typeface Chinese-character picture is a real picture or a picture translated by G_B;
S303: feeding the translation result of generator G_A to generator G_B yields a reconstruction of the standard Song-typeface Chinese-character picture; feeding the translation result of generator G_B to generator G_A yields a reconstruction of the handwritten signature picture;
s304: and the generated countermeasure network SIGAN obtains an optimized model by continuously iterating and minimizing the reconstruction error.
(3) After 30 epochs are finished, the training models and D are savedAA loss value;
as shown in fig. 7, the test flow includes:
calling the trained model and the signature handwriting picture to be authenticated;
The generative adversarial network SIGAN is trained by minimizing the loss function L(u, v), which consists of two parts, a pixel loss and an adversarial loss:
L(u, v) = αL_pixel(u, v) + βL_adv(u, v)
where α and β are normalized weighting coefficients, v is a handwritten signature picture, and u is a standard Song-typeface signature picture.
The pixel loss:
L_pixel(u, v) = ||u - G_B(G_A(u; θ))||_1 + ||v - G_A(G_B(v; θ))||_1
where G_A(u; θ) is the generated handwritten signature picture, G_B(v; θ) is the generated Song-typeface signature picture, and θ denotes the parameters of the generator networks.
The adversarial loss:
L_Aadv = -log D_A(G_A(u; θ))
L_Badv = -log D_B(G_B(v; θ))
L_adv(u, v) = L_Aadv + L_Badv
where D_A(·) gives the similarity of the generated handwritten signature to a real handwritten signature and D_B(·) gives the similarity of the generated Song-typeface signature to a real Song-typeface signature.
After training the loss function value of the discriminator has stabilized; the loss value L_Aadv of D_A at this point is taken as the authentication threshold, and during testing it is compared with the loss value obtained when the signature handwriting picture under test passes through the network:
if loss < L_Aadv, the handwriting is authenticated as genuine;
if loss ≥ L_Aadv, the handwriting is authenticated as forged.
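The decision rule can be written as a tiny helper; the function name and the threshold value 0.42 below are hypothetical:

```python
def authenticate(test_loss, threshold):
    """Below the saved D_A loss threshold -> genuine; otherwise -> forged."""
    return "true" if test_loss < threshold else "false"

L_Aadv = 0.42                        # hypothetical saved threshold
print(authenticate(0.30, L_Aadv))    # true
print(authenticate(0.55, L_Aadv))    # false
```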
Example 1
The experimental picture library of this embodiment contains 640 signature pictures. The 320 positive samples are genuine signatures: the signer wrote 64 signatures with each of 5 pen types (gel pen, ball-point pen, pencil, blue pen and black pen). The 320 negative samples are imitations: 4 other students in the laboratory imitated the signature with the same 5 pen types, each student writing 16 imitations per pen type, as shown in fig. 8. The training, validation and test sets are split in the ratio 4:1:5; the test set contains 320 signature pictures, consisting of 160 randomly drawn positive samples and 160 randomly drawn negative samples, and all remaining pictures form the training and validation sets. The SIGAN in this embodiment is trained on positive samples only, whereas the AlexNet image classifier used in the comparison experiment is trained on all positive and negative samples of the training set.
This embodiment uses the validation set to set the parameters α = 0.6 and β = 0.4; the other parameter settings of SIGAN and AlexNet are shown in the following table:
table 1 network parameter settings
In this embodiment the average accuracy is used as the index of model quality; the specific formulas are:
A = (TP + TN)/(TP + TN + FP + FN) (9)
where TP is the number of positive samples the model predicts as positive, TN the number of negative samples predicted as negative, FP the number of negative samples predicted as positive, and FN the number of positive samples predicted as negative;
true positive rate: TPR = TP/(TP + FN) (10)
true negative rate: TNR = TN/(TN + FP) (11)
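Formulas (9) to (11) as small helper functions, with hypothetical confusion-matrix counts for a 320-picture test set:

```python
def average_accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)   # formula (9)

def tpr(tp, fn):
    return tp / (tp + fn)                    # true positive rate, formula (10)

def tnr(tn, fp):
    return tn / (tn + fp)                    # true negative rate, formula (11)

# hypothetical confusion counts for a 320-picture test set
print(average_accuracy(150, 138, 22, 10))    # 0.9
```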
In this experiment the training set includes all pen types; 160, 80, 40, 20, 10 and 5 signature handwriting pictures are randomly drawn from the training set for training (with the same number of pictures per pen type). Fig. 9 compares the average handwriting authentication accuracy with and without data enhancement. The mixed-pen-type training times are shown in fig. 10; testing takes 0.27 s per image. Table 2 gives the authentication results of the model trained on all 160 training-set pictures.
TABLE 2 Test results of the model trained on the whole training set (with data enhancement)
From the above experimental results, the following conclusions can be drawn:
① The model trained with all 160 training pictures has the highest accuracy, reaching 90.0%, but the accuracy falls gradually as the training set shrinks; when the training set has fewer than 10 pictures (without data enhancement), the average accuracy drops sharply below 80%. This shows that a larger training set benefits the authentication accuracy of the model.
It can be seen from fig. 9 that the identification accuracy of the model can be effectively improved by adopting data enhancement, and the less the number of original training samples is, the more obvious the data enhancement effect is; when the number of the original training samples is only 5, the identification accuracy can be improved by 17% through data enhancement.
③ Fig. 10 shows that the authentication performance of the SIGAN constructed here is superior to that of DualGAN.
④ Table 2 shows that the gel-pen handwriting pictures are authenticated with the highest accuracy, up to 100%, while the fountain-pen pictures have the lowest average accuracy. The reason is that fountain-pen strokes vary in thickness and sometimes break, whereas a gel pen releases ink steadily and produces strokes of uniform thickness, so its signature handwriting is the clearest. Examples of wrongly authenticated signature pictures are shown in figs. 11 and 12.
Example 2
Five further training sets each containing only a single pen type were also set up in the experiment. The results are shown in table 3; each group took 79 minutes to train, and testing takes 0.27 s per image.
TABLE 3 identification accuracy of models trained with different pen types
① In terms of overall test performance, training with gel-pen pictures gives the highest average authentication accuracy, although it is lower than the result obtained when training and testing use the same pen type. If the pen types of the training and test sets differ, the test accuracy drops to some degree (by less than 13.5%); the lowest average accuracy, 71.8%, occurs when a model trained on pencil pictures is tested on blue-pen pictures.
② The average authentication accuracy of the single-pen-type models is lower than that of the mixed-pen-type model. In practical application this means that if the pen type of the signature to be tested is known, only signature handwriting of that pen type need be used for training; if the pen type is unknown, signature pictures of as many different pen types as possible should be collected for training.
To verify the advantages of the method, a neural-network classifier based on a depth model was implemented for comparison. AlexNet is used as the depth model; its output is changed to two classes and its last three layers are fine-tuned with all training samples. The results are shown in table 4; training took 34 minutes and testing takes 0.23 s per image. Table 4 shows that the authentication accuracy of the AlexNet classifier is lower than that of the SIGAN method, and that (fine-tuning) the AlexNet depth model requires both positive and negative samples, whereas the SIGAN method needs only positive samples.
TABLE 4 comparison of the results of the AlexNet and SIGAN models
Example 3
A human-eye authentication experiment was also set up. The participants were five students who had never taken part in signature handwriting collection; the test set is the same as the computer test set. The experiment is divided into six groups, with training sets of 5, 10, 20, 40, 80 and 160 signature pictures respectively. Before each test the five students were given enough time (generally 5 to 10 minutes) to study the training-set pictures, and the test began once they finished learning. The results are shown in fig. 13; the average test time was 3.2 s per image.
① The highest accuracy among the five participants is only 79% and the lowest only 59%, while the best average accuracy of the SIGAN network is 14% higher than human-eye authentication. This shows that with a sufficiently large training set, SIGAN-based signature authentication is more accurate than subjective human-eye authentication, and also faster.
② Increasing the training set size affects human eyes less than it affects the network model, but when the training set is small, human-eye accuracy exceeds that of the adversarial network. The adversarial network therefore achieves its advantage only with a sufficiently large training set.
In the above embodiments of the present invention, each embodiment is described with its own emphasis; for parts not described in detail in one embodiment, reference may be made to the related descriptions of other embodiments.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features equivalently replaced, without departing in essence from the scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. A method for signature handwriting authentication with a generative adversarial network SIGAN based on dual learning, characterized by comprising the following steps:
S1: collecting genuine signature handwriting pictures of a person in advance, preprocessing the collected genuine signature handwriting pictures, and performing data enhancement on the preprocessed pictures to obtain handwritten signature handwriting pictures; generating Song-style signature pictures from the collected genuine signature handwriting pictures, and splicing each Song-style signature picture with the corresponding handwritten signature handwriting picture to obtain spliced signature pictures;
S2: taking the spliced signature pictures as training samples for the adversarial network; the generative adversarial network SIGAN comprises two structurally identical signature handwriting generators G_A and G_B, and two structurally identical signature handwriting discriminators D_A and D_B;
S3: training the generative adversarial network SIGAN model by minimizing the loss function L(u, v), and optimizing the model based on the minimum-reconstruction-error criterion;
the minimized loss function L(u, v) in step S3 is:
L(u, v) = α·L_pixel(u, v) + β·L_adv(u, v)
wherein α and β are normalized weighting coefficients, v is a handwritten signature handwriting picture, and u is a standard Song-style signature picture;
the loss function of the generative adversarial network SIGAN model consists of two parts: the pixel loss and the adversarial loss;
the pixel loss is:
Figure FDA0003210372380000011
wherein G_A(·) is the generated handwritten signature picture, G_B(·) is the generated Song-style signature picture, and θ is the parameters of the generator networks;
the adversarial loss is:
Figure FDA0003210372380000012
Figure FDA0003210372380000013
Figure FDA0003210372380000014
wherein D_A(·) is the similarity of the generated handwritten signature to the real handwritten signature, and D_B(·) is the similarity of the generated Song-style signature to the real Song-style signature;
after training is finished, the loss function value of the discriminator stabilizes; at this moment, the loss value L_Aadv of D_A is taken as the authentication threshold; during testing, this threshold is compared with the loss value loss obtained by passing the signature handwriting picture to be tested through the network: if loss < L_Aadv, the handwriting is authenticated as genuine; otherwise, it is authenticated as forged;
S4: calling the optimized model to authenticate the signature handwriting to be authenticated.
2. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1, wherein step S4 further comprises the step of preprocessing the signature handwriting to be authenticated.
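The authentication-threshold rule of claim 1 can be sketched as follows. This is a minimal illustration only: taking the tail average of D_A's recorded loss as L_Aadv is an assumption, since the claim says only that the discriminator loss "stabilizes".

```python
def authentication_threshold(d_a_losses, tail=10):
    """Estimate L_Aadv as the mean of D_A's loss over the last `tail`
    training iterations, once the loss curve has stabilised (assumption)."""
    recent = d_a_losses[-tail:]
    return sum(recent) / len(recent)

def authenticate(test_loss, threshold):
    """Claim 1's decision rule: genuine if and only if loss < L_Aadv."""
    return test_loss < threshold
```

Here `test_loss` stands for the loss value obtained by passing the questioned signature picture through the trained network.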
3. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1 or 2, wherein the process of preprocessing the collected genuine signature handwriting pictures comprises:
a. converting the genuine signature handwriting picture into a binary picture;
b. removing the redundant blank border of the binary picture;
c. padding the trimmed binary picture into a square picture, and then adjusting its resolution to a fixed resolution.
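A minimal sketch of preprocessing steps a-c, assuming the input is a grayscale picture held in a NumPy array; the binarisation threshold of 128 and the nearest-neighbour resizing are assumptions, not values specified by the claim.

```python
import numpy as np

def preprocess(img, fixed=256, thresh=128):
    """Steps a-c of claim 3 (illustrative sketch)."""
    # a. binarise: ink pixels -> 1, background -> 0
    binary = (np.asarray(img) < thresh).astype(np.uint8)
    # b. remove the blank border: keep the bounding box of the ink pixels
    ys, xs = np.nonzero(binary)
    binary = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # c. pad to a square, then resize to the fixed resolution
    h, w = binary.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=np.uint8)
    top, left = (side - h) // 2, (side - w) // 2
    square[top:top + h, left:left + w] = binary
    # nearest-neighbour resize to fixed x fixed
    idx = (np.arange(fixed) * side / fixed).astype(int)
    return square[np.ix_(idx, idx)]
```

The output is a fixed-resolution binary square regardless of the aspect ratio of the original signature.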
4. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1, wherein the data enhancement process comprises: cutting the existing signature handwriting picture, symmetrically from top and bottom, into pictures with resolution smaller than the fixed resolution, and then readjusting the resolution of the cut pictures to the fixed resolution of 256 × 256.
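An illustrative sketch of this enhancement on square grayscale arrays; the crop size of 224 and the nearest-neighbour resizing are assumptions (the claim requires only that the crops be smaller than the fixed resolution).

```python
import numpy as np

def resize_nn(a, out=256):
    """Nearest-neighbour resize of a square array to out x out."""
    idx = (np.arange(out) * a.shape[0] / out).astype(int)
    return a[np.ix_(idx, idx)]

def enhance(img, crop=224):
    """Take two crops placed symmetrically against the top and bottom
    edges, then scale each back to the fixed 256 x 256 resolution."""
    h, w = img.shape
    top = img[:crop, :crop]
    bottom = img[h - crop:, w - crop:]
    return resize_nn(top), resize_nn(bottom)
```

Each input picture thus yields additional, slightly shifted training samples at the fixed resolution.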
5. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1, wherein the generator G_A converts X-domain pictures into Y-domain pictures and the generator G_B converts Y-domain pictures into X-domain pictures, wherein an X-domain picture is a handwritten signature handwriting picture and a Y-domain picture is a standard Song-style Chinese character picture generated from the handwritten signature handwriting.
6. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1, wherein the picture stitching comprises:
transversely splicing the preprocessed signature handwriting picture with the corresponding standard Song-style Chinese character picture to obtain the spliced signature picture.
7. The method for signature handwriting authentication of the generative adversarial network SIGAN based on dual learning as claimed in claim 1, wherein the process of optimizing the training model comprises:
S301: the generator G_A translates a standard Song-style Chinese character picture into a handwritten signature picture, and the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or a picture translated by G_A;
S302: the generator G_B translates a handwritten signature picture into a standard Song-style Chinese character picture, and the corresponding discriminator D_B judges whether a standard Song-style Chinese character picture is a real picture or a picture translated by G_B;
S303: feeding the result translated by the generator G_A to the generator G_B yields a one-pass reconstruction of the standard Song-style Chinese character picture; feeding the result translated by the generator G_B to the generator G_A yields a one-pass reconstruction of the handwritten signature picture;
S304: the generative adversarial network SIGAN obtains the optimized model by iteratively minimizing the reconstruction error.
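Steps S301-S304 amount to minimising a cycle-reconstruction error. The toy sketch below replaces the patent's convolutional generators with invertible linear maps on flattened pictures, purely to show the quantity that S304 drives toward zero; every name and dimension here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for G_A (Song -> handwritten) and G_B (handwritten -> Song).
W_a = np.eye(16) + 0.1 * rng.normal(size=(16, 16))
W_b = np.linalg.inv(W_a)  # an ideal G_B would invert G_A exactly

def g_a(u):
    return u @ W_a

def g_b(v):
    return v @ W_b

def reconstruction_error(u, v):
    """S303: form the one-pass reconstructions G_B(G_A(u)) and G_A(G_B(v))
    and compare them with the originals; S304 minimises this quantity."""
    return (np.abs(g_b(g_a(u)) - u).mean() +
            np.abs(g_a(g_b(v)) - v).mean())
```

With W_b the exact inverse of W_a, the error sits at numerical zero; in the patent's setting the generators are neural networks and the error is reduced iteratively by training rather than by matrix inversion.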
CN201811076494.6A 2018-09-14 2018-09-14 Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning Active CN109190579B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811076494.6A CN109190579B (en) 2018-09-14 2018-09-14 Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811076494.6A CN109190579B (en) 2018-09-14 2018-09-14 Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning

Publications (2)

Publication Number Publication Date
CN109190579A CN109190579A (en) 2019-01-11
CN109190579B true CN109190579B (en) 2021-11-16

Family

ID=64911535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811076494.6A Active CN109190579B (en) 2018-09-14 2018-09-14 Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning

Country Status (1)

Country Link
CN (1) CN109190579B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110096977B (en) * 2019-04-18 2021-05-11 中金金融认证中心有限公司 Training method of handwriting authentication model, handwriting authentication method, device and medium
CN111046760B (en) * 2019-11-29 2023-08-08 山东浪潮科学研究院有限公司 Handwriting identification method based on domain countermeasure network
CN111553277B (en) * 2020-04-28 2022-04-26 电子科技大学 Chinese signature identification method and terminal introducing consistency constraint
CN111833267A (en) * 2020-06-19 2020-10-27 杭州电子科技大学 Dual generation countermeasure network for motion blur restoration and operation method thereof
CN114155613B (en) * 2021-10-20 2023-09-15 杭州电子科技大学 Offline signature comparison method based on convenient sample acquisition
CN115281662B (en) * 2022-09-26 2023-01-17 北京科技大学 Intelligent auxiliary diagnosis system for instable chronic ankle joints

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803082A (en) * 2017-01-23 2017-06-06 重庆邮电大学 A kind of online handwriting recognition methods based on conditional generation confrontation network
CN107577985A (en) * 2017-07-18 2018-01-12 南京邮电大学 The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100254578A1 (en) * 2009-04-06 2010-10-07 Mercedeh Modir Shanechi Handwriting authentication method, system and computer program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803082A (en) * 2017-01-23 2017-06-06 重庆邮电大学 A kind of online handwriting recognition methods based on conditional generation confrontation network
CN107577985A (en) * 2017-07-18 2018-01-12 南京邮电大学 The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation

Also Published As

Publication number Publication date
CN109190579A (en) 2019-01-11

Similar Documents

Publication Publication Date Title
CN109190579B (en) Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning
CN104463101B (en) Answer recognition methods and system for character property examination question
CN111476200B (en) Face de-identification generation method based on generation of confrontation network
CN108764074A (en) Subjective item intelligently reading method, system and storage medium based on deep learning
CN103258157B (en) A kind of online handwriting authentication method based on finger information and system
CN108921092B (en) Melanoma classification method based on convolution neural network model secondary integration
CN109871851B (en) Chinese character writing normalization judging method based on convolutional neural network algorithm
Zois et al. A comprehensive study of sparse representation techniques for offline signature verification
CN105335719A (en) Living body detection method and device
CN113408535B (en) OCR error correction method based on Chinese character level features and language model
CN113591747B (en) Multi-scene iris recognition method based on deep learning
CN111091493B (en) Image translation model training method, image translation method and device and electronic equipment
CN112966685B (en) Attack network training method and device for scene text recognition and related equipment
CN109670559A (en) Recognition methods, device, equipment and the storage medium of handwritten Chinese character
CN110969681A (en) Method for generating handwriting characters based on GAN network
CN113095156B (en) Double-current network signature identification method and device based on inverse gray scale mode
CN110188750A (en) A kind of natural scene picture character recognition method based on deep learning
Pirrone et al. Papy-s-net: A siamese network to match papyrus fragments
WO2022156214A1 (en) Liveness detection method and apparatus
Sharma et al. Sign language gesture recognition
CN111079823A (en) Verification code image generation method and system
CN116403252A (en) Face recognition classification method based on multi-target feature selection of bidirectional dynamic grouping
Suwanwiwat et al. Icfhr 2018 competition on thai student signatures and name components recognition and verification (tsncrv2018)
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
Yatbaz et al. Deep learning based stress prediction from offline signatures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant