CN109190579A - A handwritten signature verification method based on the dual-learning generative adversarial network SIGAN - Google Patents
- Publication number: CN109190579A (application CN201811076494.6A)
- Authority
- CN
- China
- Prior art keywords
- picture
- signature
- handwriting
- sigan
- network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/30—Writer recognition; Reading and verifying signatures
- G06V40/33—Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Abstract
The present invention provides a handwritten signature verification method based on SIGAN, a generative adversarial network built on dual learning. The method applies generative adversarial network technology to the handwritten signature verification problem for the first time: drawing on the idea of dual learning, a dedicated SIGAN (Signature Identification GAN) network is designed to perform the verification. After training, the discriminator's loss value is used as the verification threshold, and the loss produced by passing a questioned signature picture through the network is compared against it to decide whether the signature is genuine or forged. An experimental data set covering signatures written with five hard-tip pen types was constructed, containing both the subject's genuine signatures and forgeries deliberately imitated by others. Experimental results show that the SIGAN-based signature verification model reaches an average accuracy of 91.2%, a 3.6% improvement over traditional image-recognition methods and far above the 72.3% achieved in human subjective tests.
Description
Technical field
The present invention relates to the technical fields of image processing and pattern recognition, and in particular to a handwritten signature verification method based on SIGAN, a generative adversarial network built on dual learning.
Background technique
Handwriting examination is the expertise of identifying a writer from the writing skills and habits reflected in handwritten text and drawings. Signature verification is an important part of handwriting examination and is widely applied in social life, for example in contract signing, document confirmation and written acknowledgement; in criminal investigation, the result of signature verification can serve as an important clue for solving a case, and a signature examination report can also be admitted as court evidence.
This patent designs SIGAN, a generative adversarial network based on dual learning. SIGAN has two generators and two discriminators: one generator converts pictures in domain X (Song-typeface signature pictures in this example) into pictures in domain Y (handwritten signature pictures in this example), while the other generator does the opposite, so that domain-X and domain-Y pictures reconstruct each other; the two discriminators try to distinguish real from generated pictures in their respective domains, and the final goal is to minimize the reconstruction error.
The adversarial training process differs from traditional neural network training in one important respect. A neural network needs a cost function to assess its performance; this function is the basis of what the network learns and how well it learns it. In a traditional neural network, the cost function must be meticulously hand-crafted by researchers, and for a process as complex as a generative model, constructing a good cost function is by no means easy. This is where adversarial networks shine: an adversarial network can learn its own cost function — its own rules of right and wrong — so no carefully engineered cost function needs to be constructed.
In practical applications, handwritten signatures are usually collected as handwriting on paper, typically as colour images in which the signature is not necessarily centred, so this patent preprocesses the signature pictures. Since the pen type of a questioned signature is not always known, genuine signatures should be collected with a variety of pen types when the pen type is uncertain; when the pen type is known, it suffices to collect genuine signatures written with that pen type, which helps improve verification accuracy.
Summary of the invention
In view of the technical problems noted above — that a generator and a discriminator that confront and mutually promote each other need many training samples, and that the model is otherwise too unconstrained — the present invention provides a handwritten signature verification method based on the dual-learning generative adversarial network SIGAN. The invention performs signature verification mainly by computer image recognition, using the following technical means:
A handwritten signature verification method based on the dual-learning generative adversarial network SIGAN, comprising the following steps:
S1: Collect the subject's genuine signature handwriting pictures in advance and preprocess them; apply data augmentation to the preprocessed signature pictures to obtain handwritten signature pictures; generate Song-typeface signature pictures from the collected genuine signatures and splice them with the handwritten signature pictures to obtain spliced signature pictures.
S2: Use the spliced signature pictures as training samples for the adversarial network. The adversarial network SIGAN comprises two structurally identical signature generators G_A and G_B and two structurally identical signature discriminators D_A and D_B.
S3: Train the adversarial network SIGAN model by minimizing the loss function L(u, v), and optimize the model under the minimum-reconstruction-error criterion. The loss function L(u, v) is:
L(u, v) = αL_pixel(u, v) + βL_adv(u, v)
where α and β are normalized weighting factors, v is the handwritten signature picture, and u is the standard Song-typeface signature picture.
The loss function of the adversarial network SIGAN model consists of two parts, a pixel loss and an adversarial loss.
Pixel loss L_pixel(u, v):
where G_A is the generated handwritten signature picture, G_B is the generated Song-typeface signature picture, and θ denotes the generator network parameters.
Adversarial loss L_adv(u, v):
where D_A is the similarity between the generated handwritten signature and the real handwritten signature, and D_B is the similarity between the generated Song-typeface signature and the real Song-typeface signature.
The model optimization process comprises:
S301: Generator G_A translates standard Song-typeface character pictures into handwritten signature pictures; the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or one translated by G_A.
S302: Generator G_B translates handwritten signature pictures into standard Song-typeface character pictures; the corresponding discriminator D_B judges whether a standard Song-typeface character picture is real or translated by G_B.
S303: The output of generator G_A is fed to generator G_B, and the result is a reconstruction of the standard Song-typeface character picture; likewise, the output of generator G_B is fed to generator G_A, and the result is a reconstruction of the handwritten signature picture.
S304: Through continuous iteration, the generative adversarial network SIGAN minimizes the reconstruction error, yielding the optimized model.
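Steps S301-S304 describe a cycle-reconstruction objective. The toy sketch below uses an invertible linear map and its inverse to stand in for G_A and G_B, showing how the X→Y→X and Y→X→Y round trips define the reconstruction error that S304 iteratively minimizes; the real SIGAN generators are of course convolutional networks, not linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'generators': an invertible linear map and its inverse stand in for G_A, G_B.
W = rng.normal(size=(4, 4))
g_a = lambda u: u @ W                    # S301: Song-typeface domain -> handwritten domain
g_b = lambda v: v @ np.linalg.inv(W)     # S302: handwritten domain -> Song-typeface domain

u = rng.normal(size=(2, 4))              # 'standard Song' feature vectors
v = rng.normal(size=(2, 4))              # 'handwritten' feature vectors

recon_u = g_b(g_a(u))                    # S303: round trip reconstructs u
recon_v = g_a(g_b(v))                    # S303: round trip reconstructs v
reconstruction_error = (np.abs(recon_u - u).mean()
                        + np.abs(recon_v - v).mean())  # S304: quantity to minimize
```

Because the toy maps are exact inverses, the reconstruction error is already (numerically) zero; training drives real generators toward the same property.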
S4: Use the optimized model to verify the questioned signature. After training, the discriminator's loss value stabilizes; the D_A loss value L_Aadv at this point is taken as the verification threshold. At test time, this threshold is compared with the loss value produced by passing the questioned signature picture through the network: if loss < L_Aadv, the handwriting is judged genuine; otherwise it is judged forged.
Further, step S4 also includes a step of preprocessing the questioned signature.
Further, the processing of the collected genuine signature handwriting pictures comprises:
A. converting the genuine signature handwriting picture into a binary picture;
B. removing the redundant blank parts at the borders of the binary picture;
C. padding the trimmed binary picture into a square picture and then adjusting its resolution to a fixed resolution.
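Steps A-C above can be sketched as follows. This is an illustrative implementation under assumed details: the binarization threshold, the padding value and the nearest-neighbour resampling are choices of this sketch, not specified by the patent.

```python
import numpy as np

def preprocess(gray, threshold=128, size=256):
    """Steps A-C: binarize, trim blank borders, pad to a square, resize."""
    binary = (gray < threshold).astype(np.uint8)      # A: ink = 1, background = 0
    rows, cols = np.any(binary, axis=1), np.any(binary, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    trimmed = binary[r0:r1 + 1, c0:c1 + 1]            # B: drop blank margins
    h, w = trimmed.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=np.uint8)   # C: centre-pad to a square
    top, left = (side - h) // 2, (side - w) // 2
    square[top:top + h, left:left + w] = trimmed
    idx = (np.arange(size) * side) // size            # C: nearest-neighbour resize
    return square[idx][:, idx]
```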
Further, the data augmentation comprises: cropping each existing signature picture, in a vertically symmetric manner, into pictures smaller than the fixed resolution, and then resizing the cropped pictures back to the fixed resolution of 256 × 256.
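A minimal augmentation sketch, assuming the "vertically symmetric" crops are realised as opposite-corner crops plus a vertically mirrored crop (the patent does not fix the exact crop geometry) and resized back to 256 × 256 by nearest-neighbour sampling:

```python
import numpy as np

def augment(img, crop=224, out=256):
    """Cut sub-pictures smaller than the fixed resolution, then resize them back."""
    h, w = img.shape
    samples = [img[:crop, :crop],                 # top-left crop
               img[h - crop:, w - crop:],         # bottom-right crop
               np.flipud(img)[:crop, :crop]]      # vertically mirrored crop
    idx = (np.arange(out) * crop) // out          # nearest-neighbour upsampling
    return [s[idx][:, idx] for s in samples]
```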
Further, the generator G_A converts domain-X pictures into domain-Y pictures and the generator G_B converts domain-Y pictures into domain-X pictures, where the domain-X pictures are handwritten signature pictures and the domain-Y pictures are the standard Song-typeface character pictures generated from the handwritten signatures.
Further, the picture splicing comprises: horizontally splicing the processed signature picture with the processed standard character picture to obtain a spliced signature picture.
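The horizontal splicing amounts to a side-by-side concatenation; a sketch assuming both pictures share the same height:

```python
import numpy as np

def splice(handwritten, song):
    """Horizontally splice a handwritten signature picture with its Song-typeface picture."""
    assert handwritten.shape[0] == song.shape[0], "pictures must share the same height"
    return np.concatenate([handwritten, song], axis=1)
```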
Compared with the prior art, the invention has the following advantages:
1. Compared with manual signature examination, the invention verifies handwriting by computer image recognition, which avoids the subjectivity of manual examination and is fast.
2. Compared with traditional image-recognition methods, a generative adversarial network does not suffer from the problem of feature selection lacking a standard basis. Moreover, since handwriting examination is a biometric technology, the data are easy to acquire and the method is easy to popularize.
In summary, traditional image-recognition methods still suffer from feature selection lacking a standard basis and from low verification accuracy. The technical solution of the present invention applies adversarial generative network technology to the signature verification problem for the first time; drawing on the idea of dual learning, it designs a dedicated SIGAN network for the task, which improves considerably on traditional image-recognition methods, far exceeds human subjective testing, and does not require a large number of training samples.
For the above reasons, the present invention can be popularized in fields such as handwriting examination.
Brief description of the drawings
In order to explain the embodiments of the invention or the prior-art technical solutions more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; for those of ordinary skill in the art, other drawings can be obtained from them without creative labour.
Fig. 1 is the algorithm framework diagram of the invention.
Fig. 2 is the network framework and data-flow diagram of the invention.
Fig. 3 is the signature generator structure of the generative adversarial network SIGAN of the invention.
Fig. 4 is the signature discriminator structure of the generative adversarial network SIGAN of the invention.
Fig. 5 is the training flowchart of the method of the invention.
Fig. 6 is the model-optimization flowchart of the method of the invention.
Fig. 7 is the testing flowchart of the method of the invention.
Fig. 8 shows examples from the picture library of the embodiments.
Fig. 9 compares the average verification accuracy under mixed pen-type training in Embodiment 1.
Fig. 10 compares the training time used for mixed pen-type training in Embodiment 1.
Fig. 11 shows imitation signatures mistaken for genuine in Embodiment 1.
Fig. 12 shows genuine signatures mistaken for forgeries in Embodiment 1.
Fig. 13 compares human-eye and SIGAN verification results in Embodiment 3.
Specific embodiment
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. Based on the embodiments of the invention, all other embodiments obtained by those of ordinary skill in the art without creative labour shall fall within the scope of protection of the invention.
It should be noted that the terms "first", "second", etc. in the description, claims and drawings are used to distinguish similar objects, not to describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the invention described herein can be implemented in orders other than those illustrated or described. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units not expressly listed or inherent to such a process, method, product or device.
As shown in Fig. 1, the present invention provides a handwritten signature verification method based on a dual-learning generative adversarial network, comprising the following steps:
S1: Collect the subject's genuine signature handwriting pictures in advance and preprocess them; apply data augmentation to the preprocessed signature pictures to obtain handwritten signature pictures; generate Song-typeface signature pictures from the collected genuine signatures and splice them with the handwritten signature pictures to obtain spliced signature pictures;
S2: Use the spliced signature pictures as training samples for the adversarial network, the adversarial network SIGAN comprising two structurally identical signature generators G_A and G_B and two structurally identical signature discriminators D_A and D_B;
S3: Train the adversarial network SIGAN model by minimizing the loss function L(u, v), and optimize the model under the minimum-reconstruction-error criterion;
S4: Use the optimized model to verify the questioned signature.
The present invention provides a handwritten signature verification method based on the dual-learning generative adversarial network SIGAN. Fig. 2 shows the overall framework of SIGAN and its data flow. Generator G_A translates standard Song-typeface character pictures into handwritten signature pictures; the corresponding discriminator D_A judges whether a handwritten signature picture is real or translated by G_A. Generator G_B translates handwritten signatures into standard Song-typeface character pictures; the corresponding discriminator D_B judges whether a standard Song-typeface character picture is real or translated by G_B. Feeding the output of G_A into G_B yields a reconstruction of the standard Song-typeface character picture; feeding the output of G_B into G_A yields a reconstruction of the handwritten signature picture. D_A measures the similarity between the generated and real handwritten signatures, and D_B measures the similarity between the generated and real Song-typeface signatures.
As shown in Fig. 3, the generators G_A and G_B share the same network structure, which removes the fully connected layers of a traditional convolutional neural network and performs feature extraction and generation with convolutions and deconvolutions only. Each generator is a 16-layer convolutional network: the first 8 layers perform convolution and downsampling, and the last 8 layers perform deconvolution and upsampling.
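The 8-down/8-up layout can be checked with simple shape bookkeeping. Assuming each of the 8 downsampling layers halves the spatial size (stride 2) and each deconvolution doubles it — the strides are an assumption, since the patent states only the layer counts — a 256 × 256 input is compressed to 1 × 1 and expanded back:

```python
def feature_sizes(input_size=256, depth=8):
    """Spatial size after each stride-2 convolution, then after each deconvolution."""
    down = [input_size // 2 ** i for i in range(1, depth + 1)]  # 128, 64, ..., 1
    up = [down[-1] * 2 ** i for i in range(1, depth + 1)]       # 2, 4, ..., 256
    return down, up
```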
As shown in Fig. 4, the discriminators D_A and D_B share the same network structure: each is built on VGG16 with the last two fully connected layers removed.
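As a sanity check on the discriminator trunk: the standard VGG16 halves the spatial resolution at each of its five max-pooling stages, so with the last two fully connected layers removed, the remaining convolutional trunk's output size can be sketched as:

```python
def vgg16_trunk_output(input_size=256, pool_stages=5):
    """Spatial size leaving VGG16's convolutional trunk (five 2x2 max-pools)."""
    size = input_size
    for _ in range(pool_stages):  # each max-pool halves the resolution
        size //= 2
    return size
```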
As shown in Fig. 5, the training process comprises:
(1) setting the network parameters: epochs to 30, batch size to 1 and the learning rate to 0.0005;
(2) feeding the processed signature pictures as input into the dual-learning generative adversarial network SIGAN and training it to obtain the trained model. As shown in Fig. 6, the optimization proceeds as follows:
S301: Generator G_A translates standard Song-typeface character pictures into handwritten signature pictures; the corresponding discriminator D_A judges whether a handwritten signature picture is a real picture or one translated by G_A.
S302: Generator G_B translates handwritten signature pictures into standard Song-typeface character pictures; the corresponding discriminator D_B judges whether a standard Song-typeface character picture is real or translated by G_B.
S303: The output of generator G_A is fed to generator G_B, and the result is a reconstruction of the standard Song-typeface character picture; likewise, the output of generator G_B is fed to generator G_A, and the result is a reconstruction of the handwritten signature picture.
S304: Through continuous iteration, the generative adversarial network SIGAN minimizes the reconstruction error, yielding the optimized model.
(3) after 30 epochs, saving the trained model and the D_A loss value.
As shown in Fig. 7, the testing process comprises:
loading the trained model and the questioned signature picture to be verified;
The generative adversarial network SIGAN is trained by minimizing the loss function L(u, v), which consists of two parts, a pixel loss and an adversarial loss:
L(u, v) = αL_pixel(u, v) + βL_adv(u, v)
where α and β are normalized weighting factors, v is the handwritten signature picture, and u is the standard Song-typeface signature picture.
Pixel loss L_pixel(u, v):
where G_A is the generated handwritten signature picture, G_B is the generated Song-typeface signature picture, and θ denotes the generator network parameters.
Adversarial loss L_adv(u, v):
where D_A is the similarity between the generated handwritten signature and the real handwritten signature, and D_B is the similarity between the generated Song-typeface signature and the real Song-typeface signature.
After training, the discriminator's loss value stabilizes; the D_A loss value L_Aadv at this point is taken as the verification threshold. At test time, this threshold is compared with the loss value produced by passing the questioned signature picture through the network:
if loss < L_Aadv, the handwriting is judged genuine;
if loss > L_Aadv, the handwriting is judged forged.
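The thresholded decision rule can be sketched as below. How exactly the "stabilized" D_A loss is read off is not specified, so this sketch assumes averaging the last few recorded values; that averaging window is an assumption of the sketch.

```python
def identification_threshold(d_a_losses, window=5):
    """Assumed: average the last few stabilized D_A loss values as the threshold."""
    tail = d_a_losses[-window:]
    return sum(tail) / len(tail)

def verify(sample_loss, threshold):
    """Genuine iff the questioned signature's network loss is below the threshold."""
    return sample_loss < threshold
```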
Embodiment 1
The picture library used in this embodiment contains 640 signature pictures. The 320 positive samples were written by student Liu Yanjiao, who signed her own name 64 times with each of five pens (gel pen, ballpoint pen, pencil, blue fountain pen, black fountain pen). The 320 negative samples are deliberate imitations of her signature written by four other students in the laboratory, 16 imitations per student per pen type, as shown in Fig. 8. The training, validation and test sets are split in the ratio 4:1:5: the test set contains 320 signature pictures, formed by randomly selecting 160 positive and 160 negative signature pictures, and all remaining pictures form the training and validation sets. For training the adversarial generative network, this embodiment uses only the positive samples, whereas the AlexNet image classifier used in the comparison experiment uses both the positive and negative samples of the training set.
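The 4:1:5 split of the 640-picture library can be sketched as follows; the validation size of 64 follows from the stated ratio, and the random seed is an arbitrary choice of this sketch.

```python
import random

def split_library(positives, negatives, seed=0):
    """4:1:5 train/val/test split: the test set draws 160 positives and 160
    negatives at random; everything else becomes training and validation data."""
    rng = random.Random(seed)
    test = rng.sample(positives, 160) + rng.sample(negatives, 160)
    chosen = set(test)
    rest = [p for p in positives + negatives if p not in chosen]
    return rest[:256], rest[256:], test   # 256 train, 64 val, 320 test
```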
Using the validation set, this embodiment sets the parameters α and β to 0.6 and 0.4 respectively; the other parameter settings of SIGAN and AlexNet are listed in the table below:
Table 1. Network parameter settings
This embodiment uses average accuracy as the index for judging model quality, calculated as follows:
A = (TP + TN) / (TP + TN + FP + FN) (9)
where TP is the number of positive samples predicted to be positive, TN the number of negative samples predicted to be negative, FP the number of negative samples predicted to be positive, and FN the number of positive samples predicted to be negative.
True positive rate: TPR = TP / (TP + FN) (10)
True negative rate: TNR = TN / (TN + FP) (11)
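Equations (9)-(11) translate directly into code:

```python
def metrics(tp, tn, fp, fn):
    """Average accuracy (9), true-positive rate (10), true-negative rate (11)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return accuracy, tpr, tnr
```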
This group of experiments trains with all pen types: 160, 80, 40, 20, 10 and 5 signature pictures were randomly drawn from the training set (with equal numbers per pen type) for training. Fig. 9 compares the average verification accuracy with and without data augmentation. The mixed pen-type training times are shown in Fig. 10; the test time is 0.27 s per image. Table 2 gives the verification results of the model trained on the full 160-picture training set.
Table 2. Test results of the model trained on the full training set (with data augmentation)
From the above experimental results, the following conclusions can be drawn:
1. The model trained on all 160 signature pictures achieves the highest accuracy, 90.0%, but accuracy gradually decreases as the training set shrinks; when the training set falls below 10 pictures (without data augmentation), the average accuracy drops sharply below 80%. This shows that a larger training set helps improve verification accuracy.
2. As Fig. 9 shows, data augmentation effectively improves verification accuracy, and the fewer the original training samples, the more pronounced the effect; with only 5 original training pictures, data augmentation raises the verification accuracy by 17%.
3. As Fig. 10 shows, the SIGAN constructed here outperforms DualGAN in verification performance.
4. As Table 2 shows, gel-pen handwriting pictures are verified with the highest accuracy, reaching 100%, while the two fountain-pen types yield relatively low average accuracy. This is because fountain-pen strokes vary in thickness and sometimes break, whereas a gel pen delivers ink steadily, producing strokes of uniform thickness and the clearest signatures. Examples of misclassified signature pictures are shown in Figs. 11 and 12.
Embodiment 2
This experiment also set up five training sets each containing only a single pen type; the results are shown in Table 3. Training took 79 minutes per group and testing 0.27 s per image.
Table 3. Verification accuracy of models trained on different pen types
1. In terms of overall test performance, training with gel-pen pictures gives the highest average verification accuracy, but it is still below the results obtained when training and testing use the same pen type. When the training and test pen types differ, test accuracy drops to some degree (< 13.5%); training on pencil pictures and testing on blue fountain-pen pictures gives the lowest average accuracy (71.8%).
2. The average verification accuracy of every single-pen-type model is below that of the mixed-pen-type model. In practice, this means that if the pen type of the questioned signature is known, only signatures of that pen type need be used in training; if the pen type of the questioned signature is unknown, signature pictures of as many different pen types as possible should be collected for training.
To verify the advantage of the method, a neural-network classifier based on a deep model was implemented for comparison. The deep model is AlexNet with its output changed to two classes; its last three layers were fine-tuned on all training samples. The results are shown in Table 4; training took 34 minutes and testing 0.23 s per image. Table 4 shows that the AlexNet classifier's verification accuracy is below that of the SIGAN method, and that (fine-tuning) training of the AlexNet deep model requires both positive and negative samples, whereas training the SIGAN method requires only positive samples.
Table 4. Comparison of the verification results of the AlexNet and SIGAN models
Embodiment 3
A human-eye verification experiment was also set up. Its participants were five students who had not taken part in the signature collection; the test set was the same as the computer test set. The experiment was divided into six groups with training sets of 5, 10, 20, 40, 80 and 160 signature pictures respectively. Before testing, the five students were given sufficient time (generally 5 to 10 minutes) to study the training pictures, and were tested once their study was complete. The results are shown in Fig. 13; the average test time was 3.2 s per image.
1. The highest accuracy among the five participants was only 79% and the lowest only 59%, while the highest average accuracy of the SIGAN network exceeds the highest human average by 14%. This shows that, given a sufficiently large training set, SIGAN-based signature verification is more accurate than subjective human judgement, and faster.
2. Increasing the training set size affects human performance less than it affects the network model, but with very few training samples human accuracy exceeds that of the adversarial network, which shows that the adversarial network needs a sufficiently large training set to reach its best performance.
In the above embodiments of the invention, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate, not limit, the technical solution of the invention. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some or all of their technical features replaced by equivalents, and that such modifications or replacements do not take the essence of the corresponding technical solutions outside the scope of the technical solutions of the embodiments of the invention.
Claims (8)
1. A signature handwriting identification method based on a dual-learning generative adversarial network SIGAN, characterized by comprising the following steps:
S1: collecting genuine signature handwriting pictures of the person in advance, preprocessing the collected genuine signature handwriting pictures, and applying data augmentation to the preprocessed signature handwriting pictures to obtain handwritten signature pictures; generating Song-typeface signature pictures from the collected genuine signature handwriting pictures, and splicing them with the handwritten signature pictures to obtain spliced signature pictures;
S2: using the spliced signature pictures as training samples for the adversarial network, wherein the generative adversarial network SIGAN comprises two structurally identical signature handwriting generators G_A and G_B and two structurally identical signature handwriting discriminators D_A and D_B;
S3: training the generative adversarial network SIGAN model by minimizing the loss function L(u, v), and optimizing the SIGAN model according to a minimum-reconstruction-error criterion;
S4: invoking the optimized model to identify the signature handwriting to be identified.
2. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that step S4 further comprises preprocessing the signature handwriting to be identified.
3. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1 or 2, characterized in that the processing of the collected genuine signature handwriting pictures comprises:
a. converting the genuine signature handwriting picture into a binary picture;
b. removing the surplus blank parts at the boundary of the binary picture;
c. padding the trimmed binary picture into a square picture, and then adjusting its resolution to a fixed resolution.
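As an illustrative sketch (not part of the claims), the three preprocessing sub-steps of claim 3 could be implemented as follows; the binarization threshold and the nearest-neighbour resize are assumptions, since the claim fixes neither:

```python
import numpy as np

def preprocess_signature(img, threshold=128, size=256):
    """Sketch of the claim-3 preprocessing: binarize, trim blank
    margins, pad to a square, and resize to a fixed resolution."""
    # a. binarize: ink pixels -> 1, background -> 0 (threshold is an assumption)
    binary = (img < threshold).astype(np.uint8)
    # b. remove the surplus blank border around the strokes
    rows = np.flatnonzero(binary.any(axis=1))
    cols = np.flatnonzero(binary.any(axis=0))
    trimmed = binary[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    # c. pad the shorter side so the picture becomes square
    h, w = trimmed.shape
    side = max(h, w)
    square = np.zeros((side, side), dtype=np.uint8)
    top, left = (side - h) // 2, (side - w) // 2
    square[top:top + h, left:left + w] = trimmed
    # then adjust to the fixed resolution (nearest-neighbour resize)
    idx = np.arange(size) * side // size
    return square[np.ix_(idx, idx)]
```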
4. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that the data augmentation process comprises: cutting each existing signature handwriting picture, in a vertically symmetric manner, into pictures smaller than the fixed resolution, and then readjusting the resolution of the cropped pictures to the fixed resolution of 256 × 256.
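A minimal sketch of the claim-4 augmentation, assuming a crop size of 224 (the claim only requires crops smaller than the fixed resolution) and a nearest-neighbour resize back to 256 × 256:

```python
import numpy as np

def augment(img, crop=224, size=256):
    """Sketch of claim-4 data augmentation: take vertically symmetric
    crops (top-anchored and bottom-anchored), each smaller than the
    fixed resolution, then resize each crop back to size x size."""
    h, w = img.shape
    top = img[:crop, :]          # crop anchored at the top edge
    bottom = img[h - crop:, :]   # symmetric crop anchored at the bottom

    def resize(a):
        # nearest-neighbour resize to (size, size)
        ri = np.arange(size) * a.shape[0] // size
        ci = np.arange(size) * a.shape[1] // size
        return a[np.ix_(ri, ci)]

    return [resize(top), resize(bottom)]
```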
5. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that the generator G_A converts X-domain pictures into Y-domain pictures and the generator G_B converts Y-domain pictures into X-domain pictures, wherein the X-domain pictures are handwritten signature pictures and the Y-domain pictures are standard Song-typeface Chinese-character pictures generated from the handwritten signatures.
6. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that the picture splicing comprises: horizontally splicing the processed signature handwriting picture with the processed standard Song-typeface Chinese-character picture to obtain a spliced signature picture.
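The claim-6 splicing step reduces to a horizontal concatenation of two equally sized pictures; a sketch:

```python
import numpy as np

def splice(handwritten, song):
    """Claim-6 splicing: place the handwritten signature picture and
    the standard Song-typeface picture side by side to form one
    spliced training sample."""
    assert handwritten.shape == song.shape, "pictures must share a resolution"
    return np.concatenate([handwritten, song], axis=1)
```

For two 256 × 256 inputs this yields one 256 × 512 spliced sample, matching the paired layout the training step consumes.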
7. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that the minimized loss function L(u, v) in step S3 is:
L(u, v) = αL_pixel(u, v) + βL_adv(u, v)
wherein α and β are normalized influence weights, v is the handwritten signature picture, and u is the standard Song-typeface signature picture;
the loss function of the generative adversarial network SIGAN model consists of two parts, a pixel loss and an adversarial loss;
the pixel loss is:
wherein G_A is the generated handwritten signature picture, G_B is the generated Song-typeface signature picture, and θ denotes the parameters of the generator networks;
the adversarial loss is:
wherein D_A is the similarity between the generated handwritten signature and the genuine handwritten signature, and D_B is the similarity between the generated Song-typeface signature and the genuine Song-typeface signature;
after training, when the loss value of the discriminator has stabilized, the loss value L_Aadv of D_A is taken as the identification threshold; at test time, this threshold is compared with the loss value loss obtained by passing the signature handwriting picture under test through the network: if loss < L_Aadv, the handwriting is verified as genuine; otherwise it is verified as forged.
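The weighted loss and the threshold decision rule of claim 7 can be sketched as follows. The weights α, β and the L1 form of the pixel loss are assumptions (the claims leave the weights as tunables and the source text does not reproduce the pixel-loss formula):

```python
import numpy as np

ALPHA, BETA = 0.7, 0.3  # illustrative normalized weights; not fixed by the claims

def pixel_loss(generated, target):
    # L1 distance between a generated picture and its target
    # (a common choice, standing in for the patent's exact formula)
    return np.abs(generated - target).mean()

def total_loss(l_pixel, l_adv, alpha=ALPHA, beta=BETA):
    # L(u, v) = alpha * L_pixel(u, v) + beta * L_adv(u, v)
    return alpha * l_pixel + beta * l_adv

def verify(loss_test, threshold_L_Aadv):
    """Claim-7 decision rule: the handwriting is genuine iff the test
    loss falls below the stabilized discriminator loss L_Aadv."""
    return loss_test < threshold_L_Aadv
```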
8. The signature handwriting identification method based on the dual-learning generative adversarial network SIGAN according to claim 1, characterized in that the process of optimizing and training the model comprises:
S301: the generator G_A translates standard Song-typeface Chinese-character pictures into handwritten signature pictures, and the corresponding discriminator D_A discriminates whether a handwritten signature picture is a genuine picture or a picture translated by G_A;
S302: the generator G_B translates handwritten signature pictures into standard Song-typeface Chinese-character pictures, and the corresponding discriminator D_B discriminates whether a standard Song-typeface Chinese-character picture is a genuine picture or a picture translated by G_B;
S303: the result translated by G_A is fed to the generator G_B, and the result obtained is a reconstruction of the standard Song-typeface Chinese-character picture; the result translated by G_B is fed to the generator G_A, and the result obtained is a reconstruction of the handwritten signature picture;
S304: the generative adversarial network SIGAN minimizes the reconstruction error through continuous iteration, obtaining the optimized model.
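The round-trip reconstruction error that step S304 minimizes can be sketched as below, following the claim-8 translation directions (G_A: Song typeface → handwritten, G_B: handwritten → Song typeface). G_A and G_B are plain callables standing in for the generator networks, and the L1 error form is an assumption:

```python
import numpy as np

def reconstruction_error(G_A, G_B, x_batch, y_batch):
    """Dual-learning reconstruction error of claim 8: each picture is
    translated to the other domain and back, and the error between
    the round-trip result and the original is accumulated."""
    # S303 round trip Y -> X -> Y reconstructs Song-typeface pictures
    y_rec = G_B(G_A(y_batch))
    # S303 round trip X -> Y -> X reconstructs handwritten pictures
    x_rec = G_A(G_B(x_batch))
    # S304 minimizes this quantity by iterating the generators
    return np.abs(y_rec - y_batch).mean() + np.abs(x_rec - x_batch).mean()
```

With perfect (identity) generators the round trips are exact and the error is zero, which is the fixed point the iteration drives toward.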
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811076494.6A CN109190579B (en) | 2018-09-14 | 2018-09-14 | Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811076494.6A CN109190579B (en) | 2018-09-14 | 2018-09-14 | Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109190579A true CN109190579A (en) | 2019-01-11 |
CN109190579B CN109190579B (en) | 2021-11-16 |
Family
ID=64911535
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811076494.6A Active CN109190579B (en) | 2018-09-14 | 2018-09-14 | Generation type countermeasure network SIGAN signature handwriting identification method based on dual learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109190579B (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110096977A (en) * | 2019-04-18 | 2019-08-06 | 中金金融认证中心有限公司 | The training method and handwriting verification method, equipment and medium of handwriting verification model |
CN111046760A (en) * | 2019-11-29 | 2020-04-21 | 山东浪潮人工智能研究院有限公司 | Handwriting identification method based on domain confrontation network |
CN111553277A (en) * | 2020-04-28 | 2020-08-18 | 电子科技大学 | Chinese signature identification method and terminal introducing consistency constraint |
CN111833267A (en) * | 2020-06-19 | 2020-10-27 | 杭州电子科技大学 | Dual generation countermeasure network for motion blur restoration and operation method thereof |
CN114155613A (en) * | 2021-10-20 | 2022-03-08 | 杭州电子科技大学 | Offline signature comparison method based on convenient sample acquisition |
CN115281662A (en) * | 2022-09-26 | 2022-11-04 | 北京科技大学 | Intelligent auxiliary diagnosis system for instable chronic ankle joints |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100254578A1 (en) * | 2009-04-06 | 2010-10-07 | Mercedeh Modir Shanechi | Handwriting authentication method, system and computer program |
CN106803082A (en) * | 2017-01-23 | 2017-06-06 | 重庆邮电大学 | A kind of online handwriting recognition methods based on conditional generation confrontation network |
CN107577985A (en) * | 2017-07-18 | 2018-01-12 | 南京邮电大学 | The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation |
2018-09-14: CN application CN201811076494.6A filed in China; granted as CN109190579B (status: Active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100254578A1 (en) * | 2009-04-06 | 2010-10-07 | Mercedeh Modir Shanechi | Handwriting authentication method, system and computer program |
CN106803082A (en) * | 2017-01-23 | 2017-06-06 | 重庆邮电大学 | A kind of online handwriting recognition methods based on conditional generation confrontation network |
CN107577985A (en) * | 2017-07-18 | 2018-01-12 | 南京邮电大学 | The implementation method of the face head portrait cartooning of confrontation network is generated based on circulation |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110096977A (en) * | 2019-04-18 | 2019-08-06 | 中金金融认证中心有限公司 | The training method and handwriting verification method, equipment and medium of handwriting verification model |
CN110096977B (en) * | 2019-04-18 | 2021-05-11 | 中金金融认证中心有限公司 | Training method of handwriting authentication model, handwriting authentication method, device and medium |
CN111046760A (en) * | 2019-11-29 | 2020-04-21 | 山东浪潮人工智能研究院有限公司 | Handwriting identification method based on domain confrontation network |
CN111046760B (en) * | 2019-11-29 | 2023-08-08 | 山东浪潮科学研究院有限公司 | Handwriting identification method based on domain countermeasure network |
CN111553277A (en) * | 2020-04-28 | 2020-08-18 | 电子科技大学 | Chinese signature identification method and terminal introducing consistency constraint |
CN111553277B (en) * | 2020-04-28 | 2022-04-26 | 电子科技大学 | Chinese signature identification method and terminal introducing consistency constraint |
CN111833267A (en) * | 2020-06-19 | 2020-10-27 | 杭州电子科技大学 | Dual generation countermeasure network for motion blur restoration and operation method thereof |
CN114155613A (en) * | 2021-10-20 | 2022-03-08 | 杭州电子科技大学 | Offline signature comparison method based on convenient sample acquisition |
CN114155613B (en) * | 2021-10-20 | 2023-09-15 | 杭州电子科技大学 | Offline signature comparison method based on convenient sample acquisition |
CN115281662A (en) * | 2022-09-26 | 2022-11-04 | 北京科技大学 | Intelligent auxiliary diagnosis system for instable chronic ankle joints |
Also Published As
Publication number | Publication date |
---|---|
CN109190579B (en) | 2021-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109190579A (en) | A kind of handwriting signature identification method of the production confrontation network SIGAN based on paired-associate learning | |
CN107423700B (en) | Method and device for verifying testimony of a witness | |
Turnbull | Social construction research and theory building | |
CN106023220A (en) | Vehicle exterior part image segmentation method based on deep learning | |
CN104239858B (en) | A kind of method and apparatus of face characteristic checking | |
CN107330444A (en) | A kind of image autotext mask method based on generation confrontation network | |
CN106127164A (en) | The pedestrian detection method with convolutional neural networks and device is detected based on significance | |
CN108171103A (en) | Object detection method and device | |
CN108647595B (en) | Vehicle weight identification method based on multi-attribute depth features | |
CN110009057A (en) | A kind of graphical verification code recognition methods based on deep learning | |
CN109977922A (en) | A kind of pedestrian's mask generation method based on generation confrontation network | |
CN103235957B (en) | A kind of online handwriting authentication method and system based on palmar side surface information | |
CN113011357A (en) | Depth fake face video positioning method based on space-time fusion | |
CN107491729B (en) | Handwritten digit recognition method based on cosine similarity activated convolutional neural network | |
Daston | Cloud physiognomy | |
CN109598225A (en) | Sharp attention network, neural network and pedestrian's recognition methods again | |
Nichols | Movies and Methods: Vol. II | |
CN109670559A (en) | Recognition methods, device, equipment and the storage medium of handwritten Chinese character | |
CN110188750A (en) | A kind of natural scene picture character recognition method based on deep learning | |
CN109977832A (en) | A kind of image processing method, device and storage medium | |
CN110458145A (en) | A kind of offline person's handwriting Individual Identification System and method based on two-dimentional behavioral characteristics | |
Cordasco et al. | Gender identification through handwriting: an online approach | |
CN110490153A (en) | A kind of offline person's handwriting Individual Identification System and method based on Three-Dimensional Dynamic feature | |
Widrow | The “rubber-mask” technique-II. pattern storage and recognition | |
Das et al. | ICFHR 2020 competition on short answer assessment and Thai student signature and name components recognition and verification (SASIGCOM 2020) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||