CN106778586B - Off-line handwritten signature identification method and system - Google Patents
Off-line handwritten signature identification method and system
- Publication number
- CN106778586B (application CN201611122474.9A)
- Authority
- CN
- China
- Prior art keywords
- signature
- sample
- image
- features
- training
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/30—Writer recognition; Reading and verifying signatures
- G06V40/33—Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Collating Specific Patterns (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses an off-line handwritten signature identification method and system comprising the following steps: S1, preprocessing the off-line signature samples in an off-line signature sample library; S2, extracting a plurality of features, including moment features, local binary pattern features, gray level co-occurrence matrix features and pulse coupled neural network features, from the preprocessed signature images; S3, training the plurality of features extracted from the off-line signature samples to obtain a trained standard sample library; S4, acquiring a signature to be detected and preprocessing it to obtain a plurality of features of the signature to be detected; and S5, matching the plurality of features of the signature to be detected with the corresponding features of the off-line signature samples in the standard sample library to identify whether the signature to be detected is a genuine signature or a forged signature. The invention can identify the off-line signature under test effectively and accurately.
Description
Technical Field
The invention relates to the field of computer pattern recognition, and in particular to an off-line handwritten signature identification method and system.
Background
With the rapid development of information technology and rising living standards, security faces unprecedented challenges, and real-time, accurate personal identity authentication has become very important. Traditional identity authentication is based on passwords, IC cards and the like and has various defects: a password can be stolen or forgotten, an IC card can be lost or stolen, and so on. A reliable, convenient and easy-to-use personal identification technique is therefore urgently needed to overcome the deficiencies of the conventional methods. Identity authentication based on the handwritten signature can fundamentally remedy these defects.
Compared with other biometric technologies, signature identification offers sufficient dynamic information, difficulty of imitation, high distinguishability and efficient information acquisition, while respecting privacy; it is also very strong in the collectability of signature features, acceptability in terms of bodily harm, and robustness, and therefore has broad application prospects and value. Effective, reliable and fast signature identification thus has important social value and practical significance.
Signature authentication is divided into online and offline. Online signatures provide more dynamic information, which is harder to imitate and therefore easier to authenticate than offline signatures. The crossover (equal) error rate of current online signature systems has dropped below 1%, and practical products are already available abroad. In off-line signature verification, the writer signs on ordinary paper and the signature is then captured with an optical imaging device such as a camera or scanner. Off-line signature authentication places much looser requirements on equipment and environment than the online mode; if its verification accuracy can be further improved, it has even greater application prospects.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a stable and effective off-line handwritten signature identification method and system, in view of the instability of off-line handwritten signature identification in the prior art.
The technical scheme adopted by the invention for solving the technical problems is as follows:
an off-line handwritten signature authentication method is provided, which comprises the following steps:
s1, preprocessing the off-line signature sample in the off-line signature sample library;
s2, extracting a plurality of features including moment features, local binary pattern features, gray level co-occurrence matrix features and pulse coupling neural network features from the preprocessed signature image;
s3, training a plurality of characteristics of the extracted off-line signature sample to obtain a trained standard sample library;
s4, acquiring a signature to be detected, and preprocessing the signature to be detected to obtain a plurality of characteristics of the signature to be detected;
and S5, matching the multiple characteristics of the signature to be detected with the corresponding characteristics of the offline signature sample in the standard sample library, and identifying whether the signature to be detected is a real signature or a fake signature.
In the method of the present invention, step S3 specifically includes:
selecting any one off-line signature sample in the off-line signature sample library as a test sample and the others as training samples, calculating the mean of the training-sample feature vectors and the distance from each training-sample feature vector to this mean, and obtaining the mean and variance of these distances;
calculating the distance between the test-sample feature vector and the training-sample mean, comparing it against the mean and variance of the training-sample distances to compute the degree of similarity between the test sample and the training samples; if the similarity is greater than a preset threshold the test sample is a genuine signature, otherwise a forged signature, and a trained standard sample library is thus obtained;
and counting the false rejection rate and the false acceptance rate over the standard sample library (a sketch of this procedure follows).
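A minimal sketch of this statistical training and evaluation step, assuming each signature has already been reduced to a fixed-length NumPy feature vector; the function names, the plain Euclidean distance to the training mean and the default threshold are illustrative choices, not mandated by the patent text:

```python
import numpy as np

def train_statistics(train_vectors):
    """Mean of the training feature vectors, plus mean and variance of each
    training vector's distance to that mean."""
    train = np.asarray(train_vectors, dtype=float)
    mean_vec = train.mean(axis=0)
    dists = np.linalg.norm(train - mean_vec, axis=1)
    return mean_vec, dists.mean(), dists.var()

def similarity(test_vec, mean_vec, dist_mean, dist_var):
    """Similarity of a test sample to the training samples: 1 when its distance
    to the training mean equals the average training distance, smaller otherwise."""
    d = np.linalg.norm(np.asarray(test_vec, dtype=float) - mean_vec)
    return 1.0 / (1.0 + abs(d - dist_mean) / (np.sqrt(dist_var) + 1e-12))

def evaluate(genuine, forged, mean_vec, dist_mean, dist_var, threshold=0.5):
    """False rejection rate on genuine samples, false acceptance rate on forgeries."""
    frr = np.mean([similarity(v, mean_vec, dist_mean, dist_var) <= threshold for v in genuine])
    far = np.mean([similarity(v, mean_vec, dist_mean, dist_var) > threshold for v in forged])
    return frr, far
```

In a leave-one-out run, each sample of the library is used once as the test sample while the rest form the training set, and the counted FRR/FAR then characterize the standard sample library.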
In the method of the present invention, step S3 specifically includes: establishing classifiers corresponding to the different features of the off-line signature samples, and training them to obtain classifiers that meet the required identification rate;
step S5 specifically includes: and identifying a plurality of characteristics of the signature to be detected through corresponding classifiers, and judging whether the signature to be detected is a real signature according to output results of different classifiers.
In the method of the present invention, step S3 specifically includes: establishing classifiers corresponding to the different features of the off-line signature samples, labeling the corresponding feature vectors as genuine or forged, randomly selecting the feature vectors of part of the samples in the off-line signature sample library for training, using the remaining samples as test samples, and counting the prediction results on the test samples to obtain the identification accuracy.
In the method of the invention, the preprocessing in step S1 includes binarization, boundary cropping, size normalization, tilt correction and gap reduction;
the binarization is specifically: dividing the gray histogram of the image into two parts with the optimal threshold, so that the between-class variance of the two parts is maximized;
the boundary cropping is specifically: projecting the signature image horizontally and vertically, and cutting the boundary according to the projection histograms to obtain a signature image without blank margins;
the size normalization is specifically: padding the upper and lower boundaries so that the signature is centered in the picture, and scaling the picture proportionally;
the tilt correction is specifically: taking the pixel points of the signature image as feature points and, using the relation between the feature points and the baseline in the image, fitting the feature points to the baseline direction by the least squares method; this direction is the tilt direction of the signature;
the gap reduction is specifically: projecting the signature image in the vertical direction to obtain a projection histogram; counting the number of minima in the projection histogram and the distances between them, judging from these distances whether a minimum region is a blank region of the signature image, and if so, cutting the blank region out to obtain the gap-reduced signature image.
In the method, different features are extracted from the results of different preprocessing steps, specifically: pulse coupled neural network features are extracted from the boundary-cropped gray image, texture features comprising local binary pattern features and gray level co-occurrence matrix features are extracted from the normalized gray image, and low-order moment features are extracted from the normalized binary image.
The invention also provides an off-line handwritten signature authentication system, which comprises:
the off-line signature sample library acquisition module is used for acquiring off-line signature samples;
the sample library preprocessing module is used for preprocessing the off-line signature samples in the off-line signature sample library and extracting a plurality of characteristics including moment characteristics, local binary pattern characteristics, gray level co-occurrence matrix characteristics and pulse coupling neural network characteristics from the preprocessed signature images;
the sample library training module is used for training a plurality of characteristics of the extracted off-line signature sample to obtain a trained standard sample library;
the signature processing module to be tested is used for acquiring a signature to be tested and preprocessing the signature to be tested to obtain a plurality of characteristics of the signature to be tested;
and the signature identification module is used for matching a plurality of characteristics of the signature to be detected with corresponding characteristics of the offline signature samples in the standard sample library and identifying whether the signature to be detected is a real signature or a forged signature.
In the system, the sample library training module is specifically used for selecting any one off-line signature sample in the off-line signature sample library as a test sample and the others as training samples, and for calculating the mean of the training-sample feature vectors and the distance from each training-sample feature vector to this mean to obtain the mean and variance of these distances;
the sample library training module is also used for calculating the distance between the test-sample feature vector and the training-sample mean, comparing it against the mean and variance of the training-sample distances to compute the degree of similarity between the test sample and the training samples; if the similarity is greater than a preset threshold the test sample is a genuine signature, otherwise a forged signature, and a trained standard sample library is thus obtained; the module also counts the false rejection rate and the false acceptance rate over the standard sample library.
In the system, the sample library training module is specifically used for establishing classifiers corresponding to the different features of the off-line signature samples and training them to obtain classifiers that meet the required identification rate;
the signature identification module is specifically used for identifying a plurality of characteristics of the signature to be detected through corresponding classifiers and judging whether the signature to be detected is a real signature according to output results of different classifiers.
In the system, the sample library preprocessing module is specifically used for performing binarization, boundary cropping, size normalization, tilt correction and gap reduction on the samples in the sample library;
the binarization is specifically: dividing the gray histogram of the image into two parts with the optimal threshold, so that the between-class variance of the two parts is maximized;
the boundary cropping is specifically: projecting the signature image horizontally and vertically, and cutting the boundary according to the projection histograms to obtain a signature image without blank margins;
the size normalization is specifically: padding the upper and lower boundaries so that the signature is centered in the picture, and scaling the picture proportionally;
the tilt correction is specifically: taking the pixel points of the signature image as feature points and, using the relation between the feature points and the baseline in the image, fitting the feature points to the baseline direction by the least squares method; this direction is the tilt direction of the signature;
the gap reduction is specifically: projecting the signature image in the vertical direction to obtain a projection histogram; counting the number of minima in the projection histogram and the distances between them, judging from these distances whether a minimum region is a blank region of the signature image, and if so, cutting the blank region out to obtain the gap-reduced signature image.
The invention has the following beneficial effects: according to the invention, the signature is preprocessed, so that the influence of some unnecessary external factors is eliminated, a plurality of characteristics including moment characteristics, local binary pattern characteristics, gray level co-occurrence matrix characteristics and pulse coupling neural network characteristics are extracted according to different preprocessing results, and the extraction of the characteristics takes the shape, texture and pseudo-dynamic characteristics into consideration; the off-line signature to be tested can be effectively and accurately identified through the matching of the characteristics.
The invention further uses a simple statistical classifier to evaluate the identification effect of the various identification methods, and classical support vector machine and BP neural network classifiers can also be used to evaluate the identification effect on the signature images. Experiments show that these methods achieve stable and effective off-line handwritten signature identification.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a method for off-line handwritten signature authentication in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of an off-line handwritten signature authentication process;
FIG. 3a is a signature graph after boundary clipping;
FIG. 3b is a signature graph after size normalization;
FIGS. 4a-4c are signature diagrams during projection-based gap reduction;
FIG. 5 is a block diagram of a signature authentication system;
FIG. 6 is a schematic diagram of local binary pattern feature extraction;
fig. 7 is a schematic diagram of gray level co-occurrence matrix feature extraction.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The authentication method for the offline handwritten signature of the embodiment of the invention, as shown in fig. 1, comprises the following steps:
s1, preprocessing the off-line signature samples in the off-line signature sample library; the main purpose of preprocessing is to remove interference and prepare for feature extraction. Preprocessing mainly uses methods such as smoothing and denoising, size and position normalization, binarization, boundary cropping, segmentation, projection-based gap reduction and tilt correction. An off-line signature sample is a signature written with a pen on printing paper and then scanned into the computer in grayscale. Since no good off-line handwritten signature library was otherwise available and collecting data independently would involve too much work, the embodiment of the invention uses the 2011 off-line signature competition library, selecting 10 signers with 200 genuine signatures and 200 forged signatures, and evaluates the corresponding authentication accuracy.
S2, extracting features from the preprocessed signature image. A handwritten signature has considerable randomness and is easily affected by factors such as the environment; features from a single aspect cannot accurately represent the writing style of a signer, so multiple aspects must be considered together to ensure that the extracted features are highly distinguishable, reliable and independent, without an excessively large feature dimension. The shape features include moment features, which describe overall structural characteristics such as the signature outline, text-position tilt and center-of-gravity shift. The texture features include local binary patterns and gray level co-occurrence matrices, which intuitively reflect the visual characteristics of the signature image by describing the gray distribution of pixels and their surrounding spatial neighborhoods. The pseudo-dynamic features extracted with the pulse coupled neural network indirectly represent, through gray-level changes, the variation of dynamic information such as pressure while the signer writes; since the model resembles visual neural imaging, it captures some pseudo-dynamic information of the signature well. By selecting these different feature extraction methods, the invention represents the signer's writing style more completely and accurately.
S3, training a plurality of features of the extracted off-line signature samples to obtain a trained standard sample library;
s4, acquiring the signature to be detected and preprocessing it in the same way as in step S1, then extracting the same features as in step S2 to obtain the plurality of features of the signature to be detected;
and S5, matching the multiple characteristics of the signature to be detected with the corresponding characteristics of the offline signature sample in the standard sample library, and identifying whether the signature to be detected is a real signature or a fake signature.
In step S1, the off-line signature samples in the off-line signature sample library are preprocessed, including binarization, size normalization, tilt correction, projection-based gap reduction and the like, and the relevant features of the image are then extracted. These include moment features and the block distance transform: computing, for every black point in one binarized signature image, the distance to the nearest black point in another signature image and using the sum of these nearest distances as the distance between the two signatures, which is a commonly used authentication algorithm. Local Binary Pattern (LBP), Gray Level Co-occurrence Matrix (GLCM) and Pulse Coupled Neural Network (PCNN) features are also extracted, and the weighted Euclidean distance may be used as the feature distance metric.
Specifically, the preprocessing comprises the following steps (see the sketch after this list):
1) Binarization uses the OTSU algorithm: the gray histogram of the image is divided into two parts by the optimal threshold so that the between-class variance is maximized, i.e. the separability is greatest.
2) Boundary cropping uses the projection method: the signature image is projected horizontally and vertically and the boundary is cut according to the projection histograms, yielding a signature image without blank margins.
3) Size normalization pads the upper and lower boundaries so that the signature is centered in the picture, and then scales the picture proportionally.
4) Tilt correction uses the least squares method: the pixel points of the signature image are taken as feature points and, using the relation between the feature points and the baseline in the image, the feature points are fitted to the baseline direction, which is the tilt direction of the signature.
5) Projection-based gap reduction is mainly used for signatures with obvious gaps between characters: the signature image is projected in the vertical direction and each minimum-value region of the projection histogram is examined to decide whether it should be cut out.
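The first, second and fourth steps can be sketched with OpenCV and NumPy as below, assuming an 8-bit grayscale input; this is an illustrative reading of the OTSU binarization, projection-based cropping and least-squares baseline fit, not the patent's own code:

```python
import cv2
import numpy as np

def binarize_otsu(gray):
    """OTSU binarization: the threshold that maximizes the between-class
    variance of the gray histogram; ink pixels become 1."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return (binary > 0).astype(np.uint8)

def crop_boundary(binary):
    """Horizontal and vertical projections locate the ink extent;
    blank margins outside it are cut away."""
    rows, cols = binary.sum(axis=1), binary.sum(axis=0)
    top = int(np.argmax(rows > 0))
    bottom = len(rows) - int(np.argmax(rows[::-1] > 0))
    left = int(np.argmax(cols > 0))
    right = len(cols) - int(np.argmax(cols[::-1] > 0))
    return binary[top:bottom, left:right]

def estimate_tilt_degrees(binary):
    """Least-squares fit of the ink pixels to a baseline y = a*x + b;
    the slope gives the tilt direction used for correction."""
    ys, xs = np.nonzero(binary)
    a, _ = np.polyfit(xs, ys, 1)
    return float(np.degrees(np.arctan(a)))
```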
In a preferred embodiment of the present invention, the size normalization comprises the following steps (a sketch follows this list):
determining the four boundaries of the signature image and cutting away all blank space outside them;
checking the aspect ratio of the cropped image; if the width is greater than the height, blanks of equal height are padded onto the upper and lower boundaries of the signature image so that the signature is centered in the padded square image;
assuming the padded square has size N x N and the target normalized size is m x m, the square is scaled by a factor of m/N to obtain the normalized signature image, as shown in fig. 3b.
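A sketch of this padding-and-scaling step, with m = 64 used only as an illustrative target size:

```python
import cv2
import numpy as np

def normalize_size(cropped, m=64):
    """Pad the shorter dimension so the signature sits centered in an N x N
    square, then scale the square by m/N to the normalized m x m size."""
    h, w = cropped.shape
    n = max(h, w)
    square = np.zeros((n, n), dtype=cropped.dtype)
    top, left = (n - h) // 2, (n - w) // 2
    square[top:top + h, left:left + w] = cropped
    return cv2.resize(square, (m, m), interpolation=cv2.INTER_AREA)
```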
The projection-based gap reduction comprises the following specific steps (a sketch follows this list):
projecting the signature image in the vertical direction to obtain a projection histogram, as shown in fig. 4a;
counting the number of minima in the projection histogram and the distances between them;
judging from the width of each minimum region whether it is a blank region of the signature image and, if so, cutting the blank region out to obtain the gap-reduced signature image, as shown in figs. 4b and 4c.
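A sketch of the gap reduction, assuming a binary (ink = 1) signature image; the minimum valley width min_gap used to decide whether a projection minimum is a removable blank region is an assumed parameter:

```python
import numpy as np

def reduce_gaps(binary, min_gap=10):
    """Vertical projection of the binary signature; zero-valued valleys wider
    than min_gap columns are treated as blank regions and cut out."""
    profile = binary.sum(axis=0)
    keep = np.ones(binary.shape[1], dtype=bool)
    in_gap, start = False, 0
    for x, value in enumerate(profile):
        if value == 0 and not in_gap:
            in_gap, start = True, x
        elif value > 0 and in_gap:
            in_gap = False
            if x - start >= min_gap:          # wide valley -> blank region between words
                keep[start:x] = False
    return binary[:, keep]
```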
Extracting a plurality of features from the preprocessed signature image specifically comprises: moment feature extraction on the size-normalized binarized image, pulse coupled neural network feature extraction on the boundary-cropped gray image, and gray level co-occurrence matrix and local binary pattern feature extraction on the size-normalized gray image;
the moment features mainly describe character shapes and physical meanings, and eight features such as length-width ratio, character position direction, stretching length, stretching degree, horizontal deviation, vertical deviation, horizontal stretching balance degree, vertical stretching balance degree and the like are extracted to form a moment feature vector.
The local binary pattern features mainly describe the local texture of the image; the statistical histogram of the LBP feature spectrum can be used as the feature vector. The LBP map of the equivalent (uniform) pattern can be computed, and the histogram of occurrences of each value in the LBP map is counted as the feature vector.
The gray level co-occurrence matrix features describe the joint distribution of two pixels in a given spatial relationship and can be viewed as a joint histogram of pixel gray-level pairs. GLCM feature extraction is performed on the size-normalized gray image: the image is transformed into GLCMs in four directions, and the energy, correlation, entropy and contrast of the four matrices are extracted; the sixteen features (four measures in four directions) form the GLCM feature vector.
The pulse coupled neural network features mainly reflect the texture information of the image. The PCNN model is more sensitive to the detail response of dark pixels, and the results of repeated iterations reflect exactly the texture information of the image and some of the implied pseudo-dynamic information. PCNN iterations are performed on the signature gray image, the entropy of the output matrix after each iteration is taken as a feature value, and 30 iterations yield a 30-dimensional entropy time series used as the PCNN feature vector.
In a preferred embodiment of the invention, feature selection further reduces dimensionality by transforming the high-dimensional feature space into a low-dimensional one and removing redundant and irrelevant features. After the lower-dimensional feature vectors are obtained, the distances between feature vectors are computed to estimate similarity and authenticity is judged by a threshold method, or a classifier is trained directly to predict authenticity, and the experimental results are then counted. The feature extraction module comprises a moment feature extraction submodule, a local binary pattern feature extraction submodule, a gray level co-occurrence matrix feature extraction submodule and a pulse coupled neural network feature extraction submodule.
In a preferred embodiment of the present invention, feature extraction is performed by:
moment feature extraction: for a digital image f(x, y) of size M x N, the geometric moment of order (p + q) is defined as m_pq = Σ_{x=1..M} Σ_{y=1..N} x^p · y^q · f(x, y).
The center of gravity of the signature image is obtained from the zero-order and first-order geometric moments; taking the center of gravity as the coordinate origin gives the central moments, which measure how the gray levels of the region are distributed about the gray-level center of gravity. From the second-order moments, the variance of the coordinates of the black points gives the degree of extension in the horizontal and vertical directions; correspondingly, eight features, namely aspect ratio, character position direction, stretching length, degree of extension, horizontal offset, vertical offset, horizontal extension balance and vertical extension balance, are computed as the moment feature vector (an illustrative sketch follows);
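An illustrative sketch of how such an eight-dimensional moment feature vector could be assembled from the binary signature image; the exact formulas behind the eight named quantities are described only verbally in the text, so the mappings in the comments are assumptions:

```python
import numpy as np

def moment_features(binary):
    """Geometric moments m_pq = sum_x sum_y x^p y^q f(x, y) of the binary
    signature image, reduced to eight global shape descriptors."""
    h, w = binary.shape
    ys, xs = np.nonzero(binary)
    m00 = len(xs)                              # zero-order moment (ink pixel count)
    cx, cy = xs.mean(), ys.mean()              # center of gravity from first-order moments
    var_x, var_y = xs.var(), ys.var()          # second-order spread (extension)
    return np.array([
        w / h,                                 # assumed: aspect ratio
        np.arctan2(cy - h / 2, cx - w / 2),    # assumed: character position direction
        np.sqrt(var_x + var_y),                # assumed: stretching length
        m00 / (h * w),                         # assumed: degree of extension (ink density)
        (cx - w / 2) / w,                      # assumed: horizontal offset of the centroid
        (cy - h / 2) / h,                      # assumed: vertical offset of the centroid
        var_x / (var_x + var_y + 1e-12),       # assumed: horizontal extension balance
        var_y / (var_x + var_y + 1e-12),       # assumed: vertical extension balance
    ])
```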
the specific steps of local binary pattern feature extraction are as follows:
first, the normalized gray image is divided into small 3 x 3 regions;
for the pixel at the center of each region, its gray value is compared with those of the 8 neighbouring pixels: where a neighbour's value is greater than the central pixel's value the position is marked 1, otherwise 0. The 8 points in the 3 x 3 neighbourhood thus produce an 8-bit binary number, which is the LBP value of the central pixel of the window, as shown in fig. 6;
too many binary patterns are unfavourable for texture extraction, classification and information access, so the equivalent (uniform) pattern LBP is used: when the circular binary number corresponding to an LBP code contains at most two transitions from 0 to 1 or from 1 to 0, it is called an equivalent pattern class, and general-pattern LBP codes can be mapped to equivalent-pattern LBP;
a histogram of each small region is then computed, i.e. the frequency of occurrence of each code, and the histogram is normalized;
finally, the statistical histograms of all regions are concatenated into one feature vector, which is the LBP texture feature vector of the whole image (a sketch follows);
gray level co-occurrence matrix feature extraction: the gray level co-occurrence matrix reflects comprehensive information about the direction, adjacent interval and amplitude of variation of the image gray levels, and is the basis for analysing the local patterns of an image and their arrangement. Let f(x, y) be a two-dimensional digital image of size M x N with Ng gray levels; the gray level co-occurrence matrix satisfying a given spatial relationship is
P(i, j) = #{ ((x1, y1), (x2, y2)) ∈ (M × N) × (M × N) | f(x1, y1) = i, f(x2, y2) = j },
where #(X) denotes the number of elements in the set X. P is clearly an Ng x Ng matrix; if the distance between (x1, y1) and (x2, y2) is d and their angle with the horizontal coordinate axis is θ, gray level co-occurrence matrices P(i, j, d, θ) for various distances and angles are obtained.
In the invention, the image gray levels are quantized into 8 levels, the distance d is 1 and the angles are 0, 45, 90 and 135 degrees; GLCM transformation in the 4 directions is performed on the signature image (a schematic of the GLCM transformation is shown in fig. 7). To describe the texture more intuitively, parameters reflecting the state of the matrix, namely energy, correlation, contrast and entropy, are derived from the co-occurrence matrices; these four parameters, computed from the co-occurrence matrices in the four directions, serve as the GLCM feature vector (a sketch follows);
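A sketch with scikit-image's graycomatrix/graycoprops, quantizing to 8 gray levels at distance 1 in the four stated directions; entropy is not provided by graycoprops, so it is computed directly from the normalized matrices:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray, levels=8):
    """GLCM in four directions (0, 45, 90, 135 degrees) at distance 1,
    reduced to energy, contrast, correlation and entropy: 16 values."""
    q = (gray.astype(np.float64) * levels / 256).astype(np.uint8)  # quantize to 8 gray levels
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[1], angles=angles, levels=levels,
                        symmetric=True, normed=True)
    feats = []
    for prop in ("energy", "contrast", "correlation"):
        feats.extend(graycoprops(glcm, prop)[0])                   # one value per angle
    p = glcm[:, :, 0, :]                                           # (levels, levels, n_angles)
    entropy = -np.sum(p * np.log2(p + 1e-12), axis=(0, 1))
    feats.extend(entropy)
    return np.array(feats)
```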
pulse coupled neural network feature extraction: the entropy time series of a PCNN is used to extract features from the signature image. With specific model parameters set, the pixel values of the signature gray image serve as the external excitation of the neurons; the PCNN model is iterated, an output matrix is obtained after each iteration, and its entropy, computed by the Shannon entropy formula, is taken as a feature value of the image (a simplified sketch follows).
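A simplified PCNN sketch producing the 30-dimensional entropy sequence; the linking kernel, the coupling coefficient beta and the threshold decay parameters are assumed values, since the text only says that specific model parameters are set:

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_entropy_sequence(gray, iterations=30,
                          beta=0.2, alpha_theta=0.2, v_theta=20.0):
    """Simplified pulse-coupled neural network: the gray image drives the
    feeding input, neighbouring pulses drive the linking input, and the
    Shannon entropy of each iteration's binary firing map is one feature."""
    S = gray.astype(np.float64) / 255.0
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    Y = np.zeros_like(S)
    theta = np.ones_like(S)
    entropies = []
    for _ in range(iterations):
        L = convolve2d(Y, kernel, mode="same")               # linking from neighbouring pulses
        U = S * (1.0 + beta * L)                             # internal activity
        Y = (U > theta).astype(np.float64)                   # fire where activity exceeds threshold
        theta = np.exp(-alpha_theta) * theta + v_theta * Y   # threshold decay and post-fire boost
        p1 = Y.mean()
        p0 = 1.0 - p1
        h = 0.0
        if p1 > 0:
            h -= p1 * np.log2(p1)
        if p0 > 0:
            h -= p0 * np.log2(p0)
        entropies.append(h)
    return np.array(entropies)
```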
The feature matching in step S5 may be performed by any of the following methods: the first uses the weighted Euclidean distance, computing distances between feature vectors and evaluating the signature discrimination effect with a threshold method; the second trains the signature feature vectors directly with a support vector machine classifier and then predicts, the statistics of the predictions giving the corresponding discrimination rate; the third trains the signature feature vectors directly with a BP neural network classifier and then predicts, the statistics of the predictions giving the corresponding discrimination rate.
Weighted Euclidean distance: enough samples must be trained to obtain a suitable approximate threshold. The system first trains on signatures to obtain a template value; when a signature needs to be identified, the weighted Euclidean distance between its feature vector and the template value is computed directly. A large number of signatures are used for training to determine the threshold; when the distance value of the sample under test is greater than the threshold it is judged forged, otherwise genuine. The distance is computed as
dist = sqrt( Σ_{i=1..n} (F_i − μ_i)² / σ_i² ),
where dist is the distance value of the sample under test, F_i is the i-th feature value of the sample under test, μ_i is the mean of that feature over the training samples, σ_i is its standard deviation, and n is the number of features.
If a large number of samples is not available in the experiment, the threshold method only allows a rough assessment of the discrimination effect of each signature identification method (a sketch of the weighted-distance verification follows).
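A sketch of the weighted Euclidean distance verification above; mu and sigma are the per-feature mean and standard deviation obtained from the training signatures, and the threshold is assumed to be tuned on training data:

```python
import numpy as np

def weighted_euclidean_distance(f, mu, sigma):
    """dist = sqrt(sum_i (F_i - mu_i)^2 / sigma_i^2): each feature deviation
    is weighted by the inverse standard deviation of the training samples."""
    f, mu, sigma = (np.asarray(a, dtype=float) for a in (f, mu, sigma))
    return float(np.sqrt(np.sum((f - mu) ** 2 / (sigma ** 2 + 1e-12))))

def verify_by_threshold(f, mu, sigma, threshold):
    """Genuine when the weighted distance to the trained template is below the threshold."""
    return weighted_euclidean_distance(f, mu, sigma) < threshold
```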
Support vector machine classifier: this is the linear classifier with the largest margin in the feature space; its learning strategy is to maximize the margin, which suits the binary classification problem of signature identification. The system first preprocesses all samples and extracts their features, then labels the extracted features as genuine or forged and stores them; part of the sample features are selected as the training set and the rest as the test set in the feature vector space of all samples, and finally the accuracy of the classifier's predictions on the test samples is counted to realize signature identification. The most important design choice for the support vector machine is the kernel function and its parameters, which can be tuned experimentally to obtain the best discrimination effect (a sketch follows).
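A sketch with scikit-learn, assuming labelled feature vectors (genuine = 1, forged = 0); the RBF kernel and its C/gamma values are placeholders to be tuned experimentally, as the text notes:

```python
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def svm_verification(features, labels, test_size=0.3, seed=0):
    """Train an RBF-kernel SVM on labelled feature vectors and report the
    accuracy on the held-out test split."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=test_size, random_state=seed, stratify=labels)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```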
BP neural network classifier: the feature vectors of the signature images labelled genuine or forged are normalized, part of the data is selected as training data and the rest as test data, and the identification accuracy is then counted. The BP neural network classifier is used much like the support vector machine: the feature vectors must be labelled, part of the samples are selected for training, the feature vectors of all samples must be normalized, and test prediction and accuracy statistics follow. The key issue for the BP neural network classifier is the design of the network, i.e. its input, hidden and output layers, which is mainly set with reference to statistics from a large number of experiments by relevant experts (a sketch follows).
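A sketch of the BP (multi-layer perceptron) classifier with scikit-learn; the single hidden layer of 32 units is an assumed size, since the text says the layer design is set from experimental statistics:

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

def bp_network_verification(features, labels, test_size=0.3, seed=0):
    """Normalize the labelled feature vectors, train a small BP network,
    and report the accuracy on the held-out test split."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=test_size, random_state=seed, stratify=labels)
    clf = make_pipeline(MinMaxScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=seed))
    clf.fit(X_train, y_train)
    return clf, clf.score(X_test, y_test)
```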
Experimental results show that the above signature identification methods can be used for off-line signature identification and that the identification results are objective and stable. To further improve the identification effect, one embodiment of the invention divides the signature image into blocks and selects the more important parts for fusion before discrimination.
The signature image is blocked by a simple uniform method: the width and height of the signature are counted and the normalized signature image is divided proportionally into 4 x 4 parts (a sketch follows).
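A sketch of the uniform 4 x 4 blocking; block boundaries are placed proportionally along the measured width and height:

```python
import numpy as np

def split_blocks(image, grid=(4, 4)):
    """Uniformly split the normalized signature image into grid blocks,
    cutting proportionally along its width and height."""
    gh, gw = grid
    h, w = image.shape[:2]
    rows = np.linspace(0, h, gh + 1).astype(int)
    cols = np.linspace(0, w, gw + 1).astype(int)
    return [image[rows[i]:rows[i + 1], cols[j]:cols[j + 1]]
            for i in range(gh) for j in range(gw)]
```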
The structure of the off-line handwritten signature identification system is shown in fig. 5. The existing signature samples are preprocessed, roughly by binarization, boundary cropping and size normalization, to obtain a normalized binary image, a normalized gray image and a boundary-cropped gray image. Different features are extracted from the different preprocessing results: PCNN features from the boundary-cropped gray image, texture features (LBP and GLCM) from the normalized gray image, and low-order moment features from the normalized binary image. After the feature vectors are obtained through feature extraction, classification and identification are carried out using a support vector machine classifier, a BP neural network classifier and a simple statistical classifier respectively.
The off-line handwritten signature identification system of the embodiment of the invention comprises:
the off-line signature sample library acquisition module is used for acquiring off-line signature samples;
the sample library preprocessing module is used for preprocessing the off-line signature samples in the off-line signature sample library and extracting a plurality of characteristics including moment characteristics, local binary pattern characteristics, gray level co-occurrence matrix characteristics and pulse coupling neural network characteristics from the preprocessed signature images;
the sample library training module is used for training a plurality of characteristics of the extracted off-line signature sample to obtain a trained standard sample library;
the signature processing module to be tested is used for acquiring a signature to be tested and preprocessing the signature to be tested to obtain a plurality of characteristics of the signature to be tested;
and the signature identification module is used for matching a plurality of characteristics of the signature to be detected with corresponding characteristics of the offline signature samples in the standard sample library and identifying whether the signature to be detected is a real signature or a forged signature.
The sample library training module is specifically used for selecting any one off-line signature sample in the off-line signature sample library as a test sample and the others as training samples, calculating the mean of the training-sample feature vectors and the distance from each training-sample feature vector to this mean, and obtaining the mean and variance of these distances;
the sample library training module is also used for calculating the distance between the test-sample feature vector and the training-sample mean, comparing it against the mean and variance of the training-sample distances to compute the degree of similarity between the test sample and the training samples; if the similarity is greater than a preset threshold the test sample is a genuine signature, otherwise a forged signature, and a trained standard sample library is thus obtained; the module also counts the false rejection rate and the false acceptance rate over the standard sample library.
The sample library training module is specifically used for establishing classifiers corresponding to the different features of the off-line signature samples and training them to obtain classifiers that meet the required identification rate;
the signature identification module is specifically used for identifying a plurality of characteristics of the signature to be detected through corresponding classifiers and judging whether the signature to be detected is a real signature according to output results of different classifiers.
The sample library preprocessing module is specifically used for performing binarization, boundary cropping, size normalization, tilt correction and gap reduction on the samples in the sample library;
the binarization is specifically: dividing the gray histogram of the image into two parts with the optimal threshold, so that the between-class variance of the two parts is maximized;
the boundary cropping is specifically: projecting the signature image horizontally and vertically, and cutting the boundary according to the projection histograms to obtain a signature image without blank margins;
the size normalization is specifically: padding the upper and lower boundaries so that the signature is centered in the picture, and scaling the picture proportionally;
the tilt correction is specifically: taking the pixel points of the signature image as feature points and, using the relation between the feature points and the baseline in the image, fitting the feature points to the baseline direction by the least squares method; this direction is the tilt direction of the signature;
the gap reduction is specifically: projecting the signature image in the vertical direction to obtain a projection histogram; counting the number of minima in the projection histogram and the distances between them, judging from these distances whether a minimum region is a blank region of the signature image, and if so, cutting the blank region out to obtain the gap-reduced signature image.
In conclusion, the invention crops the boundary and normalizes the size of the signature during preprocessing, eliminating the influence of unnecessary external factors. During feature extraction, low-order moment features (shape features) are extracted from the normalized binary image, gray level co-occurrence matrix and local binary pattern features from the normalized gray image, and pulse coupled neural network features from the boundary-cropped gray image; shape, texture and pseudo-dynamic characteristics are all taken into account. During matching, a simple statistical classifier is used to evaluate the identification effect of the various identification methods, and a classical support vector machine and a BP neural network classifier are used to evaluate the identification effect on the signature images. Experiments show that these methods achieve stable and effective off-line handwritten signature identification.
It will be understood that modifications and variations can be made by persons skilled in the art in light of the above teachings and all such modifications and variations are intended to be included within the scope of the invention as defined in the appended claims.
Claims (8)
1. An off-line handwritten signature authentication method is characterized by comprising the following steps:
s1, preprocessing the off-line signature samples in the off-line signature sample library, wherein the preprocessing comprises binarization, boundary cropping, size normalization, tilt correction and gap reduction; dividing the normalized signature image into 4 x 4 parts, specifically by counting the width and height of the signature and segmenting proportionally;
s2, extracting a plurality of features from the preprocessed signature image, respectively extracting the shape, texture and pseudo-dynamic features of the signature; the shape features comprise moment features, used to describe overall structural characteristics including the signature outline, word-position inclination and center-of-gravity shift; the texture features comprise local binary pattern features and gray level co-occurrence matrix features, used to reflect the visual characteristics of the signature image intuitively and to describe the signature image through the gray distribution of pixels and their surrounding spatial neighborhoods; the pseudo-dynamic features comprise pulse coupled neural network features, used to represent indirectly, through gray-level changes, the variation of dynamic information including pressure while the signer writes the signature; specifically, the pulse coupled neural network features are extracted from the boundary-cropped gray image, the local binary pattern features and the gray level co-occurrence matrix features are extracted from the normalized gray image, and the low-order moment features are extracted from the normalized binary image;
s3, training a plurality of characteristics of the extracted off-line signature sample to obtain a trained standard sample library;
s4, acquiring a signature to be detected, and preprocessing the signature to be detected to obtain a plurality of characteristics of the signature to be detected;
and S5, matching the multiple characteristics of the signature to be detected with the corresponding characteristics of the offline signature sample in the standard sample library, and identifying whether the signature to be detected is a real signature or a fake signature.
2. The method according to claim 1, wherein step S3 is specifically:
selecting any one off-line signature sample in the off-line signature sample library as a test sample and the others as training samples, calculating the mean of the training-sample feature vectors and the distance from each training-sample feature vector to this mean, and obtaining the mean and variance of these distances;
calculating the distance between the test-sample feature vector and the training-sample mean, comparing it against the mean and variance of the training-sample distances to compute the degree of similarity between the test sample and the training samples; if the similarity is greater than a preset threshold the test sample is a genuine signature, otherwise a forged signature, and a trained standard sample library is thus obtained;
and counting the false rejection rate and the false acceptance rate over the standard sample library.
3. The method according to claim 1, wherein step S3 is specifically: establishing classifiers corresponding to the different features of the off-line signature samples, and training them to obtain classifiers that meet the required identification rate;
step S5 specifically includes: and identifying a plurality of characteristics of the signature to be detected through corresponding classifiers, and judging whether the signature to be detected is a real signature according to output results of different classifiers.
4. The method according to claim 3, wherein step S3 is specifically: establishing classifiers corresponding to different characteristics of the off-line signature samples, marking corresponding characteristic vectors according to the true and false samples, randomly selecting the characteristic vectors of part of samples in the off-line signature sample library to train, taking the rest of samples as test samples, and counting the prediction results of the test samples to obtain the identification accuracy.
5. The method according to claim 1, wherein the preprocessing in step S1 comprises binarization, boundary cropping, size normalization, tilt correction and gap reduction;
the binarization is specifically: dividing the gray histogram of the image into two parts with the optimal threshold, so that the between-class variance of the two parts is maximized;
the boundary cropping is specifically: projecting the signature image horizontally and vertically, and cutting the boundary according to the projection histograms to obtain a signature image without blank margins;
the size normalization is specifically: padding the upper and lower boundaries so that the signature is centered in the picture, and scaling the picture proportionally;
the tilt correction is specifically: taking the pixel points of the signature image as feature points and, using the relation between the feature points and the baseline in the image, fitting the feature points to the baseline direction by the least squares method; this direction is the tilt direction of the signature;
the gap reduction is specifically: projecting the signature image in the vertical direction to obtain a projection histogram; counting the number of minima in the projection histogram and the distances between them, judging from these distances whether a minimum region is a blank region of the signature image, and if so, cutting the blank region out to obtain the gap-reduced signature image.
6. An off-line handwritten signature authentication system, comprising:
the off-line signature sample library acquisition module is used for acquiring off-line signature samples;
the sample library preprocessing module is used for preprocessing the off-line signature samples in the off-line signature sample library and extracting a plurality of features including moment features, local binary pattern features, gray level co-occurrence matrix features and pulse coupled neural network features from the preprocessed signature images; the preprocessing comprises binarization, boundary cropping, size normalization, tilt correction and gap reduction; the normalized signature image is divided into 4 x 4 parts, specifically by counting the width and height of the signature and segmenting proportionally; a plurality of features are extracted from the preprocessed signature image, respectively the shape, texture and pseudo-dynamic features of the signature; the shape features comprise moment features, used to describe overall structural characteristics including the signature outline, word-position inclination and center-of-gravity shift; the texture features comprise local binary pattern features and gray level co-occurrence matrix features, used to reflect the visual characteristics of the signature image intuitively and to describe the signature image through the gray distribution of pixels and their surrounding spatial neighborhoods; the pseudo-dynamic features comprise pulse coupled neural network features, used to represent indirectly, through gray-level changes, the variation of dynamic information including pressure while the signer writes the signature; specifically, the pulse coupled neural network features are extracted from the boundary-cropped gray image, the local binary pattern features and the gray level co-occurrence matrix features are extracted from the normalized gray image, and the low-order moment features are extracted from the normalized binary image;
the sample library training module is used for training a plurality of characteristics of the extracted off-line signature sample to obtain a trained standard sample library;
the signature processing module to be tested is used for acquiring a signature to be tested and preprocessing the signature to be tested to obtain a plurality of characteristics of the signature to be tested;
and the signature identification module is used for matching a plurality of characteristics of the signature to be detected with corresponding characteristics of the offline signature samples in the standard sample library and identifying whether the signature to be detected is a real signature or a forged signature.
7. The system according to claim 6, wherein the sample library training module is specifically configured to select any one off-line signature sample in the off-line signature sample library as a test sample and the others as training samples, calculate the mean of the training-sample feature vectors and the distance from each training-sample feature vector to this mean, and obtain the mean and variance of these distances;
the sample library training module is further configured to calculate the distance between the test-sample feature vector and the training-sample mean, compare it against the mean and variance of the training-sample distances to compute the degree of similarity between the test sample and the training samples, judge the test sample a genuine signature if the similarity is greater than a preset threshold and a forged signature otherwise, and so obtain a trained standard sample library; and to count the false rejection rate and the false acceptance rate over the standard sample library.
8. The system of claim 6, wherein the sample library training module is specifically configured to establish classifiers corresponding to the different features of the off-line signature samples and to train them to obtain classifiers that meet the required identification rate;
the signature identification module is specifically used for identifying a plurality of characteristics of the signature to be detected through corresponding classifiers and judging whether the signature to be detected is a real signature according to output results of different classifiers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611122474.9A CN106778586B (en) | 2016-12-08 | 2016-12-08 | Off-line handwritten signature identification method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611122474.9A CN106778586B (en) | 2016-12-08 | 2016-12-08 | Off-line handwritten signature identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106778586A CN106778586A (en) | 2017-05-31 |
CN106778586B true CN106778586B (en) | 2020-11-17 |
Family
ID=58881715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611122474.9A Expired - Fee Related CN106778586B (en) | 2016-12-08 | 2016-12-08 | Off-line handwritten signature identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106778586B (en) |
Families Citing this family (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107944336A (en) * | 2017-10-11 | 2018-04-20 | 中国科学院自动化研究所 | Handwriting signature verification system based on cloud computing |
CN108921006B (en) * | 2018-05-03 | 2020-08-04 | 西北大学 | Method for establishing handwritten signature image authenticity identification model and authenticity identification method |
CN108960055B (en) * | 2018-05-30 | 2021-06-08 | 广西大学 | Lane line detection method based on local line segment mode characteristics |
CN108921209A (en) * | 2018-06-21 | 2018-11-30 | 杭州骑轻尘信息技术有限公司 | Image identification method, device and electronic equipment |
CN108960210A (en) * | 2018-08-10 | 2018-12-07 | 武汉优品楚鼎科技有限公司 | It is a kind of to grind the method, system and device for reporting board-like identification and segmentation |
CN109344856B (en) * | 2018-08-10 | 2022-04-08 | 华南理工大学 | Offline signature identification method based on multilayer discriminant feature learning |
CN109409254A (en) * | 2018-10-10 | 2019-03-01 | 成都优易数据有限公司 | A kind of electronic contract handwritten signature identification method based on twin neural network |
CN109657498B (en) * | 2018-12-28 | 2021-09-24 | 广西师范大学 | Differential privacy protection method for top-k symbiotic mode mining in multiple streams |
CN109817348B (en) * | 2019-01-18 | 2020-04-14 | 郑静 | Mobile medical signature management method and related product |
CN110096977B (en) * | 2019-04-18 | 2021-05-11 | 中金金融认证中心有限公司 | Training method of handwriting authentication model, handwriting authentication method, device and medium |
CN110222666B (en) * | 2019-06-14 | 2022-09-13 | 河海大学常州校区 | Signature authentication method and system |
CN110245615B (en) * | 2019-06-17 | 2022-09-13 | 河海大学常州校区 | Signature authentication method, system and storage medium |
CN110263740A (en) * | 2019-06-26 | 2019-09-20 | 四川新网银行股份有限公司 | Different type block letter document dubbing method based on OCR technique |
CN110619274A (en) * | 2019-08-14 | 2019-12-27 | 深圳壹账通智能科技有限公司 | Identity verification method and device based on seal and signature and computer equipment |
CN112395919B (en) * | 2019-08-16 | 2022-09-13 | 河海大学常州校区 | Off-line signature authentication method and system based on multi-feature metric learning |
CN111178203B (en) * | 2019-12-20 | 2021-01-29 | 江苏常熟农村商业银行股份有限公司 | Signature verification method and device, computer equipment and storage medium |
CN111275070B (en) * | 2019-12-26 | 2023-11-14 | 厦门商集网络科技有限责任公司 | Signature verification method and device based on local feature matching |
CN111209567B (en) * | 2019-12-30 | 2022-05-03 | 北京邮电大学 | Method and device for judging perceptibility of improving robustness of detection model |
CN111178290A (en) * | 2019-12-31 | 2020-05-19 | 上海眼控科技股份有限公司 | Signature verification method and device |
ES2845282B2 (en) * | 2020-01-24 | 2023-02-28 | Univ Madrid Autonoma | METHOD AND SYSTEM OF VERIFICATION OF SIGNATURES AND DYNAMIC HANDWRITINGS THROUGH DEEP LEARNING |
CN113449543B (en) * | 2020-03-24 | 2022-09-27 | 百度在线网络技术(北京)有限公司 | Video detection method, device, equipment and storage medium |
TWI777188B (en) * | 2020-07-07 | 2022-09-11 | 新光人壽保險股份有限公司 | Contract signature authentication method and device |
CN111753809A (en) * | 2020-07-10 | 2020-10-09 | 上海眼控科技股份有限公司 | Method and equipment for correcting handwritten signature |
CN112036323B (en) * | 2020-09-01 | 2024-02-27 | 中国银行股份有限公司 | Signature handwriting authentication method, client and server |
CN112115921B (en) * | 2020-09-30 | 2024-08-06 | 北京百度网讯科技有限公司 | Authenticity identification method and device and electronic equipment |
CN112836636A (en) * | 2021-02-02 | 2021-05-25 | 北京惠朗时代科技有限公司 | Method and device for identifying authenticity of signature image |
CN113095156B (en) * | 2021-03-23 | 2022-08-16 | 西安深信科创信息技术有限公司 | Double-current network signature identification method and device based on inverse gray scale mode |
CN113269136B (en) * | 2021-06-17 | 2023-11-21 | 南京信息工程大学 | Off-line signature verification method based on triplet loss |
CN113313092B (en) * | 2021-07-29 | 2021-10-29 | 太平金融科技服务(上海)有限公司深圳分公司 | Handwritten signature recognition method, and claims settlement automation processing method, device and equipment |
CN113705560A (en) * | 2021-09-01 | 2021-11-26 | 平安医疗健康管理股份有限公司 | Data extraction method, device and equipment based on image recognition and storage medium |
CN114155613B (en) * | 2021-10-20 | 2023-09-15 | 杭州电子科技大学 | Offline signature comparison method based on convenient sample acquisition |
CN113920589B (en) * | 2021-10-28 | 2024-09-27 | 平安银行股份有限公司 | Signature recognition method, device, equipment and medium based on artificial intelligence |
CN114186205A (en) * | 2021-11-09 | 2022-03-15 | 海南火链科技有限公司 | Signature identification method and device based on block chain and computer equipment |
CN115880304B (en) * | 2023-03-08 | 2023-05-09 | 曲阜市巨力铁路轨道工程股份有限公司 | Railway sleeper defect identification method based on complex scenes |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1477580A (en) * | 2003-06-12 | 2004-02-25 | 上海交通大学 | Off-line Chinese signature identification method |
Non-Patent Citations (3)
Title |
---|
A target recognition method based on feature-level data fusion; Chen Wenjie et al.; Proceedings of the 4th World Congress on Intelligent Control and Automation; 2002-06-30; pp. 2094-2098 * |
Weighted DTW method and its application in handwritten signature verification; Liu Lei; China Master's Theses Full-text Database, Information Science and Technology; 2011-07-15 (No. 7); pp. 26-27 * |
Off-line handwritten signature verification based on structural features; Li Yating et al.; Sciencepaper Online; 2016-03-30; pp. 1-9 * |
Also Published As
Publication number | Publication date |
---|---|
CN106778586A (en) | 2017-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106778586B (en) | Off-line handwritten signature identification method and system | |
CN105956582B (en) | A face identification system based on three-dimensional data |
US10049262B2 (en) | Method and system for extracting characteristic of three-dimensional face image | |
Pansare et al. | Handwritten signature verification using neural network | |
CN101142584B (en) | Method for facial features detection | |
CN103093215B (en) | Human-eye positioning method and device | |
Wu et al. | A secure palm vein recognition system | |
US20190370536A1 (en) | Gait recognition system to identify walking subject | |
EP2434431A1 (en) | Method and device for classifying image | |
CN101576953A (en) | Classification method and device of human body posture | |
CN108960088A (en) | Facial liveness feature detection and recognition method for specific environments
CN1912889A (en) | Deformed fingerprint identification method based on local triangle structure characteristic collection | |
CN110766016A (en) | Code spraying character recognition method based on probabilistic neural network | |
Ferrer et al. | Signature verification using local directional pattern (LDP) | |
CN110222660B (en) | Signature authentication method and system based on dynamic and static feature fusion | |
CN110188646B (en) | Human ear identification method based on fusion of gradient direction histogram and local binary pattern | |
Gao et al. | On Designing a SwinIris Transformer Based Iris Recognition System | |
Gyimah et al. | An improved geo-textural based feature extraction vector for offline signature verification | |
Ribarić et al. | Personal recognition based on the Gabor features of colour palmprint images | |
Houtinezhad et al. | Off-line signature verification system using features linear mapping in the candidate points | |
KR101473991B1 (en) | Method and apparatus for detecting face | |
Szczepanik et al. | Data management for fingerprint recognition algorithm based on characteristic points’ groups | |
CN112395919B (en) | Off-line signature authentication method and system based on multi-feature metric learning | |
CN112183156B (en) | Living body detection method and equipment | |
Palanikumar et al. | Advanced palmprint recognition using unsharp masking and histogram equalization |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20201117; Termination date: 20211208 |