CN114547575A - Electronic signature verification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN114547575A
CN114547575A (application number CN202210171442.7A)
Authority
CN
China
Prior art keywords
signature
image
identity
verified
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210171442.7A
Other languages
Chinese (zh)
Inventor
王正松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202210171442.7A
Publication of CN114547575A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/30 Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F 21/31 User authentication
    • G06F 21/32 User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/64 Protecting data integrity, e.g. using checksums, certificates or signatures

Abstract

The application discloses an electronic signature verification method and device, computer equipment and a storage medium, belonging to the technical field of artificial intelligence. The identity of the person whose signature is to be verified is recognized during a real-time video call. After the identity verification passes, the person's identity document is retrieved from a document database and the identity information on it is obtained through OCR (optical character recognition). During the same real-time video call, a handwritten signature image is recognized by a handwritten signature recognition model to obtain the identity information on the handwritten signature image, and whether the electronic signature verification passes is determined by comparing the identity information on the identity document with the identity information on the handwritten signature image. In addition, the present application also relates to blockchain technology: the video image data can be stored in a blockchain. The method and device complete handwritten signature verification while preserving the continuity of the video connection, improve processing timeliness, and prevent malicious impersonation of another person's signature.

Description

Electronic signature verification method and device, computer equipment and storage medium
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to an electronic signature verification method and device, computer equipment and a storage medium.
Background
When an insurance company investigates a traffic accident, a survey document needs to be prepared and then signed and confirmed by the parties. In an on-site survey, the survey personnel verify the identity of the party and confirm the authenticity of the signing action and the validity of the signature on the spot. In recent years, remote video connection technology has gradually been applied to business scenarios such as insurance accident handling; however, current electronic signature technology cannot achieve the effect of an on-site survey.
The current common practice for electronic signatures is to send a signature link to the party by short message; the party opens the H5 link from the short message to sign. This method has several disadvantages. The party must jump out of the video to the short message to sign: if the signature is made during the video call, the continuity of the video connection is broken and some customers cannot return to the video call interface; if the signature link is sent after the video ends, the party may not see the short message in time, which reduces the timeliness of case handling; and if someone maliciously signs in another person's name, the claims personnel cannot detect it in time.
Disclosure of Invention
The embodiment of the application aims to provide an electronic signature verification method, an electronic signature verification device, computer equipment and a storage medium, so as to solve the technical problem that the existing electronic signature verification mode of jumping to a short-message link breaks the continuity of the video connection and thereby reduces processing timeliness.
In order to solve the above technical problem, an embodiment of the present application provides an electronic signature verification method, which adopts the following technical solutions:
an electronic signature verification method, comprising:
receiving an electronic signature verification instruction, and acquiring video image data of a person to be verified;
analyzing the video image data to obtain the facial feature data of the signature person to be verified;
importing the facial feature data into a preset identity recognition model to obtain an identity label of the signature person to be verified;
acquiring the identity document of the signature person to be verified in a preset document database based on the identity label, and performing OCR (optical character recognition) on the identity document to obtain first identity information;
acquiring a handwritten signature image of the signature person to be verified, and performing normalization processing on the handwritten signature image;
inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image;
performing vector splicing on the shallow layer feature vector and the deep layer feature vector to obtain a spliced feature vector;
classifying and identifying the splicing characteristic vectors by using a pre-trained classifier to obtain second identity information;
and comparing the first identity information with the second identity information to generate an electronic signature verification result.
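The final comparison step above can be sketched as follows; the normalization rule (ignoring case and whitespace) and the function name are illustrative assumptions, not part of the claim:

```python
def verify_electronic_signature(first_identity: str, second_identity: str) -> bool:
    """Compare the identity information recognized from the identity document
    (via OCR) with the identity information recognized from the handwritten
    signature image; the verification passes only if they agree.
    The whitespace/case normalization here is an illustrative assumption."""
    def normalize(s: str) -> str:
        return "".join(s.split()).lower()
    return normalize(first_identity) == normalize(second_identity)
```

A real implementation would compare structured fields (name, ID number) rather than raw strings.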
Further, the step of analyzing the video image data to obtain the facial feature data of the signature person to be verified specifically includes:
analyzing the video image data, and extracting key frames of the video images obtained by analysis to obtain key frame images;
recognizing the key frame image by using a pre-trained content recognition model to obtain a facial image of the signature person to be verified;
and extracting the features of the facial image to obtain the facial feature data of the signature person to be verified.
Further, the step of extracting the features of the facial image to obtain the facial feature data of the signature person to be verified specifically includes:
constructing an image feature extractor;
and extracting the features of the facial image by using the image feature extractor to obtain the facial feature data of the signature person to be verified.
Further, the identity recognition model includes an embedding unit, a convolution unit and a full connection unit, and the step of importing the facial feature data into a preset identity recognition model to obtain the identity tag of the person to be verified, specifically includes:
carrying out vector conversion processing on the facial feature data through an embedding unit of the identity recognition model to obtain an initial feature vector;
performing convolution calculation on the initial characteristic vector by adopting a convolution unit of the identity recognition model to obtain an initial characteristic label;
and calculating the similarity between the initial characteristic label and a preset characteristic label by adopting a full-connection unit of the identity recognition model, and outputting the characteristic label corresponding to the maximum value of the similarity to obtain the identity label of the signature person to be verified.
Further, the step of acquiring the handwritten signature image of the signature person to be verified and normalizing the handwritten signature image includes:
acquiring a handwritten signature image of the person to be verified, and performing format normalization processing on the handwritten signature image;
and carrying out interpolation processing on the handwritten signature image after format normalization processing according to a preset standard character.
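A minimal sketch of the format-normalization step, assuming a grayscale image resized to a fixed height and width with bilinear interpolation; the target size and interpolation method are assumptions, as the patent does not fix them:

```python
import numpy as np

def normalize_signature_image(img: np.ndarray, size=(64, 256)) -> np.ndarray:
    """Format-normalize a grayscale signature image to a fixed height/width
    using bilinear interpolation, then scale pixel values to [0, 1].
    A stand-in for the patent's format normalization; details are assumed."""
    h, w = img.shape
    th, tw = size
    # source coordinates for each target pixel
    ys = np.linspace(0, h - 1, th)
    xs = np.linspace(0, w - 1, tw)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (ys - y0)[:, None], (xs - x0)[None, :]
    # bilinear blend of the four neighbouring pixels
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    out = top * (1 - wy) + bot * wy
    return out / 255.0
```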
Further, the handwritten signature recognition model includes a convolutional layer, a pooling layer and a classification layer, and the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image specifically includes:
performing feature extraction on the handwritten signature image after normalization processing to obtain signature feature data;
performing convolution processing on the signature characteristic data by adopting a convolution layer of the handwritten signature recognition model to obtain a shallow layer characteristic vector;
pooling the signature characteristic data by using a pooling layer of the handwritten signature recognition model to obtain deep characteristic vectors;
the classifier is arranged in a classification layer of the handwritten signature recognition model, and the step of classifying and recognizing the splicing feature vectors by utilizing the pre-trained classifier to obtain second identity information specifically comprises the following steps:
and classifying and identifying the spliced feature vectors by adopting a classification layer of the handwritten signature identification model to obtain second identity information.
Further, before the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image, the method further includes:
acquiring a sample signature image, and labeling the sample signature image to obtain a labeling result;
performing feature extraction on the sample signature image to obtain a sample signature feature;
importing the sample signature characteristics into a preset initial identification model, wherein the initial identification model comprises a convolutional layer, a pooling layer and a classification layer;
carrying out convolution processing on the sample signature characteristics by adopting the convolution layer of the initial identification model to obtain a sample shallow layer characteristic vector;
pooling the sample signature characteristics by using a pooling layer of the initial identification model to obtain a sample deep layer characteristic vector;
carrying out vector splicing on the sample shallow layer feature vector and the sample deep layer feature vector to obtain a sample splicing feature vector;
classifying and identifying the sample splicing characteristic vectors by adopting a classification layer of the initial identification model to obtain an identification result of the sample signature image;
and based on the recognition result and the labeling result, iteratively updating the initial recognition model by adopting a back propagation algorithm until the model is fitted to obtain the handwritten signature recognition model.
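The training steps above (forward pass through a classification layer, back-propagation, iterative updating until the model fits) can be sketched on a toy stand-in model; the data, dimensions, learning rate and iteration count below are illustrative assumptions, not the patented model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for signature-recognition training: a single linear layer with
# a sigmoid output, trained by back-propagation on labelled sample features.
X = rng.normal(size=(100, 8))           # sample signature features
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)      # labelling result

w = np.zeros(8)
lr = 0.5
for _ in range(500):                    # iterative updating toward a fit
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # forward pass (classification layer)
    grad = X.T @ (p - y) / len(y)       # back-propagation gradient
    w -= lr * grad

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == (y == 1))
```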
In order to solve the above technical problem, an embodiment of the present application further provides an electronic signature verification apparatus, which adopts the following technical scheme:
an electronic signature verification device, comprising:
the image data acquisition module is used for receiving the electronic signature verification instruction and acquiring video image data of a person to be verified;
the facial feature extraction module is used for analyzing the video image data to obtain facial feature data of the signature person to be verified;
the identity tag identification module is used for importing the facial feature data into a preset identity identification model to obtain an identity tag of the person to be verified;
the identity information identification module is used for acquiring the identity document of the signature person to be verified in a preset document database based on the identity label and carrying out OCR (optical character recognition) on the identity document to obtain first identity information;
the normalization processing module is used for acquiring the handwritten signature image of the signature person to be verified and carrying out normalization processing on the handwritten signature image;
the depth feature acquisition module is used for inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image;
the characteristic vector splicing module is used for carrying out vector splicing on the shallow characteristic vector and the deep characteristic vector to obtain a spliced characteristic vector;
the characteristic classification and identification module is used for classifying and identifying the spliced characteristic vectors by utilizing a pre-trained classifier to obtain second identity information;
and the electronic signature verification module is used for comparing the first identity information with the second identity information to generate an electronic signature verification result.
In order to solve the above technical problem, an embodiment of the present application further provides a computer device, which adopts the following technical solutions:
a computer device, comprising a memory in which computer readable instructions are stored and a processor which, when executing the computer readable instructions, implements the steps of the electronic signature verification method as in any one of the above.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, which adopts the following technical solutions:
a computer readable storage medium having computer readable instructions stored thereon which, when executed by a processor, implement the steps of the electronic signature verification method as claimed in any one of the preceding claims.
Compared with the prior art, the embodiment of the application mainly has the following beneficial effects:
the application discloses an electronic signature verification method and device, computer equipment and a storage medium, and belongs to the technical field of artificial intelligence. The identity of a person to be verified and signed is identified when the person to be verified and signed is in real-time video call, after the identity verification is passed, the identity document of the person to be verified and signed is obtained from a document database, identity information on the identity document is obtained through OCR (optical character recognition), in the real-time video call process, a handwritten signature image is identified through a handwritten signature identification model, the identity information on the handwritten signature image is obtained, and whether the electronic signature verification is passed or not is determined by comparing the identity information on the identity document with the identity information on the handwritten signature image. This application is through in real-time video conversation process, combines face identification and handwritten signature to discern and checks user's handwritten signature information, can guarantee the continuity of video connecting line when accomplishing handwritten signature verification, and it is lower to improve the processing ageing to can avoid maliciously impersonating to substitute other people's signing action.
Drawings
In order to more clearly illustrate the solution of the present application, the drawings needed for describing the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present application, and that other drawings can be obtained by those skilled in the art without inventive effort.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 illustrates a flow diagram of one embodiment of a method of electronic signature verification according to the present application;
FIG. 3 illustrates a schematic structural diagram of one embodiment of an electronic signature verification device according to the present application;
FIG. 4 shows a schematic block diagram of one embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as a web browser application, a shopping application, a search application, an instant messaging tool, a mailbox client, social platform software, and the like.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, for example, a background server that provides support for pages displayed on the terminal devices 101, 102, and 103, and may be an independent server, or a cloud server that provides basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a web service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN), and a big data and artificial intelligence platform.
It should be noted that, the electronic signature verification method provided in the embodiments of the present application is generally executed by a server, and accordingly, the electronic signature verification apparatus is generally disposed in the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continuing reference to FIG. 2, a flow diagram of one embodiment of an electronic signature verification method in accordance with the present application is shown. The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technology and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and the like. The electronic signature verification method comprises the following steps:
s201, receiving an electronic signature verification instruction, and acquiring video image data of a person to be verified.
Specifically, after receiving an electronic signature verification instruction uploaded by a user side, a server starts a video connection in real time, and obtains video image data of a person to be verified and signed in real time through a camera of the user side, wherein the video image data is used for completing face recognition so as to obtain an identity tag of the person to be verified and signed.
In this embodiment, the electronic device (for example, the server shown in fig. 1) on which the electronic signature verification method operates may receive the electronic signature verification instruction through a wired connection or a wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, a WiFi connection, a Bluetooth connection, a WiMAX connection, a ZigBee connection, a UWB (Ultra Wideband) connection, and other wireless connection means now known or developed in the future.
S202, analyzing the video image data to obtain the facial feature data of the signature person to be verified.
Specifically, the server analyzes the video image data to obtain a video image, obtains a key frame image carrying face information of the signature person to be verified by extracting a key frame of the video image, and performs content identification and feature extraction on the key frame image to obtain face feature data of the signature person to be verified.
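One common way to realize the key-frame extraction mentioned above is frame differencing; the threshold criterion below is an illustrative assumption, as the patent does not specify how key frames are selected:

```python
import numpy as np

def extract_key_frames(frames: np.ndarray, threshold: float = 10.0):
    """Pick key frames from parsed video images by mean absolute pixel
    difference against the last kept key frame. The differencing criterion
    and threshold are illustrative assumptions."""
    keys = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[keys[-1]].astype(float)).mean()
        if diff > threshold:
            keys.append(i)
    return keys
```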
In order to obtain the facial feature data of the signature person to be verified, the method comprises the steps of training an image feature extractor in advance to extract features in a facial image, wherein the image feature extractor is obtained by training based on a special transformer model structure, and extracting the features of the facial image by using the image feature extractor which is trained in advance to obtain the facial feature data.
The Transformer network architecture here is a neural network architecture based on a U-shaped structure comprising mutually corresponding encoding and decoding layers: the feature encoding vector output by an encoding layer is sent to the corresponding decoding layer for decoding, and the U-shaped encoding-decoding structure allows the features of the input image in each dimension to be obtained completely during image feature extraction. In addition, a self-attention layer is arranged before each encoding layer to raise the weight of key features, and a fully connected layer composed of a softmax function is arranged after each decoding layer to normalize the result.
The image feature extraction network constructed in this way comprises an encoding layer and a decoding layer: several convolution kernels are preset in the encoding layer and several deconvolution kernels are preset in the decoding layer, with each convolution kernel corresponding to one deconvolution kernel. The preset convolution and deconvolution kernels are untrained initial kernels, and their sizes are set according to the feature dimensions of the image, such as 128 × 128 or 256 × 256.
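The self-attention layer described above, used to raise the weight of key features, can be sketched minimally as scaled dot-product attention; the projection matrices are omitted for brevity, so this is an assumption-laden sketch rather than the patented architecture:

```python
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """Minimal scaled dot-product self-attention over a set of feature
    vectors: rows attend to each other, boosting mutually similar (key)
    features. Query/key/value projections are omitted (an assumption)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    # row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```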
S203, importing the facial feature data into a preset identity recognition model to obtain an identity label of the signature person to be verified.
The identity recognition model is built on a Convolutional Neural Network (CNN) structure. A CNN is a feedforward neural network that contains convolution computation and has a deep structure, and it is one of the representative algorithms of deep learning. Convolutional neural networks have a representation learning capability and can perform shift-invariant classification of input information according to their hierarchical structure, hence the alternative name "shift-invariant artificial neural networks". The convolutional neural network imitates the biological visual perception mechanism, can perform both supervised and unsupervised learning, and achieves a stable effect with no additional feature engineering requirements on the data; convolution kernel parameter sharing within a convolutional layer and the sparsity of inter-layer connections allow it to learn grid-like topology features (such as pixels and audio) with a small amount of computation.
Specifically, the server pre-trains an identity recognition model based on a CNN convolutional neural network structure, then calculates the similarity between the initial feature label and a preset feature label in the pre-trained identity recognition model, and outputs the feature label corresponding to the maximum similarity value to obtain the identity label of the signature person to be verified.
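The similarity matching in the full-connection unit can be sketched as follows, assuming cosine similarity and a dictionary of preset feature labels; both are assumptions, since the patent does not name the similarity measure:

```python
import numpy as np

def match_identity(initial_label: np.ndarray, preset_labels: dict) -> str:
    """Compute the similarity between the initial feature label and each
    preset feature label, and output the identity whose preset label has
    the maximum similarity. Cosine similarity is an assumed choice."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(preset_labels, key=lambda k: cos(initial_label, preset_labels[k]))
```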
S204, acquiring the identity document of the signature person to be verified in a preset document database based on the identity label, and performing OCR (optical character recognition) on the identity document to obtain first identity information.
OCR (Optical Character Recognition) refers to the process in which an electronic device (e.g., a scanner or digital camera) examines characters printed on paper, determines their shapes by detecting dark and light patterns, and then translates the shapes into computer text by a character recognition method. For printed characters, the characters in a paper document are converted optically into an image file with a black-and-white dot matrix, and recognition software converts the characters in the image into a text format for further editing by word-processing software. How to use debugging or auxiliary information to improve recognition accuracy is the most important issue in OCR, and the term ICR (Intelligent Character Recognition) arose accordingly. The main indicators for measuring the performance of an OCR system are the rejection rate, the false recognition rate, the recognition speed, user interface friendliness, product stability, usability, feasibility, and so on.
Specifically, the server acquires the identity document of the signature person to be verified from a preset document database based on the identity label, performs OCR on the identity document, and reads the information on it to obtain the first identity information. The person whose signature is to be verified needs to upload his or her identity documents in advance; for example, in the driving field, a driver needs to upload documents such as an identity card and a driver's license in advance.
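Once an OCR engine has produced raw text from the identity document, extracting the first identity information reduces to field parsing. The field labels and layout below are illustrative assumptions; real ID-card OCR output varies by document type and engine:

```python
def parse_identity_fields(ocr_text: str) -> dict:
    """Parse a name and ID number out of raw OCR text from an identity
    document. The 'Name'/'ID No' labels and colon-separated layout are
    hypothetical; a production system would use template- or
    position-based extraction per document type."""
    fields = {}
    for line in ocr_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return {"name": fields.get("name", ""), "id_no": fields.get("id no", "")}
```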
S205, acquiring the handwritten signature image of the signature person to be verified, and performing normalization processing on the handwritten signature image.
The normalization processing includes format normalization and character interpolation. Different users have different writing habits; even the same user writes differently in different writing environments, and different writing devices also affect the recognition of handwritten characters. The handwritten signature image therefore needs format normalization and character interpolation to facilitate subsequent recognition of the handwritten signature.
Specifically, a handwritten signature image input by the user is received, the format of the handwritten signature image is normalized according to the format requirements, and each character in the image is adjusted to a uniform height and width. Each character is then interpolated against preset standard characters to obtain a character track, and the character track is adjusted so that it meets the requirements of the standard characters and is easier to recognize subsequently. Image normalization is therefore an essential step for the subsequent recognition of the handwritten signature.
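The character-track interpolation described above can be sketched as arc-length resampling of a stroke, so that tracks written at different speeds or on different devices become comparable; the point count and linear interpolation are assumptions:

```python
import numpy as np

def resample_stroke(points: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample a handwritten character track (an ordered array of (x, y)
    points) to n evenly spaced points by linear interpolation along arc
    length. The resampling scheme is an illustrative assumption."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]                        # normalized arc-length parameter
    u = np.linspace(0.0, 1.0, n)
    x = np.interp(u, t, points[:, 0])
    y = np.interp(u, t, points[:, 1])
    return np.stack([x, y], axis=1)
```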
And S206, inputting the hand-written signature image after the normalization processing into a pre-trained hand-written signature recognition model to obtain a shallow layer characteristic vector and a deep layer characteristic vector of the hand-written signature image.
Specifically, the handwritten signature recognition model is built based on a convolutional neural network structure, the handwritten signature recognition model comprises a convolutional layer, a pooling layer and a classification layer, the convolutional layer is used for carrying out convolution processing on input data to obtain shallow layer characteristics, the pooling layer is used for carrying out pooling processing on the input data to obtain deep layer characteristics, and the classification layer is used for classifying the input data according to the shallow layer characteristics and the deep layer characteristics. And the server inputs the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image.
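The division of labour between the convolutional layer (shallow features) and the pooling layer (deeper, aggregated features) can be sketched as follows. The 2x2 kernel and the tiny input are illustrative assumptions, not the trained model.

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (no padding): produces shallow,
    edge-like features from the input image."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool2(fmap):
    """2x2 max pooling: aggregates shallow features into deeper ones."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

img = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
shallow = conv2d(img, [[1, -1], [-1, 1]])   # 3x3 shallow feature map
deep = max_pool2(shallow)                   # pooled (deeper) features
```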
And S207, carrying out vector splicing on the shallow layer characteristic vector and the deep layer characteristic vector to obtain a spliced characteristic vector.
Specifically, after obtaining the shallow feature vector and the deep feature vector, the server performs vector splicing on the shallow feature vector and the deep feature vector in a head-to-tail splicing manner to obtain a spliced feature vector.
And S208, classifying and identifying the splicing characteristic vectors by using a pre-trained classifier to obtain second identity information.
Specifically, the pre-trained classifier is placed on a classification layer of the handwritten signature recognition model, and feature classification recognition can be realized by adopting a Sigmoid function. And the server classifies and identifies the spliced characteristic vectors by using a pre-trained classifier to obtain second identity information.
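The head-to-tail splicing of S207 and the Sigmoid classification of S208 can be sketched together as below; the weights and bias stand in for the trained classification layer and are assumptions for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def classify(shallow_vec, deep_vec, weights, bias):
    """Splice the shallow and deep feature vectors head-to-tail, then
    score the joint vector with a Sigmoid unit (weights/bias are
    illustrative stand-ins for the trained classification layer)."""
    spliced = shallow_vec + deep_vec           # head-to-tail splicing
    z = sum(w * x for w, x in zip(weights, spliced)) + bias
    return spliced, sigmoid(z)

spliced, score = classify([0.2, -0.1], [0.5], [1.0, 1.0, 1.0], 0.0)
```

The score in (0, 1) would then be thresholded or mapped to an identity class to yield the second identity information.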
S209, comparing the first identity information with the second identity information to generate an electronic signature verification result.
Specifically, the server compares the first identity information with the second identity information, outputs a result that the electronic signature passes the verification when the first identity information is consistent with the second identity information, outputs a result that the electronic signature fails the verification when the first identity information is inconsistent with the second identity information, and prompts a person to be verified to perform signature verification again.
In this embodiment, the handwritten signature information of the user is verified by combining face recognition and handwritten signature recognition during a real-time video call. The continuity of the video connection is maintained while the handwritten signature verification is completed, the timeliness of processing is improved, and malicious impersonation of another person's signature can be avoided.
Further, the step of analyzing the video image data to obtain the facial feature data of the signature person to be verified specifically includes:
analyzing the video image data, and extracting key frames of the video images obtained by analysis to obtain key frame images;
recognizing the key frame image by using a pre-trained content recognition model to obtain a facial image of the signature person to be verified;
and extracting the features of the facial image to obtain the facial feature data of the signature person to be verified.
Specifically, the server analyzes video image data, extracts key frames of the video images obtained through analysis to obtain key frame images, identifies the key frame images by using a pre-trained content identification model to obtain facial images of the signature personnel to be verified, and extracts the features of the facial images by using a pre-trained image feature extractor to obtain facial feature data of the signature personnel to be verified.
The server first obtains the parameters of two adjacent video frames in the video image and calculates their histogram data and gray-scale map data, wherein the histogram data comprises the frame difference value of the histogram, and the gray-scale map data comprises the mean difference value and the variance difference value of the gray-scale map. The frame difference value of the histogram, the mean difference value of the gray-scale map and the variance difference value of the gray-scale map are weighted and summed to obtain the weighted Euclidean distance between the two adjacent video frames. A shot transition boundary of the recorded video is determined by comparing the weighted Euclidean distance with a preset shot change threshold, and the key frame images in the recorded video are then determined according to the shot transition boundary.
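A sketch of the shot-boundary test described above: the histogram frame difference and the mean and variance differences of the gray-scale map are combined by weighted summation and compared with a shot-change threshold. The bin count, weights and threshold are illustrative assumptions.

```python
def frame_stats(frame):
    """Histogram and grey-level statistics for one greyscale frame."""
    flat = [p for row in frame for p in row]
    hist = [0] * 4                       # coarse 4-bin histogram over 0..255
    for p in flat:
        hist[min(3, p // 64)] += 1
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return hist, mean, var

def weighted_distance(f1, f2, w=(0.5, 0.3, 0.2)):
    """Weighted combination of the histogram frame difference, the
    grey-map mean difference and the variance difference between two
    adjacent frames."""
    h1, m1, v1 = frame_stats(f1)
    h2, m2, v2 = frame_stats(f2)
    hist_diff = sum(abs(a - b) for a, b in zip(h1, h2))
    return w[0] * hist_diff + w[1] * abs(m1 - m2) + w[2] * abs(v1 - v2)

a = [[10, 10], [10, 10]]                 # dark frame
b = [[200, 200], [200, 200]]             # bright frame
is_shot_boundary = weighted_distance(a, b) > 50.0   # threshold is illustrative
```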
Before feature extraction, the key frame image is preprocessed; the preprocessing includes sharpening and face region identification. The key frame image is identified based on a preset content identification model to obtain the human body contour and the background contour of the key frame image, and the face region is further identified from the human body contour. The key frame image is then denoised to remove salt-and-pepper noise and ensure the clarity of the image, and a clear facial image of the signature person to be verified is obtained based on the denoised image and the face region.
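The salt-and-pepper denoising mentioned above is commonly implemented with a median filter; the 3x3 pure-Python version below is a sketch of that idea, not the patent's exact preprocessing.

```python
def median_filter3(img):
    """3x3 median filter: removes isolated salt-and-pepper noise while
    preserving edges, as in the key-frame preprocessing step."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # borders are left unchanged
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = sorted(img[a][b]
                            for a in (i - 1, i, i + 1)
                            for b in (j - 1, j, j + 1))
            out[i][j] = window[4]        # median of the 9 values
    return out

noisy = [[50, 50, 50],
         [50, 255, 50],                  # isolated "salt" pixel
         [50, 50, 50]]
clean = median_filter3(noisy)
```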
In the above embodiment, the video image data is analyzed, the key frame extraction is performed on the video image obtained through the analysis to obtain a key frame image, the content identification is performed on the key frame image to obtain the facial image of the signature person to be verified, and the feature of the facial image is extracted through the image feature extractor to obtain the facial feature data of the signature person to be verified.
Further, the step of extracting the features of the facial image to obtain the facial feature data of the signature person to be verified specifically includes:
constructing an image feature extractor;
and extracting the features of the facial image by using the image feature extractor to obtain the facial feature data of the signature person to be verified.
Specifically, an image generation network is constructed based on a transformer network architecture, a convolution kernel and a deconvolution kernel in the image generation network are trained through a preset training sample, an image feature extractor is constructed through the trained convolution kernel and deconvolution kernel, and features of a facial image are extracted through the image feature extractor to obtain facial feature data.
It should be noted that the trained convolution kernels and deconvolution kernels are screened based on a deep learning compression (deep compression) algorithm. The algorithm first trains the neural network to obtain the weights of its convolutional layers, sets a weight threshold, prunes the weights that fall below the threshold, and then retrains iteratively so that redundant parameters are removed round by round. The weights retained in the network are then clustered and shared: the value of each cluster center is used as the value of all weights in that cluster, and the number and positions of the cluster centers are adjusted continuously to obtain a better compression effect. Finally, the weights are Huffman coded. With this scheme, the deep compression method can compress the neural network by a factor of about 35 to 49 without loss of accuracy, making storage and inference more efficient.
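The pruning and weight-sharing stages of the deep compression scheme can be sketched as follows (the final Huffman coding of the shared weights is omitted); the threshold and initial cluster centres are illustrative assumptions.

```python
def prune(weights, threshold):
    """Magnitude pruning: zero out weights whose magnitude falls below
    the weight threshold (the screening step described above)."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def cluster_share(weights, centers, iters=10):
    """1-D k-means weight sharing: every weight is replaced by its
    nearest cluster centre, so only the centres need to be stored."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for w in weights:
            idx = min(range(len(centers)), key=lambda k: abs(w - centers[k]))
            groups[idx].append(w)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return [min(centers, key=lambda c: abs(w - c)) for w in weights]

w = [0.02, 0.9, -0.85, 0.01, 1.1, -0.95]
pruned = prune(w, threshold=0.1)                 # small weights removed
shared = cluster_share(pruned, centers=[-1.0, 0.0, 1.0])
```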
In the above embodiment, an image feature extractor is obtained by constructing an image generation network, and training the image generation network by using a training sample in combination with a deep learning compression method, where the image feature extractor is used to extract facial feature data of a signature person to be verified.
Further, the identity recognition model includes an embedding unit, a convolution unit and a full connection unit, and the step of importing the facial feature data into a preset identity recognition model to obtain the identity tag of the person to be verified, specifically includes:
carrying out vector conversion processing on the facial feature data through an embedding unit of the identity recognition model to obtain an initial feature vector;
performing convolution calculation on the initial characteristic vector by adopting a convolution unit of the identity recognition model to obtain an initial characteristic label;
and calculating the similarity between the initial characteristic label and a preset characteristic label by adopting a full-connection unit of the identity recognition model, and outputting the characteristic label corresponding to the maximum value of the similarity to obtain the identity label of the signature person to be verified.
Specifically, the identity recognition model comprises an embedding unit, a convolution unit and a full-connection unit. Vector conversion is performed on the facial feature data by the embedding unit to obtain an initial feature vector, convolution calculation is performed on the initial feature vector by the convolution unit to obtain an initial feature tag, and the full-connection unit calculates the similarity between the initial feature tag and the preset feature tags and outputs the feature tag corresponding to the maximum similarity value as the identity tag of the signature person to be verified. The present application uses a 1:N similarity calculation: the similarity between the initial feature tag and each of the N preset feature tags is computed, and the feature tag with the maximum similarity is output as the identity tag of the signature person to be verified.
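The 1:N matching performed by the full-connection unit can be sketched as below, using cosine similarity as an assumed similarity measure and a toy database of preset feature tags; the names and vectors are illustrative only.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def match_identity(feature, label_db):
    """1:N matching: compare the initial feature tag against every
    preset feature tag and return the label with the highest
    similarity (the output identity tag)."""
    best = max(label_db, key=lambda name: cosine(feature, label_db[name]))
    return best, cosine(feature, label_db[best])

db = {"user_a": [1.0, 0.0, 0.0],        # illustrative preset feature tags
      "user_b": [0.0, 1.0, 0.0]}
label, sim = match_identity([0.9, 0.1, 0.0], db)
```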
It should be noted that the identity recognition model needs to be trained in advance. An initial recognition model is built based on a convolutional neural network structure, and labeled training samples are then obtained, wherein each training sample is a pre-collected picture containing a face on which facial features such as the forehead, nose, mouth and eyebrows have been labeled in advance. The initial recognition model is trained with the labeled samples to obtain a training result, the recognition error between the training result and the labeling result is calculated based on the loss function of the initial recognition model, the recognition error is propagated through the initial recognition model by a back propagation algorithm and compared with a preset error threshold, and the initial recognition model is iterated according to the error comparison result, with the corresponding model parameters adjusted in each iteration until the model fits, yielding the identity recognition model.
In the embodiment, the identity tag of the signature person to be verified is finally obtained by performing feature extraction, convolution operation and similarity calculation on the face feature data through a pre-trained identity recognition model.
Further, the step of acquiring the handwritten signature image of the signature person to be verified and normalizing the handwritten signature image includes:
acquiring a handwritten signature image of the person to be verified, and performing format normalization processing on the handwritten signature image;
and carrying out interpolation processing on the handwritten signature image after format normalization processing according to a preset standard character.
Specifically, the handwritten signature image of the signature person to be verified is obtained, format normalization is performed on it, and interpolation is then performed on the format-normalized image according to preset standard characters. Format normalization changes the size of a character while keeping its overall shape unchanged, so the original shape of the character is largely preserved and distortion is reduced. Character interpolation interpolates the characters according to a specific function: a character track is simulated using a linear-interpolation construction function, and the track is then adjusted according to the requirements of the standard characters, thereby improving the recognition rate of handwritten characters.
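The linear-interpolation construction function described above can be sketched as follows, rebuilding a dense character track from sparse pen samples; the sampling density and point format are illustrative assumptions.

```python
def interpolate_stroke(points, samples_per_segment=3):
    """Linear interpolation between sampled pen positions, rebuilding a
    dense character track from sparse stroke points."""
    track = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            track.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    track.append(points[-1])             # keep the final pen position
    return track

stroke = [(0.0, 0.0), (3.0, 3.0)]        # two sparse pen samples
dense = interpolate_stroke(stroke)
```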
In the embodiment, the handwritten signature image is normalized so that the handwritten signature image meets the requirement of standard processing, and further processing of a subsequent handwritten signature recognition model is facilitated.
Further, the handwritten signature recognition model includes a convolutional layer, a pooling layer and a classification layer, and the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image specifically includes:
performing feature extraction on the handwritten signature image after normalization processing to obtain signature feature data;
performing convolution processing on the signature characteristic data by adopting a convolution layer of the handwritten signature recognition model to obtain a shallow layer characteristic vector;
pooling the signature characteristic data by using a pooling layer of the handwritten signature recognition model to obtain deep characteristic vectors;
the classifier is arranged in a classification layer of the handwritten signature recognition model, and the step of classifying and recognizing the splicing feature vectors by utilizing the pre-trained classifier to obtain second identity information specifically comprises the following steps:
and classifying and identifying the spliced feature vectors by adopting a classification layer of the handwritten signature identification model to obtain second identity information.
Specifically, the handwritten signature recognition model further comprises an input layer and an output layer, wherein the input layer is used for receiving input data and performing feature extraction on the input data, and the output layer is used for outputting a classification result. The server extracts the features of the normalized handwritten signature image to obtain signature feature data, performs convolution processing on the signature feature data by using a convolution layer of the handwritten signature recognition model to obtain shallow feature vectors, performs pooling processing on the signature feature data by using a pooling layer of the handwritten signature recognition model to obtain deep feature vectors, and performs classification and recognition on the spliced feature vectors by using a classification layer of the handwritten signature recognition model to obtain second identity information.
The convolution operation multiplies different local parts of the matrix element-wise with the convolution kernel matrix and sums the products. Pooling is a method of aggregating the features of adjacent positions. Pooled features have a certain translation and rotation invariance, reduce the number of features to be processed and increase computational efficiency, making them suitable for deep feature analysis. Common pooling operations include average pooling and maximum pooling, that is, taking the maximum value or the average value of the corresponding region as the pooled element value; a vector matrix corresponding to the deep features of the handwritten signature image is finally obtained. In the convolutional neural network model, the classification layer can be regarded as a specific activation function layer used to perform the classification operation on the information obtained from the previous layer. In this embodiment, the activation function used in this layer is the Sigmoid function, which maps the spliced features to a value between 0 and 1 that can be used directly as a classification score.
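The two pooling modes described above can be illustrated on a single 2x2 region:

```python
def pool_region(region, mode="max"):
    """Pool one 2x2 region either by maximum or by average, the two
    common pooling modes described above."""
    flat = [v for row in region for v in row]
    return max(flat) if mode == "max" else sum(flat) / len(flat)

region = [[1, 3],
          [2, 6]]
maximum = pool_region(region)            # max pooling
average = pool_region(region, "avg")     # average pooling
```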
In the above embodiment, a handwritten signature recognition model is constructed based on a CNN convolutional neural network, and includes a convolutional layer, a pooling layer and a classification layer, where the convolutional layer is configured to perform convolution processing on input data to obtain shallow features, the pooling layer is configured to perform pooling processing on the input data to obtain deep features, the classification layer is configured to classify the input data according to the shallow features and the deep features, and the handwritten signature image is recognized through the handwritten signature recognition model to obtain second identity information of a signature person to be verified.
Further, before the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image, the method further includes:
acquiring a sample signature image, and labeling the sample signature image to obtain a labeling result;
performing feature extraction on the sample signature image to obtain a sample signature feature;
importing the sample signature characteristics into a preset initial identification model, wherein the initial identification model comprises a convolutional layer, a pooling layer and a classification layer;
carrying out convolution processing on the sample signature characteristics by adopting the convolution layer of the initial identification model to obtain a sample shallow layer characteristic vector;
pooling the sample signature characteristics by using a pooling layer of the initial identification model to obtain a sample deep layer characteristic vector;
carrying out vector splicing on the sample shallow layer feature vector and the sample deep layer feature vector to obtain a sample splicing feature vector;
classifying and identifying the sample splicing characteristic vectors by adopting a classification layer of the initial identification model to obtain an identification result of the sample signature image;
and based on the recognition result and the labeling result, iteratively updating the initial recognition model by adopting a back propagation algorithm until the model is fitted to obtain the handwritten signature recognition model.
The back propagation (BP) algorithm is a learning algorithm suitable for multi-layer neuron networks; it is built on gradient descent and is used for error calculation in deep learning networks. The input-output relationship of a BP network is essentially a mapping: an n-input, m-output BP neural network performs a continuous mapping from n-dimensional Euclidean space to a finite domain in m-dimensional Euclidean space, and this mapping is highly non-linear. The learning process of the BP algorithm consists of a forward propagation pass and a backward propagation pass. In forward propagation, the input information passes from the input layer through the hidden layers, is processed layer by layer and is transmitted to the output layer. In backward propagation, the partial derivatives of the objective function with respect to the weight of each neuron are calculated layer by layer, forming the gradient of the objective function with respect to the weight vector, which serves as the basis for modifying the weights.
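The forward and backward passes of the BP algorithm can be sketched for a single Sigmoid neuron; the learning rate, the squared-error loss and the toy training target are illustrative assumptions, not the patent's training setup.

```python
import math

def train_step(w, b, x, y, lr=0.5):
    """One forward + backward pass for a single Sigmoid neuron: the
    forward pass computes the prediction, the backward pass propagates
    the partial derivatives of the squared error back to the weight
    and bias, which are then updated by gradient descent."""
    # forward propagation
    z = w * x + b
    p = 1.0 / (1.0 + math.exp(-z))
    # backward propagation: chain rule for d(error)/dw and d(error)/db
    err = p - y
    dz = err * p * (1.0 - p)
    return w - lr * dz * x, b - lr * dz, 0.5 * err * err

w, b = 0.0, 0.0
for _ in range(200):
    w, b, loss = train_step(w, b, x=1.0, y=1.0)
# after training, the prediction approaches the target and the loss shrinks
```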
Specifically, a sample signature image is obtained, and the sample signature image is labeled to obtain a labeling result; performing feature extraction on the sample signature image to obtain sample signature features; importing sample signature characteristics into a preset initial identification model, wherein the initial identification model comprises a convolutional layer, a pooling layer and a classification layer; carrying out convolution processing on the signature characteristics of the sample by adopting a convolution layer of the initial identification model to obtain a shallow characteristic vector of the sample; pooling the sample signature characteristics by using a pooling layer of the initial identification model to obtain a sample deep layer characteristic vector; carrying out vector splicing on the sample shallow layer feature vector and the sample deep layer feature vector to obtain a sample splicing feature vector, and carrying out classification identification on the sample splicing feature vector by adopting a classification layer of an initial identification model to obtain an identification result of a sample signature image; and based on the recognition result and the labeling result, iteratively updating the initial recognition model by adopting a back propagation algorithm until the model is fitted to obtain the handwritten signature recognition model.
In this embodiment, the application discloses an electronic signature verification method, belonging to the technical field of artificial intelligence. The identity of the signature person to be verified is recognized during a real-time video call; after the identity check passes, the identity document of the signature person is obtained from the document database and the identity information on it is obtained through OCR recognition. During the real-time video call, the handwritten signature image is recognized by the handwritten signature recognition model to obtain the identity information on the handwritten signature image, and whether the electronic signature verification passes is determined by comparing the identity information on the identity document with the identity information on the handwritten signature image. By combining face recognition and handwritten signature recognition during the real-time video call, the application verifies the user's handwritten signature information, maintains the continuity of the video connection while completing the handwritten signature verification, improves the timeliness of processing, and avoids malicious impersonation of another person's signature.
It is emphasized that, in order to further ensure the privacy and security of the video image data, the video image data may also be stored in a node of a block chain.
The block chain referred by the application is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by computer readable instructions instructing the relevant hardware. The instructions can be stored in a computer readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown sequentially as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times, and whose execution order is not necessarily sequential: they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of an electronic signature verification apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 3, the electronic signature verification apparatus according to the present embodiment includes:
the image data acquisition module 301 is configured to receive an electronic signature verification instruction and acquire video image data of a person to be verified;
a facial feature extraction module 302, configured to analyze the video image data to obtain facial feature data of the signature person to be verified;
the identity tag identification module 303 is configured to import the facial feature data into a preset identity identification model to obtain an identity tag of the signature person to be verified;
the identity information identification module 304 is configured to acquire an identity document of the person to be verified and signed from a preset document database based on the identity tag, and perform OCR recognition on the identity document to obtain first identity information;
a normalization processing module 305, configured to obtain a handwritten signature image of the signature person to be verified, and perform normalization processing on the handwritten signature image;
a depth feature obtaining module 306, configured to input the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image;
a feature vector stitching module 307, configured to perform vector stitching on the shallow feature vector and the deep feature vector to obtain a stitched feature vector;
the feature classification and identification module 308 is configured to perform classification and identification on the spliced feature vectors by using a pre-trained classifier to obtain second identity information;
and the electronic signature verification module 309 is configured to compare the first identity information with the second identity information, and generate an electronic signature verification result.
Further, the facial feature extraction module 302 specifically includes:
a key frame extraction unit, configured to analyze the video image data, and extract a key frame from the video image obtained through analysis to obtain a key frame image;
the content identification unit is used for identifying the key frame image by utilizing a pre-trained content identification model to obtain a facial image of the signature person to be verified;
and the facial feature extraction unit is used for extracting the features of the facial image to obtain the facial feature data of the signature person to be verified.
Further, the facial feature extraction unit specifically includes:
an extractor constructing subunit, configured to construct an image feature extractor;
and the facial feature extraction subunit is used for extracting the features of the facial image by using the image feature extractor to obtain the facial feature data of the signature person to be verified.
Further, the identity recognition model includes an embedding unit, a convolution unit and a full connection unit, and the identity tag recognition module 303 specifically includes:
the vector conversion unit is used for carrying out vector conversion processing on the facial feature data through the embedding unit of the identity recognition model to obtain an initial feature vector;
the convolution calculation unit is used for carrying out convolution calculation on the initial characteristic vector by adopting the convolution unit of the identity recognition model to obtain an initial characteristic label;
and the similarity calculation unit is used for calculating the similarity between the initial characteristic label and a preset characteristic label by adopting the full connection unit of the identity recognition model, and outputting the characteristic label corresponding to the maximum value of the similarity to obtain the identity label of the signature person to be verified.
Further, the normalization processing module 305 specifically includes:
the format normalization unit is used for acquiring the handwritten signature image of the signature person to be verified and performing format normalization processing on the handwritten signature image;
and the interpolation processing unit is used for carrying out interpolation processing on the handwritten signature image after format normalization processing according to preset standard characters.
Further, the handwritten signature recognition model includes a convolutional layer, a pooling layer, and a classification layer, and the depth feature obtaining module 306 specifically includes:
the signature feature extraction unit is used for extracting features of the handwritten signature image after normalization processing to obtain signature feature data;
the convolution processing unit is used for carrying out convolution processing on the signature characteristic data by adopting a convolution layer of the handwritten signature recognition model to obtain a shallow layer characteristic vector;
the pooling processing unit is used for pooling the signature feature data by adopting a pooling layer of the handwritten signature recognition model to obtain deep feature vectors;
Further, the classifier is arranged in the classification layer of the handwritten signature recognition model, and the feature classification and identification module 308 specifically includes:
and the classification and identification unit is used for classifying and identifying the splicing characteristic vectors by adopting the classification layer of the handwritten signature identification model to obtain second identity information.
Further, the electronic signature verification apparatus further includes:
the sample labeling module is used for acquiring a sample signature image and labeling the sample signature image to obtain a labeling result;
the sample feature extraction module is used for extracting features of the sample signature image to obtain sample signature features;
the sample data import module is used for importing the sample signature characteristics into a preset initial identification model, wherein the initial identification model comprises a convolutional layer, a pooling layer and a classification layer;
the sample convolution module is used for carrying out convolution processing on the sample signature characteristics by adopting the convolution layer of the initial identification model to obtain a sample shallow layer characteristic vector;
the sample pooling module is used for pooling the sample signature characteristics by adopting a pooling layer of the initial identification model to obtain a sample deep layer characteristic vector;
the sample vector splicing module is used for carrying out vector splicing on the sample shallow layer feature vector and the sample deep layer feature vector to obtain a sample splicing feature vector;
the sample signature identification module is used for carrying out classification identification on the sample splicing characteristic vectors by adopting a classification layer of the initial identification model to obtain an identification result of the sample signature image;
and the recognition model iteration module is used for carrying out iteration updating on the initial recognition model by adopting a back propagation algorithm based on the recognition result and the labeling result until the model is fitted to obtain the handwritten signature recognition model.
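The training pipeline these modules describe — label the sample signature images, extract features, run a forward pass, backpropagate, and iterate until the model fits — can be illustrated with a minimal stand-in. A linear classifier trained by gradient descent replaces the convolutional model here purely to show the label → forward → backprop → update loop; it is not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_signature_model(features, labels, lr=0.5, epochs=200):
    """Iteratively update a (stand-in) linear classifier by gradient
    descent until it fits the labelled sample signature features."""
    n, d = features.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))       # forward pass (sigmoid)
        grad = p - labels                   # backprop: dLoss/dz for log-loss
        w -= lr * features.T @ grad / n     # parameter updates
        b -= lr * grad.mean()
    return w, b

# Toy "sample signature features": two well-separated labelled clusters
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w, b = train_signature_model(X, y)
accuracy = np.mean(((X @ w + b) > 0).astype(int) == y)
```

The "until the model is fitted" condition in the text would in practice be a convergence check on the loss rather than a fixed epoch count.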
In the above embodiment, the application discloses an electronic signature verification device, which belongs to the technical field of artificial intelligence. The identity of the signer to be verified is recognized during a real-time video call. After identity verification passes, the signer's identity document is retrieved from a document database and the identity information on it is obtained by OCR. During the same video call, a handwritten signature image is recognized by a handwritten signature recognition model to obtain the identity information on the signature, and whether the electronic signature verification passes is determined by comparing the identity information on the identity document with that on the handwritten signature image. By combining face recognition and handwritten signature recognition within a real-time video call, the application checks the user's handwritten signature information while preserving the continuity of the video connection, improves processing timeliness, and prevents malicious impersonation of another person's signing act.
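The comparison step described above can be sketched as follows. Every collaborator here — the `ocr` and `recognize_signature` callables and the document-database lookup — is a hypothetical stand-in, since the patent does not specify concrete interfaces.

```python
def verify_electronic_signature(identity_tag, document_db, ocr,
                                recognize_signature, signature_image):
    """Sketch of the comparison step: OCR the stored identity document
    for the first identity info, recognize the handwritten signature for
    the second, and pass verification only if the two match."""
    document = document_db[identity_tag]          # fetched after face-based ID
    first_identity = ocr(document)                # identity info on the document
    second_identity = recognize_signature(signature_image)  # info on the signature
    return {
        "passed": first_identity == second_identity,
        "first_identity": first_identity,
        "second_identity": second_identity,
    }
```

Because both identity strings come from recognition models, a production system would likely apply a tolerant comparison (normalized names, edit distance) rather than strict equality.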
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. Note that only a computer device 4 with components 41-43 is shown; not all of the illustrated components must be implemented, and more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The computer device may interact with a user through a keyboard, a mouse, a remote controller, a touch panel, a voice-control device, or the like.
The memory 41 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read-Only Memory (ROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Programmable Read-Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or internal memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the computer device 4. Of course, the memory 41 may also include both an internal storage unit of the computer device 4 and an external storage device thereof. In this embodiment, the memory 41 is generally used for storing the operating system installed on the computer device 4 and various types of application software, such as computer-readable instructions of an electronic signature verification method. Further, the memory 41 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data-processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer-readable instructions stored in the memory 41 or to process data, for example to execute the computer-readable instructions of the electronic signature verification method.
The network interface 43 may comprise a wireless network interface or a wired network interface, and the network interface 43 is generally used for establishing communication connection between the computer device 4 and other electronic devices.
The application further discloses a computer device, which belongs to the technical field of artificial intelligence. The identity of the signer to be verified is recognized during a real-time video call. After identity verification passes, the signer's identity document is retrieved from a document database and the identity information on it is obtained by OCR. During the same video call, a handwritten signature image is recognized by a handwritten signature recognition model to obtain the identity information on the signature, and whether the electronic signature verification passes is determined by comparing the identity information on the identity document with that on the handwritten signature image. By combining face recognition and handwritten signature recognition within a real-time video call, the application checks the user's handwritten signature information while preserving the continuity of the video connection, improves processing timeliness, and prevents malicious impersonation of another person's signing act.
The present application further provides another embodiment: a computer-readable storage medium storing computer-readable instructions executable by at least one processor, causing the at least one processor to perform the steps of the electronic signature verification method described above. Executing the electronic signature verification method achieves the following technical effects:
the identity of the signer to be verified is recognized during a real-time video call; after identity verification passes, the signer's identity document is retrieved from a document database and the identity information on it is obtained by OCR; during the same video call, a handwritten signature image is recognized by a handwritten signature recognition model to obtain the identity information on the signature; and whether the electronic signature verification passes is determined by comparing the identity information on the identity document with that on the handwritten signature image. By combining face recognition and handwritten signature recognition within a real-time video call, the application checks the user's handwritten signature information while preserving the continuity of the video connection, improves processing timeliness, and prevents malicious impersonation of another person's signing act.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general hardware platform, and can certainly also be implemented by hardware; in many cases, however, the former is the better implementation. Based on such understanding, the technical solutions of the present application, or the portions thereof contributing over the prior art, may be embodied in the form of a software product. The computer software product is stored on a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present application.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It is to be understood that the above-described embodiments are merely illustrative of some, but not all, embodiments of the present application, and that the appended drawings illustrate preferred embodiments without limiting the scope of the application. This application may be embodied in many different forms; the embodiments are provided so that the disclosure of the application will be thorough and complete. Although the present application has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that the solutions described in the foregoing embodiments may still be modified, or some of their features may be replaced by equivalents. All equivalent structures made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, fall within the protection scope of the present application.

Claims (10)

1. An electronic signature verification method, comprising:
receiving an electronic signature verification instruction, and acquiring video image data of a person to be verified;
analyzing the video image data to obtain the facial feature data of the signature person to be verified;
importing the facial feature data into a preset identity recognition model to obtain an identity label of the signature person to be verified;
acquiring the identity document of the signature person to be verified in a preset document database based on the identity label, and performing OCR (optical character recognition) on the identity document to obtain first identity information;
acquiring a handwritten signature image of the signature person to be verified, and performing normalization processing on the handwritten signature image;
inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image;
performing vector splicing on the shallow layer characteristic vector and the deep layer characteristic vector to obtain a spliced characteristic vector;
classifying and identifying the splicing characteristic vectors by using a pre-trained classifier to obtain second identity information;
and comparing the first identity information with the second identity information to generate an electronic signature verification result.
2. The method for verifying an electronic signature as claimed in claim 1, wherein the step of analyzing the video image data to obtain the facial feature data of the signature person to be verified includes:
analyzing the video image data, and extracting key frames of the video images obtained by analysis to obtain key frame images;
recognizing the key frame image by using a pre-trained content recognition model to obtain a facial image of the signature person to be verified;
and extracting the features of the facial image to obtain the facial feature data of the signature person to be verified.
3. The electronic signature verification method according to claim 2, wherein the step of extracting the features of the facial image to obtain the facial feature data of the signature person to be verified specifically comprises:
constructing an image feature extractor;
and extracting the features of the facial image by using the image feature extractor to obtain the facial feature data of the signature person to be verified.
4. The electronic signature verification method according to claim 1, wherein the identification model includes an embedding unit, a convolution unit and a full connection unit, and the step of importing the facial feature data into a preset identification model to obtain the identification tag of the signature person to be verified specifically includes:
carrying out vector conversion processing on the facial feature data through an embedding unit of the identity recognition model to obtain an initial feature vector;
performing convolution calculation on the initial characteristic vector by adopting a convolution unit of the identity recognition model to obtain an initial characteristic label;
and calculating the similarity between the initial characteristic label and a preset characteristic label by adopting a full-connection unit of the identity recognition model, and outputting the characteristic label corresponding to the maximum value of the similarity to obtain the identity label of the signature person to be verified.
5. The electronic signature verification method according to claim 1, wherein the step of obtaining the handwritten signature image of the person to be verified and normalizing the handwritten signature image includes:
acquiring a handwritten signature image of the person to be verified, and performing format normalization processing on the handwritten signature image;
and carrying out interpolation processing on the handwritten signature image after format normalization processing according to a preset standard character.
6. The method for verifying electronic signatures according to any one of claims 1 to 5, wherein the handwritten signature recognition model includes a convolutional layer, a pooling layer and a classification layer, and the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain the shallow feature vectors and the deep feature vectors of the handwritten signature image includes:
performing feature extraction on the handwritten signature image after normalization processing to obtain signature feature data;
performing convolution processing on the signature characteristic data by adopting a convolution layer of the handwritten signature recognition model to obtain a shallow layer characteristic vector;
pooling the signature characteristic data by using a pooling layer of the handwritten signature recognition model to obtain deep characteristic vectors;
the classifier is arranged in a classification layer of the handwritten signature recognition model, and the step of classifying and recognizing the splicing feature vectors by utilizing the pre-trained classifier to obtain second identity information specifically comprises the following steps:
and classifying and identifying the spliced feature vectors by adopting a classification layer of the handwritten signature identification model to obtain second identity information.
7. The method for verifying an electronic signature as claimed in claim 6, wherein before the step of inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain the shallow feature vectors and the deep feature vectors of the handwritten signature image, the method further comprises:
acquiring a sample signature image, and labeling the sample signature image to obtain a labeling result;
performing feature extraction on the sample signature image to obtain a sample signature feature;
importing the sample signature characteristics into a preset initial identification model, wherein the initial identification model comprises a convolutional layer, a pooling layer and a classification layer;
carrying out convolution processing on the sample signature characteristics by adopting the convolution layer of the initial identification model to obtain a sample shallow layer characteristic vector;
pooling the sample signature characteristics by using a pooling layer of the initial identification model to obtain a sample deep layer characteristic vector;
carrying out vector splicing on the sample shallow layer feature vector and the sample deep layer feature vector to obtain a sample splicing feature vector;
classifying and identifying the sample splicing characteristic vectors by adopting a classification layer of the initial identification model to obtain an identification result of the sample signature image;
and based on the recognition result and the labeling result, performing iterative updating on the initial recognition model by adopting a back propagation algorithm until the model is fitted to obtain the handwritten signature recognition model.
8. An electronic signature verification device, comprising:
the image data acquisition module is used for receiving the electronic signature verification instruction and acquiring video image data of a person to be verified;
the facial feature extraction module is used for analyzing the video image data to obtain facial feature data of the signature person to be verified;
the identity tag identification module is used for importing the facial feature data into a preset identity identification model to obtain an identity tag of the person to be verified;
the identity information identification module is used for acquiring the identity document of the signature person to be verified in a preset document database based on the identity label and carrying out OCR (optical character recognition) on the identity document to obtain first identity information;
the normalization processing module is used for acquiring the handwritten signature image of the signature person to be verified and carrying out normalization processing on the handwritten signature image;
the depth feature acquisition module is used for inputting the normalized handwritten signature image into a pre-trained handwritten signature recognition model to obtain a shallow feature vector and a deep feature vector of the handwritten signature image;
the characteristic vector splicing module is used for carrying out vector splicing on the shallow characteristic vector and the deep characteristic vector to obtain a spliced characteristic vector;
the characteristic classification and identification module is used for classifying and identifying the spliced characteristic vectors by utilizing a pre-trained classifier to obtain second identity information;
and the electronic signature verification module is used for comparing the first identity information with the second identity information to generate an electronic signature verification result.
9. A computer device comprising a memory having computer readable instructions stored therein and a processor that when executed implements the steps of the electronic signature verification method of any one of claims 1 to 7.
10. A computer-readable storage medium, having computer-readable instructions stored thereon, which, when executed by a processor, implement the steps of the electronic signature verification method according to any one of claims 1 to 7.
CN202210171442.7A 2022-02-24 2022-02-24 Electronic signature verification method and device, computer equipment and storage medium Pending CN114547575A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210171442.7A CN114547575A (en) 2022-02-24 2022-02-24 Electronic signature verification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210171442.7A CN114547575A (en) 2022-02-24 2022-02-24 Electronic signature verification method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114547575A true CN114547575A (en) 2022-05-27

Family

ID=81678469

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210171442.7A Pending CN114547575A (en) 2022-02-24 2022-02-24 Electronic signature verification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114547575A (en)

Similar Documents

Publication Publication Date Title
CN109543690B (en) Method and device for extracting information
AU2014368997B2 (en) System and method for identifying faces in unconstrained media
WO2019095571A1 (en) Human-figure emotion analysis method, apparatus, and storage medium
WO2022105118A1 (en) Image-based health status identification method and apparatus, device and storage medium
CN114241459B (en) Driver identity verification method and device, computer equipment and storage medium
CN110795714A (en) Identity authentication method and device, computer equipment and storage medium
CN112330331A (en) Identity verification method, device and equipment based on face recognition and storage medium
CN115050064A (en) Face living body detection method, device, equipment and medium
CN113420690A (en) Vein identification method, device and equipment based on region of interest and storage medium
CN114550051A (en) Vehicle loss detection method and device, computer equipment and storage medium
CN113705534A (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium based on deep vision
CN114282258A (en) Screen capture data desensitization method and device, computer equipment and storage medium
CN113642481A (en) Recognition method, training method, device, electronic equipment and storage medium
Reyes et al. Computer Vision-Based Signature Forgery Detection System Using Deep Learning: A Supervised Learning Approach
CN112686243A (en) Method and device for intelligently identifying picture characters, computer equipment and storage medium
CN115880702A (en) Data processing method, device, equipment, program product and storage medium
CN115273110A (en) Text recognition model deployment method, device, equipment and storage medium based on TensorRT
CN113781462A (en) Human body disability detection method, device, equipment and storage medium
CN114547575A (en) Electronic signature verification method and device, computer equipment and storage medium
CN112733645A (en) Handwritten signature verification method and device, computer equipment and storage medium
CN112966150A (en) Video content extraction method and device, computer equipment and storage medium
CN113128296B (en) Electronic handwriting signature fuzzy label recognition system
CN113343898B (en) Mask shielding face recognition method, device and equipment based on knowledge distillation network
CN113723093B (en) Personnel management policy recommendation method and device, computer equipment and storage medium
US20220237692A1 (en) Method and system for providing financial process automation to financial organization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination