CN117475519B - Off-line handwriting identification method based on integration of twin network and multiple channels - Google Patents
Off-line handwriting identification method based on integration of twin network and multiple channels
- Publication number
- CN117475519B (application CN202311797195.2A)
- Authority
- CN
- China
- Prior art keywords
- picture
- pictures
- handwriting
- attention
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/30—Writer recognition; Reading and verifying signatures
- G06V40/33—Writer recognition; Reading and verifying signatures based only on signature image, e.g. static signature recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/18—Extraction of features or characteristics of the image
- G06V30/1801—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
- G06V30/18019—Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections by matching or filtering
- G06V30/18038—Biologically-inspired filters, e.g. difference of Gaussians [DoG], Gabor filters
- G06V30/18048—Biologically-inspired filters, e.g. difference of Gaussians [DoG], Gabor filters with interaction between the responses of different filters, e.g. cortical complex cells
- G06V30/18057—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/1918—Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an off-line handwriting identification method based on the fusion of a twin network and multiple channels. The two input handwriting pictures are inverted to obtain their inverse gray-scale pictures, giving four handwriting pictures in total: a reference picture, a test picture, the inverse gray-scale picture of the reference picture and the inverse gray-scale picture of the test picture. The four handwriting pictures are fed into the same network model to extract image feature vectors, generating four corresponding feature vectors. The feature vectors are weighted twice by the dual handwriting attention to obtain feature vectors Y. The four 128-dimensional feature maps obtained from this processing are then fused into a single 512-dimensional feature map. The fused feature map is discriminated through self-attention and convolution operations, and a sigmoid operation is performed on the output feature vector, whose value is used as a confidence level to judge whether the two input pictures were written by the same person.
Description
Technical Field
The invention relates to the technical field of computer vision and pattern recognition, and in particular to an off-line handwriting identification method based on the combination of a twin network and multiple channels.
Background
In today's society, signature handwriting serves as one of the important forms of legal evidence and is widely used in fields such as law, insurance and culture. Signature handwriting is unique, stable and reliable, making it an important basis for verifying the authenticity of documents and confirming the identity of the parties involved. However, with the continued development of technology, signature handwriting verification also faces many challenges. The use of signatures can be traced back to ancient times, when people signed with various symbols and drawings. As paper and ink evolved, handwritten signatures came into use, but it was not until the early 20th century that signature handwriting began to attract public and research attention. During this period, disciplines such as criminal psychology, psychology and statistics began to be applied to the study of signature handwriting, providing a theoretical basis for signature handwriting examination. Signature handwriting plays an important role in many fields. In the legal field, it is an important basis for confirming the authenticity of documents and forms part of the evidence in court. In the insurance field, it is used to verify the authenticity of policies and to prevent insurance fraud. In the cultural field, signature handwriting reflects an artist's style and personality and is of great value to in-depth research in handwriting science. As technology advances, signature handwriting verification nevertheless faces many challenges: handwriting is easily affected by factors such as writing habits, emotion and environment, which complicates accurate handwriting examination. In response to these challenges, researchers continue to explore new signature handwriting verification methods.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art and provides an off-line handwriting identification method based on the combination of a twin network and multiple channels, which verifies handwritten signatures in a writer-independent manner and effectively improves the accuracy of off-line handwriting identification.
In order to achieve the above object, the solution of the present invention is:
an off-line handwriting authentication method based on combination of a twin network and multiple channels comprises the following steps:
s1, performing inverse operation on two input handwriting pictures to obtain inverse gray level pictures, wherein the inverse gray level pictures comprise four handwriting pictures, namely a reference picture, a test picture, an inverse gray level picture of the reference picture and an inverse gray level picture of the test picture;
step S2, respectively entering the four handwriting pictures of the reference picture, the test picture, the anti-gray picture of the reference picture and the anti-gray picture of the test picture into the same network model to extract the image feature vectorWherein +.>、/>And->Respectively representing the height, the width and the channel number, and generating four corresponding feature vectors;
step S3, feature vectorCarrying out weight weighting treatment on the two times of dual handwriting attention to obtain feature vectors Y, wherein each vector has 128 dimensions;
s4, fusing the reference picture, the test picture, the anti-gray picture of the reference picture and the 128-dimensional picture subjected to image processing by the anti-gray picture of the test picture, wherein 512-dimensional pictures are fused;
s5, judging the fused picture through self-attention and convolution operation;
and S6, performing sigmoid operation on the output feature vector, and using the sigmoid operation as confidence level to judge whether the two input pictures are written by the same person.
Further, in step S1, the formula of the handwriting picture inverting operation is:
I_inv(x, y) = 255 - I(x, y), where I(x, y) is the gray value of the original picture at pixel (x, y) and I_inv(x, y) is the corresponding value of the inverse gray-scale picture.
Further, in step S2, the network model uses the encoding network ConvNet: a feature map is obtained through a convolution layer and then subjected to nonlinear transformation through a ReLU activation function.
Further, in step S3, the feature maps output by the convolution module for the original picture and its inverse gray-scale picture are input into an up-sampling structure, which up-samples using a nearest-neighbour algorithm and applies a sigmoid-activated convolution. Let h be the output of the first layer of the convolution module in the discriminating stream. In the attention module, h is multiplied element-wise by g and h is then added, producing the intermediate attention measure h·g + h, where "·" denotes element-wise multiplication. The subsequent global average pooling layer and fully connected layer with sigmoid activation receive the intermediate attention measure and output a weight vector f. Multiplying the intermediate attention measure of each element of each channel by f yields the final attention mask (h·g + h) × f, which is fed back to the second layer of the other feature map.
Further, in step S4, the four feature maps are fused by concatenation into an image of shape (w, h, c1 + c2 + c3 + c4), where w is the width, h is the height and c1, c2, c3, c4 are the channel numbers; that is, the 128-dimensional feature maps previously extracted from the four pictures are merged by channel fusion into a single 512-dimensional feature map, the merging order being a reference picture, an inverse gray-scale test picture and a test picture.
Further, in step S5, the convolution and self-attention operations are combined as follows: a convolution with kernel size k×k can be decomposed into k² individual 1×1 convolutions followed by shift and summation operations, while the projections of the query, key and value in the self-attention mechanism of ACmix are interpreted as multiple 1×1 convolutions, after which the aggregation of the attention weights and values is computed.
Further, in step S6, the output range of the sigmoid function is (0, 1), with (0, 0.5) as the centre of symmetry: the output approaches 0 as the input approaches negative infinity and approaches 1 as the input approaches positive infinity. Near an input of 0 the gradient is large, so a small change in the input causes a large change in the output; the further the input moves from 0, the smaller the gradient becomes, eventually approaching zero. The formula is as follows:
sigmoid(x) = 1 / (1 + e^(-x)),
where x is the argument and e is the natural constant.
With the above scheme, the off-line handwriting identification method based on the combination of a twin network and multiple channels provides a novel writer-independent off-line handwriting verification model. The model first performs feature extraction through two convolution layers and the dual attention module, then combines the features by channel merging, and finally uses the ACmix discrimination module for the final judgment, effectively improving the accuracy of off-line handwriting identification. Compared with other methods, the invention has the following characteristics:
(1) The invention provides a Multi-Channel Feature Fusion Network (MCFFN) framework, in which multi-dimensional detailed features are extracted from the two compared pictures by a twin network and similarity is then judged through a channel fusion method.
(2) The invention provides a dual handwriting attention module, which extracts features from the gray-scale pictures and their inverse gray-scale pictures and shows stronger feature extraction performance.
(3) The invention applies ACmix to the field of signature verification for the first time, demonstrating its superior performance.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a diagram of dual handwriting attention according to the present invention.
Fig. 3 shows the attention feature maps of the present invention.
FIG. 4 is a plot of the sigmoid function used in the present invention.
Detailed Description
In order to further explain the technical solution of the invention, the invention is described in detail below through specific embodiments.
As shown in fig. 1 to 3, the invention discloses an offline handwriting authentication method based on the integration of a twin network and multiple channels, which comprises the following steps:
s1, performing inverse operation on two input handwriting pictures to obtain inverse gray level pictures of the two input handwriting pictures, wherein four handwriting pictures are used at present, namely a reference picture, a test picture, an inverse gray level picture of the reference picture and an inverse gray level picture of the test picture; the formula of the picture taking operation is as follows:
I_inv(x, y) = 255 - I(x, y), where I(x, y) is the gray value of the original picture at pixel (x, y).
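By way of illustration, the following is a minimal sketch of the inversion in step S1, assuming 8-bit grayscale input; the helper names and file names are placeholders, not part of the patent.

```python
import numpy as np
from PIL import Image

def load_gray(path: str) -> np.ndarray:
    """Load a handwriting picture as an 8-bit grayscale array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.uint8)

def invert_gray(img: np.ndarray) -> np.ndarray:
    """Inverse gray-scale picture: I_inv(x, y) = 255 - I(x, y)."""
    return 255 - img

# Step S1 yields four pictures (file names below are placeholders):
# ref, test = load_gray("reference.png"), load_gray("test.png")
# inputs = [ref, invert_gray(ref), test, invert_gray(test)]
```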
s2, respectively entering four handwriting pictures, namely a reference picture, a test picture, an inverse gray picture of the reference picture and an inverse gray picture of the test picture, into the same network model to extract image feature vectorsWherein +.>、/>And->Respectively representing the height, the width and the channel number, and generating four corresponding feature vectors; the network model adopts a coding network ConvNet to obtain a feature map through a convolution layer, and then nonlinear transformation is carried out through a ReLU activation function。
Step S3: the dual handwriting attention is applied twice to weight the feature vectors, yielding feature vectors Y, each of 128 dimensions.
The structure of a concrete implementation of the dual handwriting attention is shown in fig. 2. The feature maps output by the convolution module for the original picture and its inverse gray-scale picture are input into an up-sampling structure, which up-samples using a nearest-neighbour algorithm and applies a sigmoid-activated convolution. Let h be the output of the first layer of the convolution module in the discriminating stream. In the attention module, h is multiplied element-wise by g and h is then added, producing the intermediate attention measure h·g + h, where "·" denotes element-wise multiplication. The subsequent Global Average Pooling (GAP) layer and fully connected (FC) layer with sigmoid activation receive the intermediate attention measure and output a weight vector f. Multiplying the intermediate attention measure of each element of each channel by f yields the final attention mask (h·g + h) × f, and the masks are fed back to the second layer of the other feature map respectively. The original pictures and the corresponding weighted attention feature maps of the dual handwriting attention are shown in fig. 3.
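The following sketch is one possible reading of the dual handwriting attention described above, under the assumption that the gate g is derived from the paired stream by max pooling, nearest-neighbour up-sampling and a sigmoid-activated 1×1 convolution; the layer sizes are assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn

class DualHandwritingAttention(nn.Module):
    """Sketch of the dual handwriting attention: h comes from one stream,
    the gate g is built from the paired (inverse-gray) stream, and the
    output is the attention mask (h*g + h) * f described in the text."""
    def __init__(self, channels: int = 128):
        super().__init__()
        # gate g: max pooling -> nearest-neighbour up-sampling -> sigmoid-activated 1x1 conv
        self.make_gate = nn.Sequential(
            nn.MaxPool2d(kernel_size=2),
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # global average pooling + fully connected layer with sigmoid -> weight vector f
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, h: torch.Tensor, paired: torch.Tensor) -> torch.Tensor:
        g = self.make_gate(paired)               # gate from the other stream, (N, C, H, W)
        inter = h * g + h                        # intermediate attention measure h.g + h
        f = self.fc(self.gap(inter).flatten(1))  # weight vector f, shape (N, C)
        return inter * f[:, :, None, None]       # final attention mask (h.g + h) x f

h = torch.rand(1, 128, 32, 32)       # feature map of the original-picture stream
paired = torch.rand(1, 128, 32, 32)  # feature map of the paired inverse-gray stream
mask = DualHandwritingAttention(128)(h, paired)
```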
Step S4: the 128-dimensional feature maps of the reference picture, the test picture, the inverse gray-scale picture of the reference picture and the inverse gray-scale picture of the test picture are fused into a single 512-dimensional feature map.
The four pictures are fused by merging the previously extracted 128-dimensional feature maps into a 512-dimensional feature map through channel fusion, the merging order being a reference picture, an inverse gray-scale test picture and a test picture; the fusion is performed with the concatenate operation.
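Channel fusion here is a plain concatenation along the channel axis, as in the short sketch below (the random tensors stand in for the four 128-channel feature maps).

```python
import torch

# Stand-ins for the four 128-channel feature maps produced by the attention stage.
feats = [torch.rand(1, 128, 32, 32) for _ in range(4)]

fused = torch.cat(feats, dim=1)      # channel fusion: (N, 128, H, W) x 4 -> (N, 512, H, W)
assert fused.shape[1] == 4 * 128
```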
Step S5: the fused feature map is discriminated through self-attention and convolution operations.
Briefly, the convolution and self-attention operations work as follows: a convolution with kernel size k×k can be decomposed into k² individual 1×1 convolutions followed by shift and summation operations, while the projections of the query, key and value in the self-attention mechanism of ACmix are interpreted as multiple 1×1 convolutions, after which the aggregation of the attention weights and values is computed.
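ACmix refers to the published hybrid convolution/self-attention module; its reference implementation is not reproduced here. The block below is a simplified stand-in that follows the same idea, with a shared 1×1 projection feeding a self-attention branch and a convolution branch whose outputs are blended by learnable scalars; it is an assumption-laden sketch, not the official ACmix code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAttentionMix(nn.Module):
    """Simplified stand-in for the ACmix discrimination block (assumption):
    a shared 1x1 projection produces query/key/value maps that feed a
    single-head self-attention branch and a 3x3 convolution branch."""
    def __init__(self, channels: int = 512, inner: int = 64):
        super().__init__()
        self.qkv = nn.Conv2d(channels, 3 * inner, kernel_size=1)   # shared 1x1 projections
        self.conv_branch = nn.Conv2d(3 * inner, channels, kernel_size=3, padding=1)
        self.attn_out = nn.Conv2d(inner, channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.ones(1))                   # learnable branch weights
        self.beta = nn.Parameter(torch.ones(1))
        self.inner = inner

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, _, hgt, wid = x.shape
        proj = self.qkv(x)                                         # (N, 3*inner, H, W)
        q, k, v = proj.chunk(3, dim=1)
        q = q.flatten(2).transpose(1, 2)                           # (N, HW, inner)
        k = k.flatten(2)                                           # (N, inner, HW)
        v = v.flatten(2).transpose(1, 2)                           # (N, HW, inner)
        weights = F.softmax(q @ k / self.inner ** 0.5, dim=-1)     # attention over spatial positions
        attn = (weights @ v).transpose(1, 2).reshape(n, self.inner, hgt, wid)
        conv = self.conv_branch(proj)                              # convolution branch reuses the projections
        return self.alpha * self.attn_out(attn) + self.beta * conv

head = ConvAttentionMix(channels=512)
out = head(torch.rand(1, 512, 16, 16))   # stand-in for the fused 512-channel feature map
```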
Step S6: a sigmoid operation is performed on the output feature vector, and the result is used as a confidence level to judge whether the two input pictures were written by the same person.
The sigmoid function has several notable characteristics: (1) it is continuous, smooth and strictly monotonic; (2) its output range is (0, 1), with (0, 0.5) as the centre of symmetry; (3) the output approaches 0 as the input approaches negative infinity and approaches 1 as the input approaches positive infinity; (4) near an input of 0 the output changes markedly, and the further the input moves from 0, the flatter the curve becomes. The formula is as follows:
sigmoid(x) = 1 / (1 + e^(-x)).
The function curve is shown in fig. 4.
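To close the pipeline, a minimal sketch of step S6 follows: the discriminated feature map is reduced to a scalar and passed through a sigmoid. The pooling/linear reduction and the 0.5 decision threshold are assumptions, since the patent only states that the sigmoid output is used as a confidence level.

```python
import torch
import torch.nn as nn

# Reduce the discriminated feature map to a single logit, then apply sigmoid.
decision_head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 1),
)

def same_writer_confidence(feature_map: torch.Tensor) -> torch.Tensor:
    """Confidence in (0, 1); values near 1 suggest the reference and test
    signatures were written by the same person."""
    return torch.sigmoid(decision_head(feature_map)).squeeze(1)

conf = same_writer_confidence(torch.rand(1, 512, 16, 16))  # stand-in feature map
same_writer = bool(conf.item() > 0.5)                       # 0.5 threshold is illustrative
```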
The above embodiments and drawings do not limit the form or style of the present invention, and any suitable variations or modifications made by those skilled in the art should be construed as not departing from the scope of the present invention.
Claims (5)
1. An off-line handwriting identification method based on the integration of a twin network and multiple channels, characterized by comprising the following steps:
Step S1: performing an inversion operation on the two input handwriting pictures to obtain their inverse gray-scale pictures, giving four handwriting pictures in total, namely a reference picture, a test picture, the inverse gray-scale picture of the reference picture and the inverse gray-scale picture of the test picture;
Step S2: feeding the four handwriting pictures, namely the reference picture, the test picture, the inverse gray-scale picture of the reference picture and the inverse gray-scale picture of the test picture, into the same network model to extract image feature vectors of size H × W × C, where H, W and C respectively denote the height, the width and the number of channels, generating four corresponding feature vectors;
Step S3: applying the dual handwriting attention twice to weight the feature vectors, obtaining feature vectors Y, each of 128 dimensions; the specific method is as follows: the feature maps output by the convolution module for the original picture and its inverse gray-scale picture are input into an up-sampling structure, which up-samples using a nearest-neighbour algorithm and applies a sigmoid-activated convolution; let h be the output of the first layer of the convolution module in the discriminating stream; in the attention module, h is multiplied element-wise by g and h is then added, producing the intermediate attention measure h·g + h, where "·" denotes element-wise multiplication and g is a vector generated from maximum pooling; the subsequent global average pooling layer and fully connected layer with sigmoid activation receive the intermediate attention measure and output a weight vector f; multiplying the intermediate attention measure of each element of each channel by f yields the final attention mask (h·g + h) × f, which is fed back to the second layer of the other feature map;
Step S4: fusing the 128-dimensional feature maps of the reference picture, the test picture, the inverse gray-scale picture of the reference picture and the inverse gray-scale picture of the test picture into a single 512-dimensional feature map;
Step S5: discriminating the fused feature map through self-attention and convolution operations; the convolution operation decomposes a convolution with kernel size k×k into k² individual 1×1 convolutions followed by shift and summation operations; the self-attention operation interprets the projections of the query, key and value in the self-attention mechanism of ACmix as multiple 1×1 convolutions and then computes the aggregation of the attention weights and values;
Step S6: performing a sigmoid operation on the output feature vector and using the result as a confidence level to judge whether the two input pictures were written by the same person.
2. An offline handwriting authentication method based on twin network and multi-channel fusion as claimed in claim 1, wherein: in step S1, the formula of the handwriting picture inversion operation is:
I_inv(x, y) = 255 - I(x, y).
3. An offline handwriting authentication method based on twin network and multi-channel fusion as claimed in claim 1, wherein: in step S2, the network model uses the encoding network ConvNet to obtain a feature map through a convolution layer, and then performs nonlinear transformation through a ReLU activation function.
4. An offline handwriting authentication method based on twin network and multi-channel fusion as claimed in claim 1, wherein: in step S4, the operation of fusing the four pictures merges the 128-dimensional feature maps extracted in the previous step into a 512-dimensional feature map through channel fusion, the merging order being a reference picture, an inverse gray-scale test picture and a test picture.
5. An offline handwriting authentication method based on twin network and multi-channel fusion as claimed in claim 1, wherein: in step S6, the output range of the sigmoid function is (0, 1), with (0, 0.5) as the center of symmetry, the output approaches 0 when the input approaches negative infinity, and the output approaches 1 when the input approaches positive infinity; the formula is as follows:
sigmoid(x) = 1 / (1 + e^(-x)),
where x is an argument and e is a constant.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311797195.2A CN117475519B (en) | 2023-12-26 | 2023-12-26 | Off-line handwriting identification method based on integration of twin network and multiple channels |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117475519A CN117475519A (en) | 2024-01-30 |
CN117475519B (en) | 2024-03-12
Family
ID=89625959
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311797195.2A Active CN117475519B (en) | 2023-12-26 | 2023-12-26 | Off-line handwriting identification method based on integration of twin network and multiple channels |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117475519B (en) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10878298B2 (en) * | 2019-03-06 | 2020-12-29 | Adobe Inc. | Tag-based font recognition by utilizing an implicit font classification attention neural network |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113095156A (en) * | 2021-03-23 | 2021-07-09 | 西安深信科创信息技术有限公司 | Double-current network signature identification method and device based on inverse gray scale mode |
CN113779643A (en) * | 2021-09-24 | 2021-12-10 | 重庆傲雄在线信息技术有限公司 | Signature handwriting recognition system and method based on pre-training technology and storage medium |
CN114220178A (en) * | 2021-12-16 | 2022-03-22 | 重庆傲雄在线信息技术有限公司 | Signature identification system and method based on channel attention mechanism |
CN114360071A (en) * | 2022-01-11 | 2022-04-15 | 北京邮电大学 | Method for realizing off-line handwritten signature verification based on artificial intelligence |
CN114898472A (en) * | 2022-04-26 | 2022-08-12 | 华南理工大学 | Signature identification method and system based on twin vision Transformer network |
CN115311746A (en) * | 2022-07-22 | 2022-11-08 | 浙江工业大学 | Off-line signature authenticity detection method based on multi-feature fusion |
CN116259062A (en) * | 2023-04-04 | 2023-06-13 | 西南政法大学 | CNN handwriting identification method based on multichannel and attention mechanism |
Also Published As
Publication number | Publication date |
---|---|
CN117475519A (en) | 2024-01-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |