CN113326809A - Off-line signature identification method and system based on three-channel neural network - Google Patents

Off-line signature identification method and system based on three-channel neural network

Info

Publication number
CN113326809A
Authority
CN
China
Prior art keywords
image
neural network
channel
signature
comparison
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110733922.3A
Other languages
Chinese (zh)
Inventor
黎文江
邢永康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University
Original Assignee
Chongqing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202110733922.3A priority Critical patent/CN113326809A/en
Publication of CN113326809A publication Critical patent/CN113326809A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/30 Writer recognition; Reading and verifying signatures
    • G06V40/37 Writer recognition; Reading and verifying signatures based only on signature signals such as velocity or pressure, e.g. dynamic signature recognition
    • G06V40/382 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention belongs to the technical field of signature identification and discloses an off-line signature identification method and system based on a three-channel neural network. In this scheme, a three-channel image is used as the input for training the neural network model to obtain an identification result; the method converges quickly during training and achieves high accuracy.

Description

Off-line signature identification method and system based on three-channel neural network
Technical Field
The invention belongs to the technical field of signature identification, and relates to an off-line signature identification method and system based on a three-channel neural network.
Background
Handwritten signature authentication usually determines whether two given signature samples were written by the same person. Research methods for handwritten signature identification can be divided into two categories according to how the data are acquired: off-line and on-line handwritten signature authentication. In off-line authentication, the signature is captured directly by a scanner in gray-scale format and stored as a static image. Since most signature data are acquired as off-line static images, and since on-line data can also be converted into static-image format, off-line handwritten signature identification has wider application scenarios.
One existing off-line handwritten signature identification method adopts a two-channel neural network model: the output layer of the network has 2 neurons, a sample pair is stacked into a single two-channel image and input to the network for training, the distance of the sample pair is then computed from the 2 output values, and the judgment is obtained by comparing that distance with a threshold. However, the two-channel network adopted in the prior art converges slowly during training, which hinders its use, and its identification accuracy is low, so it cannot meet practical requirements.
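The prior-art decision rule described above can be sketched as follows. This is a hedged illustration, not the patent's implementation: the specific distance measure (Euclidean norm here) and all numeric values are assumptions.

```python
import numpy as np

def stack_pair(sig, ref):
    """Stack two W×H grayscale images into one W×H×2 two-channel input."""
    return np.stack([sig, ref], axis=-1)

def decide(outputs, threshold=1.0):
    """outputs: the 2 values from the network's output layer.
    A distance is computed from them (Euclidean norm is an assumption)
    and compared with a threshold to obtain the judgment."""
    distance = float(np.linalg.norm(outputs))
    return "same" if distance < threshold else "different"

pair = stack_pair(np.zeros((8, 8)), np.ones((8, 8)))
print(pair.shape)            # (8, 8, 2)
print(decide([0.1, 0.2]))    # small distance -> same writer
print(decide([3.0, 4.0]))    # large distance -> different writers
```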
Disclosure of Invention
The invention aims to provide an off-line signature identification method and system based on a three-channel neural network, which have the advantages of high convergence speed and high precision.
In order to achieve the purpose, the basic scheme of the invention is as follows: an off-line signature identification method based on a three-channel neural network comprises the following steps:
acquiring a signature image and a comparison image, and converting the signature image and the comparison image into a gray level image or a binary image;
comparing the signature image with the comparison image to obtain a difference image;
setting a three-channel neural network, respectively inputting the signature image and the comparison image into the first two channels of the three-channel neural network, and inputting the difference image into the third channel of the three-channel neural network;
the signature image, the comparison image and the difference image are superposed on three channels to obtain a three-channel image;
and (5) taking the three-channel image as input, training a neural network model, and obtaining an identification result.
The working principle and beneficial effects of this basic scheme are as follows: the difference image between the signature image and the comparison image is also treated as important basic data, and the three-channel neural network is introduced into the field of off-line signature identification. The first two channels of the network carry the two signature images, and the third channel carries their difference image. This is equivalent to adding prior information to the input, so the neural network training converges faster, the training effect is optimized, and identification accuracy is improved.
Further, the signature image and the comparison image are gray-scale or binary images, each of size W×H×1;
when the signature image is compared with the comparison image, the data read into the model has size W×H×2;
the difference image has size W×H×1, and superposing the signature image, the comparison image and the difference image yields a three-channel image of size W×H×3.
The three-channel image is analogous to an RGB image: when training or predicting on image data in deep learning, the image is read in RGB mode and input into the neural network model, yielding a better identification effect.
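The channel stacking described above can be sketched minimally as follows; numpy is an assumed tool, and the absolute-difference operator is one plausible choice for the unspecified comparison step.

```python
import numpy as np

def make_three_channel(sig, ref):
    """sig, ref: W×H grayscale arrays in [0, 255].
    Returns a W×H×3 image: channel 0 = signature, channel 1 = reference,
    channel 2 = their difference image (absolute difference assumed)."""
    sig = sig.astype(np.float32)
    ref = ref.astype(np.float32)
    diff = np.abs(sig - ref)                    # W×H difference channel
    return np.stack([sig, ref, diff], axis=-1)  # W×H×3, RGB-like layout

sig = np.zeros((4, 4), dtype=np.uint8)          # blank page
ref = np.full((4, 4), 255, dtype=np.uint8)      # fully inked page
img = make_three_channel(sig, ref)
print(img.shape)  # (4, 4, 3)
```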
Further, a hole convolution layer is added to the neural network.
Training the model with the determined hole convolution can significantly improve model performance.
Further, the hole convolution layer is located in the mix2 layer of the baseline network of the neural network.
Determining the specific parameters and position of the hole convolution ensures that it improves model performance.
Further, the specific method of the hole convolution is as follows:
the dilation (hole) rates are set in a sawtooth structure, with the maximum dilation rate given by:
M_i = max[M_{i+1} - 2r_i, M_{i+1} - 2(M_{i+1} - r_i), r_i]
where M_i denotes the maximum dilation rate of the i-th layer and r_i denotes the dilation rate of the i-th layer; with the convolution kernel size set to k×k, the design condition is satisfied when M_2 ≤ k.
Adding this zigzag hole-convolution structure to the neural network improves network performance.
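The design condition above can be checked with a few lines of code. This is a sketch following the stated recurrence, starting from M_n = r_n; the rate list [1, 2, 5] is the sawtooth example given later in the description, and [2, 4, 8] is a hypothetical counterexample.

```python
def second_layer_max_dilation(rates):
    """Apply M_i = max[M_{i+1} - 2*r_i, M_{i+1} - 2*(M_{i+1} - r_i), r_i]
    downward from M_n = r_n and return M_2."""
    M = rates[-1]                           # M_n = r_n
    for i in range(len(rates) - 2, 0, -1):  # compute M_{n-1}, ..., M_2
        r = rates[i]
        M = max(M - 2 * r, M - 2 * (M - r), r)
    return M

def hdc_ok(rates, k):
    """Design condition: M_2 <= k for a k×k convolution kernel."""
    return second_layer_max_dilation(rates) <= k

print(hdc_ok([1, 2, 5], 3))  # True: sawtooth rates satisfy the condition
print(hdc_ok([2, 4, 8], 3))  # False: rates with a common factor leave holes
```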
Further, an attention mechanism is provided in the neural network.
The identification effect of the neural network model is improved by using a channel attention mechanism.
Further, the attention mechanism is located at the last layer of the neural network and is used for giving different weights to the handwriting features extracted by the neural network model.
Placing the attention mechanism at the end of the entire network, together with optimizing the model's hyper-parameters, achieves performance exceeding the baseline model.
Further, the specific attention mechanism adopts the SENet module, as follows:
taking X as the network input and u as the feature map extracted by the network, a feature map of size W×H×C is passed through the function F_sq, which performs the Squeeze operation to obtain a vector of size 1×1×C:
z_c = F_sq(u_c) = (1/(W×H)) Σ_{i=1..W} Σ_{j=1..H} u_c(i, j)
where u_c is the feature map of the c-th channel;
the 1×1×C vector is then converted into a new 1×1×C vector carrying weight information, i.e. the Excitation operation; the formula of the gate mechanism is:
F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))
where δ is the ReLU activation function, σ is the sigmoid activation function, z denotes the squeezed feature vector, and W denotes the parameters, which are multiplied with the feature map to obtain the feature map of the next step; g is an intermediate calculation function; W_1 compresses the 1×1×C vector to 1×1×(C/r), where r is the dimensionality-reduction coefficient, and W_2 restores the vector to 1×1×C;
the resulting 1×1×C vector is the required channel-weight vector, and multiplying it channel-wise with the W×H×C feature map yields the output image feature map X̃.
The SENet module establishes an attention mechanism, learns the importance of each feature channel, explicitly models the correlations among different feature channels, enhances the features that are effective for the prediction result, suppresses the ineffective ones, and effectively improves model performance.
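The squeeze-and-excitation computation above can be sketched in numpy. This is a hedged illustration: the weight matrices W1 and W2 are random placeholders (in practice they are learned), and the shapes follow the formulas in the text.

```python
import numpy as np

def relu(x): return np.maximum(x, 0.0)
def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))

def se_block(u, W1, W2):
    """u: H×W×C feature map; W1: (C/r)×C and W2: C×(C/r) placeholder weights.
    Squeeze: global average pool per channel -> z of length C.
    Excitation: s = sigmoid(W2 @ relu(W1 @ z)) gives one weight per channel.
    Output: each channel of u scaled by its weight s_c."""
    z = u.mean(axis=(0, 1))           # squeeze: 1×1×C vector
    s = sigmoid(W2 @ relu(W1 @ z))    # excitation: channel weights in (0, 1)
    return u * s                      # broadcast multiply over H×W

rng = np.random.default_rng(0)
C, r = 8, 2
u = rng.standard_normal((4, 4, C))
W1 = rng.standard_normal((C // r, C))
W2 = rng.standard_normal((C, C // r))
out = se_block(u, W1, W2)
print(out.shape)  # (4, 4, 8): same shape, channels reweighted
```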
Further, 5-fold cross validation is performed on the three-channel neural network to evaluate its performance; the distribution proportion of the training set, validation set and test set is set to 0.75 : 0.05 : 0.20, all data are divided into 5 parts, and the test set is selected from them in turn, completing the 5-fold partition of the data set.
The 5-fold cross validation evaluates the performance of the neural network so that the model can then be optimized.
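The partition described above can be sketched as follows. This is an assumed realization of the 0.75 : 0.05 : 0.20 split; the patent does not specify how the validation subset is drawn from the non-test portion, so taking the first 5% is a placeholder choice.

```python
def five_fold_splits(ids):
    """ids: list of sample (or writer) identifiers, length divisible by 20.
    Each fold takes one fifth (20%) as the test set; the remaining 80% is
    split into 75% train and 5% validation of the full data."""
    n = len(ids)
    fold = n // 5
    splits = []
    for k in range(5):
        test = ids[k * fold:(k + 1) * fold]
        rest = ids[:k * fold] + ids[(k + 1) * fold:]
        val = rest[:n // 20]     # 5% of all data
        train = rest[n // 20:]   # 75% of all data
        splits.append((train, val, test))
    return splits

splits = five_fold_splits(list(range(100)))
train, val, test = splits[0]
print(len(train), len(val), len(test))  # 75 5 20
```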
The invention also provides an authentication system based on the off-line signature authentication method, which comprises an image acquisition module, a storage module, a comparison module and an authentication module;
the image acquisition module is used for acquiring a real-time handwritten signature image;
the storage module is used for storing the comparison image;
the first input end of the comparison module is connected with the image acquisition module, and the second input end of the comparison module is connected with the storage module and used for acquiring a difference image;
the identification module is respectively connected with the output ends of the image acquisition module, the storage module and the comparison module and is used for outputting the identification result of the signature.
The identification system carries out signature identification by collecting signature images, and is simple to use and easy to operate.
Drawings
FIG. 1 is a schematic flow chart of an off-line signature authentication method based on a three-channel neural network according to the present invention;
fig. 2 is a schematic flow chart of the off-line signature authentication system based on the three-channel neural network of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used merely for convenience of description and for simplicity of description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, are not to be construed as limiting the present invention.
In the description of the present invention, unless otherwise specified and limited, it is to be noted that the terms "mounted," "connected," and "connected" are to be interpreted broadly, and may be, for example, a mechanical connection or an electrical connection, a communication between two elements, a direct connection, or an indirect connection via an intermediate medium, and specific meanings of the terms may be understood by those skilled in the art according to specific situations.
As shown in fig. 1, the invention discloses an off-line signature authentication method based on a three-channel neural network, which comprises the following steps:
acquiring a signature image (the signature to be predicted) and a comparison image (a reference signature), and converting both into gray-scale or binary images; the signature image and the comparison image are thus gray-scale or binary images of size W×H×1;
comparing the signature image with the comparison image to obtain a difference image; when they are compared, the data read into the model has size W×H×2, and the difference image has size W×H×1;
setting a three-channel neural network, respectively inputting the signature image and the comparison image into the first two channels of the three-channel neural network, and inputting the difference image into the third channel of the three-channel neural network;
the signature image, the comparison image and the difference image are superposed along three channels to obtain a three-channel image of size W×H×3; channels 1, 2 and 3 can be likened to the R, G and B channels of an RGB image, respectively.
Taking the three-channel image as input, the neural network model is trained to obtain the identification result. The difference image of the third channel serves as supervision information: the difference between the signature image and the comparison image is obtained and a threshold range is set for this difference (exceeding the threshold fails the training); the third-channel difference image is initially supplied as supervision information and is fitted by the neural network.
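The image acquisition and conversion steps above can be sketched as follows. The patent does not prescribe a gray-scale conversion method or a binarization threshold, so the ITU-R BT.601 luma weights and the fixed threshold of 128 here are assumptions.

```python
import numpy as np

def to_gray(rgb):
    """Convert a W×H×3 RGB scan to a W×H gray-scale image
    (BT.601 weights assumed)."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def binarize(gray, threshold=128):
    """Optional binarization: pixels at or above the threshold become 255."""
    return (gray >= threshold).astype(np.uint8) * 255

rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [255, 255, 255]   # one white pixel on a black page
gray = to_gray(rgb)
bw = binarize(gray)
print(bw)
```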
A hole convolution layer is added to the neural network; preferably, the hole convolution layer is located in the mix2 layer of the Inception V3 network. The specific hole convolution method is as follows:
the dilation (hole) rates are set in a sawtooth structure, such as [1, 2, 5], with the maximum dilation rate given by:
M_i = max[M_{i+1} - 2r_i, M_{i+1} - 2(M_{i+1} - r_i), r_i]
where M_i denotes the maximum dilation rate of the i-th layer and r_i denotes the dilation rate of the i-th layer; with the convolution kernel size set to k×k, the design condition is satisfied when M_2 ≤ k. Training the model with the determined parameters and position of the hole convolution significantly improves model performance.
An attention mechanism is arranged in the neural network; such a mechanism can conveniently be added to any network structure. Preferably, the attention mechanism is located at the last layer of the neural network and gives different weights to the handwriting features extracted by the model, so that the supervision information of the additional channel is better integrated into the output feature map. The specific attention mechanism adopts the SENet module, as follows:
taking X as the network input and u as the feature map extracted by the network, a feature map of size W×H×C is passed through the function F_sq (usually a global average pooling layer), which performs the Squeeze operation to obtain a vector of size 1×1×C:
z_c = F_sq(u_c) = (1/(W×H)) Σ_{i=1..W} Σ_{j=1..H} u_c(i, j)
where u denotes the feature map and c the channel index (after feature extraction the number of channels is typically 256); u_c is the feature map of the c-th channel;
the function F_ex (usually two fully-connected layers) generates a weight for each feature channel through the gate mechanism of the neural network, converting the 1×1×C vector into a new 1×1×C vector carrying weight information, i.e. the Excitation operation; the formula of the gate mechanism is:
F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))
where δ is the ReLU activation function, σ is the sigmoid activation function, z denotes the squeezed feature vector, and W denotes the parameters, which are multiplied with the feature map to obtain the feature map of the next step; g is an intermediate calculation function computed from the two parameter layers W_1 and W_2; W_1 compresses the 1×1×C vector to 1×1×(C/r), where r is the dimensionality-reduction coefficient, and W_2 restores the vector to 1×1×C;
the resulting 1×1×C vector is the required channel-weight vector, and multiplying it channel-wise with the W×H×C feature map yields the output image feature map X̃ of the SE module. For example, given an H×W×C feature map and a 1×1×C weight vector computed by the SE module with values 1, 2, 3, …, C, the feature map of channel 1 is multiplied by 1, that of channel 2 by 2, and so on. Through the SE module, the information of the difference map is captured and fused into the other positions. For example, after the W×H×3 three-channel image passes through the neural network, the feature map becomes W/64 × H/64 × 30, where 30 is the number of channels; the SE module computes the channel attention values, and the information of the difference image interacts with the other channels, so that the neural network training converges faster, the training effect is optimized, and identification accuracy improves.
The SENet module is added before the last layer of the network outputs the prediction result, in order to enhance features that are effective for the prediction and suppress ineffective ones; 5-fold cross validation on the data set shows that adding the attention mechanism brings a certain improvement in model performance. 5-fold cross validation is performed on the three-channel neural network to evaluate its performance; the specific position and parameter setting of the hole convolution layer are likewise determined through repeated 5-fold cross-validation experiments to ensure the optimal setting.
For the 5-fold cross validation, the distribution proportion of the training, validation and test sets is set to 0.75 : 0.05 : 0.20, so that the test set accounts for 20%. All data are then divided into 5 parts, and the test set is selected from them in turn, completing the 5-fold partition of the data set. For example, the Bengali subset of the BHSig260 data set contains signature data of 100 persons: 75 serve as the training set, 5 as the validation set and 20 as the test set. The test sets are obtained by dividing all data into five equal, mutually non-overlapping parts, which is equivalent to training five models. In the formal experiment, all five models score the data under test, and the five results are fused with equal weights to obtain the final output.
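The equal-weight fusion of the five fold-trained models can be sketched as follows; the score values and the 0.5 decision threshold are hypothetical, since the patent does not state them.

```python
def fuse(scores, threshold=0.5):
    """scores: one similarity score per fold-trained model (5 here).
    The final output averages the scores with equal weights and
    compares the average against an assumed decision threshold."""
    avg = sum(scores) / len(scores)
    return avg, ("genuine" if avg >= threshold else "forged")

avg, verdict = fuse([0.9, 0.8, 0.7, 0.6, 0.9])
print(round(avg, 2), verdict)  # 0.78 genuine
```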
As shown in fig. 2, the present invention further provides an authentication system based on the offline signature authentication method of the present invention, which comprises an image acquisition module, a storage module, a comparison module and an authentication module;
the image acquisition module is used for acquiring a real-time handwritten signature image;
the storage module is used for storing the comparison image;
the first input end of the comparison module is electrically connected with the output end of the image acquisition module, and the second input end is electrically connected with the output end of the storage module; the comparison module compares the signature image with the comparison image, subtracting the stored comparison image (signed earlier by the user) from the signature image to be identified to obtain the difference image;
the identification module is electrically connected with the output ends of the image acquisition module, the storage module and the comparison module, respectively; it superposes the signature image, the comparison image and the difference image along three channels in a manner analogous to RGB, takes the resulting three-channel image as input to train the neural network, and the neural network outputs the identification result of the signature.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (10)

1. An off-line signature authentication method based on a three-channel neural network is characterized by comprising the following steps:
acquiring a signature image and a comparison image, and converting the signature image and the comparison image into a gray level image or a binary image;
comparing the signature image with the comparison image to obtain a difference image;
setting a three-channel neural network, respectively inputting the signature image and the comparison image into the first two channels of the three-channel neural network, and inputting the difference image into the third channel of the three-channel neural network;
the signature image, the comparison image and the difference image are superposed on three channels to obtain a three-channel image;
and (5) taking the three-channel image as input, training a neural network model, and obtaining an identification result.
2. The off-line signature authentication method based on the three-channel neural network as claimed in claim 1, wherein the signature image and the comparison image are gray-scale or binary images of size W×H×1;
when the signature image is compared with the comparison image, the data read into the model has size W×H×2;
the difference image has size W×H×1, and the three-channel image obtained by superposing the signature image, the comparison image and the difference image has size W×H×3.
3. The off-line signature identification method based on the three-channel neural network as claimed in claim 1, wherein a hole convolution layer is added in the neural network.
4. The method of claim 3, wherein the hole convolution layer is located in the mix2 layer of the baseline network of the neural network.
5. The off-line signature identification method based on the three-channel neural network as claimed in claim 3, wherein the specific hole convolution method is as follows:
setting the dilation (hole) rates in a sawtooth structure, with the maximum dilation rate given by:
M_i = max[M_{i+1} - 2r_i, M_{i+1} - 2(M_{i+1} - r_i), r_i]
wherein M_i denotes the maximum dilation rate of the i-th layer and r_i denotes the dilation rate of the i-th layer; with the convolution kernel size set to k×k, the design condition is satisfied when M_2 ≤ k.
6. The off-line signature authentication method based on the three-channel neural network as claimed in claim 1, wherein an attention mechanism is provided in the neural network.
7. The off-line signature authentication method based on the three-channel neural network as claimed in claim 6, wherein the attention mechanism is located at the last layer of the neural network and is used for giving different weights to handwriting features extracted by the neural network model.
8. The off-line signature authentication method based on the three-channel neural network as claimed in claim 6, wherein the specific attention mechanism adopts the SENet module, as follows:
taking X as the network input and u as the feature map extracted by the network, a feature map of size W×H×C is passed through the function F_sq, which performs the Squeeze operation to obtain a vector of size 1×1×C:
z_c = F_sq(u_c) = (1/(W×H)) Σ_{i=1..W} Σ_{j=1..H} u_c(i, j)
wherein u_c is the feature map of the c-th channel;
converting the 1×1×C vector into a new 1×1×C vector with weight information, i.e. the Excitation operation, the formula of the gate mechanism being:
F_ex(z, W) = σ(g(z, W)) = σ(W_2 δ(W_1 z))
wherein δ is the ReLU activation function, σ is the sigmoid activation function, z denotes the squeezed feature vector, and W denotes the parameters, which are multiplied with the feature map to obtain the feature map of the next step; g is an intermediate calculation function; W_1 compresses the 1×1×C vector to 1×1×(C/r), where r is the dimensionality-reduction coefficient, and W_2 restores the vector to 1×1×C;
the obtained 1×1×C vector is the required channel-weight vector, which is multiplied channel-wise with the W×H×C feature map to obtain the output image feature map X̃.
9. The off-line signature authentication method based on the three-channel neural network as claimed in claim 1, wherein 5-fold cross validation is performed on the three-channel neural network to evaluate its performance; the distribution proportion of the training set, validation set and test set is set to 0.75 : 0.05 : 0.20, all data are divided into 5 parts, and the test set is selected from them in turn to complete the 5-fold partition of the data set.
10. An authentication system based on the off-line signature authentication method of one of claims 1 to 9, comprising an image acquisition module, a storage module, a comparison module and an authentication module;
the image acquisition module is used for acquiring a real-time handwritten signature image;
the storage module is used for storing the comparison image;
the first input end of the comparison module is connected with the image acquisition module, and the second input end of the comparison module is connected with the storage module and used for acquiring a difference image;
the identification module is respectively connected with the output ends of the image acquisition module, the storage module and the comparison module and is used for outputting the identification result of the signature.
CN202110733922.3A 2021-06-30 2021-06-30 Off-line signature identification method and system based on three-channel neural network Pending CN113326809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110733922.3A CN113326809A (en) 2021-06-30 2021-06-30 Off-line signature identification method and system based on three-channel neural network


Publications (1)

Publication Number Publication Date
CN113326809A true CN113326809A (en) 2021-08-31

Family

ID=77423505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110733922.3A Pending CN113326809A (en) 2021-06-30 2021-06-30 Off-line signature identification method and system based on three-channel neural network

Country Status (1)

Country Link
CN (1) CN113326809A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1694130A (en) * 2005-03-24 2005-11-09 上海大学 Identification method of mobile number plate based on three-channel parallel artificial nerve network
CN106529395A (en) * 2016-09-22 2017-03-22 文创智慧科技(武汉)有限公司 Signature image recognition method based on deep brief network and k-means clustering
CN108460089A (en) * 2018-01-23 2018-08-28 哈尔滨理工大学 Diverse characteristics based on Attention neural networks merge Chinese Text Categorization
CN108875787A (en) * 2018-05-23 2018-11-23 北京市商汤科技开发有限公司 A kind of image-recognizing method and device, computer equipment and storage medium
US20190236411A1 (en) * 2016-09-14 2019-08-01 Konica Minolta Laboratory U.S.A., Inc. Method and system for multi-scale cell image segmentation using multiple parallel convolutional neural networks
CN110738141A (en) * 2019-09-26 2020-01-31 五邑大学 vein identification method, device, equipment and storage medium
US20200175313A1 (en) * 2018-12-03 2020-06-04 Samsung Electronics Co., Ltd. Method and apparatus with dilated convolution
US20200300954A1 (en) * 2019-03-22 2020-09-24 Canon Medical Systems Corporation Apparatus and method for deep learning to mitigate artifacts arising in simultaneous multi slice (sms) magnetic resonance imaging (mri)
CN111798436A (en) * 2020-07-07 2020-10-20 浙江科技学院 Salient object detection method based on attention expansion convolution feature fusion
CN112069900A (en) * 2020-08-06 2020-12-11 常熟理工学院 Bill character recognition method and system based on convolutional neural network
CN112287724A (en) * 2019-07-24 2021-01-29 邹伟 Finger vein identity recognition system based on transfer learning and GAN network
CN112837315A (en) * 2021-03-05 2021-05-25 云南电网有限责任公司电力科学研究院 Transmission line insulator defect detection method based on deep learning
CN113011386A (en) * 2021-04-13 2021-06-22 重庆大学 Expression recognition method and system based on equally divided characteristic graphs


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Kai et al.: "Handwritten Chinese character recognition based on an improved LeNet-5 model", Journal of Tianjin University (Science and Technology) *
Wang Haoshuang: "Off-line signature verification based on deep learning", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN109977757B (en) Multi-modal head posture estimation method based on mixed depth regression network
CN113779675B (en) Physical-data driven intelligent shear wall building structure design method and device
CN113011499A (en) Hyperspectral remote sensing image classification method based on double-attention machine system
CN112800876B (en) Super-spherical feature embedding method and system for re-identification
CN108304864B (en) Deep confrontation metric learning method and device
CN105160400A (en) L21 norm based method for improving convolutional neural network generalization capability
CN109872305B (en) No-reference stereo image quality evaluation method based on quality map generation network
CN113178255A (en) Anti-attack method of medical diagnosis model based on GAN
CN108398268A (en) A kind of bearing performance degradation assessment method based on stacking denoising self-encoding encoder and Self-organizing Maps
CN111582225A (en) Remote sensing image scene classification method and device
CN112200121B (en) Hyperspectral unknown target detection method based on EVM and deep learning
CN106682649A (en) Vehicle type recognition method based on deep learning
CN112580782A (en) Channel enhancement-based double-attention generation countermeasure network and image generation method
CN111161224A (en) Casting internal defect grading evaluation system and method based on deep learning
CN112183742A (en) Neural network hybrid quantization method based on progressive quantization and Hessian information
CN111915101A (en) Complex equipment fault prediction method and system based on LPP-HMM method
CN114283285A (en) Cross consistency self-training remote sensing image semantic segmentation network training method and device
CN114091598A (en) Multi-vehicle collaborative environment sensing method based on semantic level information fusion
CN110738132A (en) target detection quality blind evaluation method with discriminant perception capability
CN114565594A (en) Image anomaly detection method based on soft mask contrast loss
CN115496720A (en) Gastrointestinal cancer pathological image segmentation method based on ViT mechanism model and related equipment
CN115564194A (en) Method and system for constructing metering abnormality diagnosis information generation model of smart power grid
CN114529730A (en) Convolutional neural network ground material image classification method based on LBP (local binary pattern) features
CN117173595A (en) Unmanned aerial vehicle aerial image target detection method based on improved YOLOv7

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210831