CN110610174A - Bank card number identification method under complex conditions - Google Patents

Bank card number identification method under complex conditions

Info

Publication number
CN110610174A
CN110610174A (application CN201910643964.0A)
Authority
CN
China
Prior art keywords
bank card
image
card number
network
identifying
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910643964.0A
Other languages
Chinese (zh)
Inventor
任柯燕
李思洋
马杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910643964.0A priority Critical patent/CN110610174A/en
Publication of CN110610174A publication Critical patent/CN110610174A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/225Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on a marking or identifier characterising the area
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition
    • G06V30/14Image acquisition
    • G06V30/146Aligning or centring of the image pick-up or image-field
    • G06V30/1475Inclination or skew detection or correction of characters or of image to be recognised
    • G06V30/1478Inclination or skew detection or correction of characters or of image to be recognised of characters or characters lines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Abstract

The invention provides a method for identifying a bank card number under complex conditions: a system that recognizes the card number of a bank card in an image captured in a complex environment by combining corner detection, a convolutional neural network, and a generative adversarial network. The convolutional neural network is trained on a large number of complex-condition samples, so that once the corners of the picture have been detected and its position and angle corrected, the card number can be detected under comparatively complex shooting and imaging conditions without further image preprocessing. For card numbers with low readability, the system trains the generative adversarial network DCGAN on a large number of complex samples to enhance the card-number image, which is then read by the trained convolutional neural network.

Description

Bank card number identification method under complex conditions
Technical Field
The invention belongs to the field of electronic information and relates to identifying the card number of a bank card.
Background
The bank card is a common physical payment instrument in daily life, and with the rapid development of informatization its electronization has become a mainstream trend. In social and economic activities the card number serves as the identifier of a bank card, and in many cases the user must type it in manually to complete a transaction, which is very inefficient. Having an electronic device such as a mobile phone recognize the card number automatically through its camera is an effective solution. Traditional algorithms, however, require a strictly controlled shooting mode and shooting environment for automatic recognition; the wide variety of card background patterns can change the detection result and cause detection failures, and reliable detection of soiled cards with unclear digits is hard to guarantee. All of this increases the complexity of user operation and defeats the original goal of convenience.
Disclosure of Invention
The invention aims to provide an image-based bank card number identification method.
Using a target-detection algorithm based on a convolutional network together with some auxiliary image-processing algorithms, the invention can detect the card number with the camera of an electronic device against comparatively complex backgrounds and shooting environments, and can apply a generative adversarial network to strengthen detection of stained digits. The system is strongly adaptable, detects reliably, and is flexible and convenient.
The method identifies the four corners of the bank card with corner detection and its four edges with edge detection and Hough line detection, so that the angle and position of the picture are corrected without other preprocessing methods.
The invention is characterized in that, under complex shooting conditions, it can identify a bank card number placed at any angle in the image, and can enhance and then identify a card whose digits are stained. Specifically:
corner detection identifies the four corners of the bank card, and edge detection plus Hough-transform line detection identify its four edges; a convolutional neural network forms the backbone of the recognition algorithm, and the generative adversarial network DCGAN enhances digits whose font is unclear. The card edges may lie at any angle to the image edges: the card need not be perpendicular or parallel to the image border, nor must its four corners be perfectly aligned with the shooting viewpoint. The preprocessing algorithm corrects the card's position from the four detected corner points and crops out a card image whose edges are parallel to the image border, which simplifies subsequent detection. When basic recognition fails to read the digits, the method can, as required, take card numbers whose state lies between perfectly clear and unrecognizable, enhance the card-number image with the trained DCGAN, and then identify the digits of the enhanced image. The coordinate information of each digit is used to screen which digits belong to the card number, and this screening reduces the amount of computation.
The overall flow is shown in fig. 1, and the general steps are as follows:
Step (1): acquire the image; identify the four corners of the bank card by corner detection and its four edges by edge detection and Hough-transform line detection; correct the position and angle of the image to obtain a front-view card image with aligned edges.
Step (2): identify the digits on the preprocessed card one by one with the recognition algorithm based on the convolutional neural network and extract the card number. At the same time, use the position information of each digit produced by the detection network to screen whether a detected digit belongs to the bank card number. If the output probability of every identified digit exceeds a threshold (0.85-0.95), output the card number; if the output probability of some digit is below the threshold, crop the card image for enhancement and go to step (3). The detection network is the convolutional neural network.
Step (3): enhance the card-number image screened out in step (2) with the generative model of the generative adversarial network DCGAN, and output the enhanced card-number image.
Step (4): re-identify the card image enhanced in step (3) with the trained convolutional neural network, i.e. return to step (2), and output the card number to complete identification.
Advantageous effects
The detection method needs neither binarization preprocessing nor separation of background from digit regions in the captured image; instead it uses the convolutional neural network as the backbone of the recognition algorithm to detect every digit in the card area one by one. It therefore handles bank card number identification under complex conditions and improves digit-extraction efficiency.
Drawings
FIG. 1 is an overall block diagram of the present invention.
FIG. 2 shows the perspective and angle correction of the bank card of the present invention.
FIG. 3 is a diagram of the convolutional neural network of the present invention.
FIG. 4 is a schematic diagram of the digit detection of the present invention.
FIG. 5 is a diagram of the DCGAN network architecture of the present invention.
FIG. 6 is a schematic diagram of the enhanced detection of the present invention.
Detailed Description
1 image preprocessing
In the picture preprocessing stage, corner detection identifies the four corners of the bank card, and edge detection plus Hough-transform line detection identify its four edges, so that the angle and position of the picture are corrected, as shown in fig. 2. Specifically:
1) Detect the four corners of the bank card with a corner-detection algorithm. Any corner-detection algorithm may be substituted here.
2) Identify the four edges of the bank card with edge detection and the Hough transform, correcting the angle and position of the card picture; no binarization preprocessing or background/digit-region separation of the acquired image is required.
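The patent leaves the corner detector open. As one illustrative fragment of the correction step, the four detected corner points must be put into a fixed order before a perspective transform (e.g. OpenCV's getPerspectiveTransform) can be fitted. A minimal pure-Python sketch; the function name and ordering convention are our assumptions, not part of the patent:

```python
def order_corners(points):
    """Sort four (x, y) corner points into TL, TR, BR, BL order."""
    # Top-left has the smallest x + y, bottom-right the largest.
    s = sorted(points, key=lambda p: p[0] + p[1])
    tl, br = s[0], s[-1]
    # Of the remaining two, top-right has the larger x - y.
    rest = sorted(s[1:3], key=lambda p: p[0] - p[1])
    bl, tr = rest[0], rest[1]
    return [tl, tr, br, bl]

corners = [(310, 20), (15, 200), (10, 25), (300, 210)]
print(order_corners(corners))
# -> [(10, 25), (310, 20), (300, 210), (15, 200)]
```

With the corners ordered, the homography from the quadrilateral to an axis-aligned rectangle yields the front-view card image described above.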
2 making training set
2.1 convolutional neural network training set Generation
Data augmentation, including but not limited to rotation, cropping, blurring, binarization, image composition, resolution reduction, background substitution, and changes of contrast, saturation, brightness, and perspective, is applied to manually labelled bank-card digits and standard digit pictures (OCR-A/B fonts, the MICR E-13B dataset, etc.). The result is a digit dataset for training the convolutional neural network. Specifically:
1) Unified resolution: the bank-card images are rescaled to 300 x 300 (i.e. both width and height are 300 pixels) so that training and test inputs have the same size. One may either stretch the card image to this size, or keep the original aspect ratio and pad the image with blank color blocks so that the card region of the original image is not distorted.
2) The card-number digits are labelled directly on the bank-card image; one image corresponds to several digits and their positions.
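The size arithmetic behind the aspect-preserving variant of step 1) can be sketched in a few lines of Python. The names and the exact padding rule (centered blank border) are illustrative assumptions:

```python
TARGET = 300

def resize_with_padding(width, height, target=TARGET):
    """Return (new_w, new_h, pad_x, pad_y): scale the longer side to
    `target`, then center the card with blank padding on the short side."""
    scale = target / max(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    pad_x = (target - new_w) // 2   # blank columns added on each side
    pad_y = (target - new_h) // 2   # blank rows added on each side
    return new_w, new_h, pad_x, pad_y

# A bank card is roughly 85.6 x 54 mm; an 856 x 540 photo scales to 300 x 189.
print(resize_with_padding(856, 540))
# -> (300, 189, 0, 55)
```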
2.2 generative adversarial network training set
On top of the convolutional-network dataset, the bank-card images are processed to reduce digit readability, e.g. by random occlusion, blurring, added noise, and cropping. In addition, a certain number of low-readability bank-card images with dirty, blurred, scratched, or partially missing digits are labelled manually. The result is a low-readability digit dataset for training the generative adversarial network.
3 model overview
3.1 convolutional neural networks
A convolutional neural network model for pattern recognition. First comes the input layer; to the computer the input is simply a set of matrices.
Next comes a Convolution Layer. The convolution operation is expressed mathematically as

s(i, j) = sum_{k=1..n} (X_k * W_k)(i, j) = sum_{k=1..n} sum_u sum_v X_k(i + u, j + v) W_k(u, v)

where n is the number of input matrices, X_k is the k-th input matrix, W_k is the k-th sub-kernel matrix of the convolution kernel W, and s(i, j) is the value of the output-matrix element at position (i, j) for kernel W.
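The convolution formula above can be transcribed directly into pure Python. This sketch assumes stride 1 and no padding, which the patent does not specify:

```python
def conv2d(channels_x, channels_w):
    """s(i, j) = sum over channels k and kernel offsets (u, v) of
    X_k(i+u, j+v) * W_k(u, v); stride 1, no padding."""
    n = len(channels_x)
    H, W = len(channels_x[0]), len(channels_x[0][0])
    kh, kw = len(channels_w[0]), len(channels_w[0][0])
    out = [[0] * (W - kw + 1) for _ in range(H - kh + 1)]
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            s = 0
            for k in range(n):          # sum over input channels
                for u in range(kh):
                    for v in range(kw):
                        s += channels_x[k][i + u][j + v] * channels_w[k][u][v]
            out[i][j] = s
    return out

X = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]]   # one 3x3 input channel
K = [[[1, 0], [0, 1]]]                     # one 2x2 kernel channel
print(conv2d(X, K))
# -> [[6, 8], [12, 14]]
```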
Convolutional layers are specific to convolutional neural networks. Within them an activation function is applied; common choices are ReLU, tanh, sigmoid, and softmax.
After the convolutional layer comes a Pooling Layer. The convolution operation extracts features from the input image, but the dimensionality of the feature map is still high. High dimensionality is not only computationally expensive but also prone to overfitting, so a downsampling technique known as pooling is introduced.
Pooling replaces a region of the image with a single value, such as its maximum or its average; the former is called max pooling and the latter mean pooling. Besides reducing the image size, downsampling brings a degree of translational and rotational invariance, since each output value is computed from a whole region and is insensitive to small shifts and rotations. Concretely, the pooling layer partitions the convolved feature map into disjoint blocks and takes the maximum or average within each block to obtain the pooled image.
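Max pooling as just described, over disjoint 2 x 2 blocks, as a short pure-Python sketch:

```python
def max_pool(img, k=2):
    """Replace each non-overlapping k x k block with its maximum."""
    H, W = len(img), len(img[0])
    return [[max(img[i + di][j + dj] for di in range(k) for dj in range(k))
             for j in range(0, W - k + 1, k)]
            for i in range(0, H - k + 1, k)]

img = [[1, 3, 2, 4],
       [5, 6, 1, 0],
       [7, 2, 9, 8],
       [3, 1, 4, 6]]
print(max_pool(img))
# -> [[6, 4], [7, 9]]  (each 2x2 block replaced by its maximum)
```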
The convolution-pooling combination may appear several times in the hidden layers; the number of repetitions is chosen according to the model's needs. Several convolution + pooling layers are followed by Fully Connected layers (FC), which act as the "classifier" of the convolutional neural network.
The network structure adopted in this embodiment is: 4 convolutional layers, 3 pooling layers, and 3 fully connected layers. The input picture is 300 x 300; it passes through four 3 x 3 convolutions of depth 64, 128, 256, and 512 in turn, the feature map after each convolution is downsampled by a 2 x 2 pooling kernel, and the result finally enters the three fully connected layers of 1024, 1024, and 10 neurons. The method is not limited to this structure, which can be adjusted to the actual situation.
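Assuming 'same' padding for the 3 x 3 convolutions (an assumption; the patent does not state the padding), only the three 2 x 2 poolings change the spatial size, which can be traced in a few lines of Python:

```python
def feature_map_sizes(size=300, pools=3):
    """Spatial size of the feature map before and after each 2x2 pooling,
    assuming the 3x3 convolutions preserve spatial size ('same' padding)."""
    sizes = [size]
    for _ in range(pools):
        size = size // 2        # 2x2 pooling with stride 2 (floor division)
        sizes.append(size)
    return sizes

print(feature_map_sizes())
# -> [300, 150, 75, 37]
```

The 37 x 37 x 512 map is then flattened into the 1024-1024-10 fully connected stack described above.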
3.2 generative adversarial network
A GAN mainly comprises two core networks: a generative model and a discriminative model.
The generative model, denoted G, learns from a large number of samples to generate realistic fake samples. In this method the generator takes a noisy picture as input and produces an image in which the blurred card number has become clear.
The discriminative model, denoted D, receives real samples and samples generated by G and tries to tell them apart. D and G play against each other; through learning, G's generation ability and D's discrimination ability both strengthen gradually until convergence.
The objective function for optimizing the generative adversarial network is defined as follows:

min_G max_D V(D, G) = E_{x ~ p_data(x)} [log D(x)] + E_{z ~ p_z(z)} [log(1 - D(G(z)))]

where G(z) denotes mapping the input noise z to data (e.g. a generated picture), D(x) is the probability that x comes from the real data distribution, p_data is the real data distribution, and p_g is the distribution of the generative model G over the training data X. The result of the game between the two models is that G generates data G(z) that passes for real: D can hardly decide whether data generated by G is genuine, i.e. D(G(z)) = 0.5.
The objective function splits into two parts: the loss function of the discriminative model and the loss function of the generative model.
Loss function of discriminant model:
-((1 - y) log(1 - D(G(z))) + y log D(x))
This means that when a normal digit image is input, the model output D(x) should be as close to 1 as possible, and when a low-readability digit is input, the output should approach 0, i.e. D(G(z)) tends to 0. The loss function is computed and minimized with any gradient-descent optimization algorithm. Here y is the class label: y = 1 denotes a real picture and y = 0 a low-readability picture.
The loss function of the generative model is:
(1 - y) log(1 - D(G(z)))
This means the generative model should produce digits as similar to normal digits as possible, minimizing the generator's error.
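The two loss terms can be evaluated numerically. A hedged sketch with one real/fake pair; the function names are ours, not the patent's:

```python
import math

# y = 1 marks a real picture scored D(x); y = 0 marks a generated
# low-readability picture scored D(G(z)).

def discriminator_loss(d_real, d_fake):
    # -( y*log D(x) + (1-y)*log(1 - D(G(z))) ) over one real/fake pair
    return -(math.log(d_real) + math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # (1-y)*log(1 - D(G(z))) with y = 0: minimized when D(G(z)) -> 1,
    # i.e. when the discriminator is fooled.
    return math.log(1.0 - d_fake)

print(round(discriminator_loss(0.9, 0.1), 4))   # confident, correct D
print(round(generator_loss(0.1), 4))            # G not yet fooling D
```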
The structure of the generative adversarial network is shown in fig. 5. The generative model consists of deconvolution (transposed-convolution) layers and the discriminative model of convolutional layers; the exact number of layers in each can be changed according to the observed results.
The network structure adopted in this embodiment is: the generative model has 4 deconvolution layers in total; after each deconvolution, a convolution with a kernel of the same size adds further nonlinearity, and the final output is a 300 x 300 picture activated by Tanh. The discriminative model consists of 5 convolutional layers and 1 fully connected layer: the image produced by the generator and the original image are taken as input, the 5-layer convolutional network extracts their features in turn, and the fully connected layer of 1024 neurons maps them to a probability used to compute the loss function. The method is not limited to this structure, which can be adjusted to the actual situation.
4 model training
4.1 convolutional neural network training
1) Initialize the network weights;
2) propagate the input picture data forward through the convolutional, downsampling, and fully connected layers to obtain the output value;
3) compute the error between the network output and the target value;
4) if the error exceeds the expected value, propagate it back through the network, obtaining the errors of the fully connected, downsampling, and convolutional layers in turn (the per-layer errors can be read as the share of the network's total error that each layer bears); if the error is at or below the expected value, end training;
5) update the weights according to the obtained errors, then return to step 2);
6) the output result is the bank card number. Each digit identification is a ten-class problem; the softmax function computes the probability of each class, and the digit class with the highest probability is output. The number of iterations is chosen experimentally and can be reduced once the change of the loss function is small and stable.
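A toy numeric analogue of steps 1)-5), with a single scalar weight standing in for the whole network (purely illustrative; the point is the loop of forward pass, error check, and weight update until the error drops below the expected value):

```python
def train(x, target, lr=0.1, tolerance=1e-6, max_iters=1000):
    w = 0.0                                  # 1) initialize the weight
    for _ in range(max_iters):
        out = w * x                          # 2) forward pass
        error = out - target                 # 3) error vs. target
        if error * error <= tolerance:       # 4) stop when small enough
            break
        w -= lr * error * x                  # 5) gradient-descent update
    return w

print(round(train(2.0, 6.0), 3))             # converges toward w = 3
```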
4.2 generative adversarial network training
1) Feed a low-readability picture (the "fake" picture referred to below) into the generative model to obtain a batch of fake data, denoted G(z);
2) randomly sample a real picture and record it as real data x;
3) take the data produced in steps 1) and 2) as input to the discriminative network (its input therefore consists of the two classes, real and fake); its output is the probability that the input is real picture data, 1 for real and 0 for fake;
4) compute the loss functions from the obtained probability values;
5) update the model parameters by back-propagation from the loss functions of the discriminative and generative models (the discriminator's parameters are updated first, then the generator's, using resampled noise data);
6) the output result is the enhanced bank-card-number picture. The number of iterations is chosen experimentally and can be reduced once the change of the loss function is small and stable.
5 detection
In the detection stage, the overall framework is shown in fig. 1, and the detailed steps are as follows:
the method comprises the following steps of (1) acquiring a bank card image, identifying four corners of the bank card by corner detection, identifying four sides of the bank card by edge detection and Hough transformation (Hough) straight line detection, correcting the angle and the position of the bank card image to obtain a bank card image with an edge aligned front view angle, and keeping the selection of manually adjusting the corner, edge detection and Hough transformation straight line detection to ensure the normal extraction of card information.
Step (2): feed the image preprocessed in step (1) into the trained convolutional neural network for card-number recognition; at this point all digits on the card and their coordinates are detected. Using the coordinate information of each digit from the detection network, and the fact that a run of card-number digits lies on one line (the vertical coordinate stays the same while the horizontal coordinate increases in order), screening is completed by taking the group containing the most digits, which is the bank card number. If the probabilities of all identified digits exceed a threshold (0.85-0.95), output the bank card number; if the probability of some identified digit is below the threshold, crop the card-number image and go to step (3) for enhancement.
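The coordinate screening can be sketched as follows: digits on the card-number line share (almost) the same vertical coordinate, and the card number is the largest such group. The tuple layout, tolerance, and bucketing rule are illustrative assumptions, not the patent's exact procedure:

```python
def screen_card_number(detections, y_tol=5):
    """detections: (digit, x, y) tuples; return the digits of the row
    containing the most detections, read left to right."""
    rows = {}
    for digit, x, y in detections:
        key = round(y / y_tol)               # bucket detections by row
        rows.setdefault(key, []).append((x, digit))
    best = max(rows.values(), key=len)       # the row with the most digits
    return "".join(d for _, d in sorted(best))

dets = [("6", 10, 100), ("2", 40, 101), ("2", 70, 99), ("5", 100, 100),
        ("0", 55, 40)]                       # the "0" is a stray digit elsewhere
print(screen_card_number(dets))
# -> 6225
```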
Step (3): enhance the cropped card-number image with the generative adversarial network: feed it into the generative model of the trained network, upsample it through the four deconvolution layers, and output a 64 x 64 enhanced image, i.e. a repaired digit image, as in fig. 6.
Step (4): re-detect the repaired digit image with the trained convolutional neural network in the same way as step (2), output the card number, and complete detection.

Claims (5)

1. The method for identifying the bank card number under the complex condition is characterized by comprising the following steps:
step 1: acquiring an image, and preprocessing the image;
step 2: identify the digits on the preprocessed card one by one with the recognition algorithm based on the convolutional neural network and extract the bank card number; obtain the position information of each digit with the detection network and screen whether a detected digit belongs to the card number; if the output probability of every identified digit exceeds a certain threshold, output the card number; if the output probability of some digit is below the threshold, crop the card image for enhancement and go to step 3;
and step 3: inputting the bank card image intercepted in the step 2 into a trained generation model for generating a countermeasure network DCGAN, and carrying out image enhancement;
step 4: feed the bank-card image enhanced in step 3 into the trained convolutional neural network for card-number recognition; if recognition succeeds, output the card number; otherwise recognition fails.
2. The method for identifying a bank card number under a complex condition as claimed in claim 1, wherein: in image preprocessing, corner detection identifies the four corners of the bank card, and edge detection and Hough line detection identify its four edges, so that the angle and position of the card picture are corrected.
3. The method for identifying a bank card number under a complex condition as claimed in claim 1, wherein: the network structure of the method comprises two independent parts; the first is a convolutional neural network that performs card-number recognition on the preprocessed bank-card image, and the second is a generative adversarial network comprising a generative model and a discriminative model.
4. The method for identifying a bank card number under a complex condition as claimed in claim 1, wherein: the convolutional neural network finds the exact position of the frame containing the card number and identifies the digits inside it region by region, and also identifies the card picture after enhancement by the generative adversarial network.
5. The method for identifying a bank card number under a complex condition as claimed in claim 1, wherein: the generative model in the generative adversarial network consists of a deconvolutional neural network and enhances the cropped bank-card image, while the discriminative model consists of a convolutional neural network and is used to train the generative model and improve its accuracy.
CN201910643964.0A 2019-07-16 2019-07-16 Bank card number identification method under complex conditions Pending CN110610174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910643964.0A CN110610174A (en) 2019-07-16 2019-07-16 Bank card number identification method under complex conditions

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910643964.0A CN110610174A (en) 2019-07-16 2019-07-16 Bank card number identification method under complex conditions

Publications (1)

Publication Number Publication Date
CN110610174A true CN110610174A (en) 2019-12-24

Family

ID=68889665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910643964.0A Pending CN110610174A (en) 2019-07-16 2019-07-16 Bank card number identification method under complex conditions

Country Status (1)

Country Link
CN (1) CN110610174A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111899270A (en) * 2020-07-30 2020-11-06 平安科技(深圳)有限公司 Card frame detection method, device and equipment and readable storage medium
CN112668529A (en) * 2020-12-31 2021-04-16 神思电子技术股份有限公司 Dish sample image enhancement identification method
WO2021227289A1 (en) * 2020-05-14 2021-11-18 南京翱翔信息物理融合创新研究院有限公司 Deep learning-based low-quality two-dimensional barcode detection method in complex background
EP3961482A1 (en) * 2020-08-31 2022-03-02 Check it out Co., Ltd. System and method for verifying authenticity of an anti-counterfeiting element, and method for building a machine learning model used to verify authenticity of an anti-counterfeiting element
CN114359287A (en) * 2022-03-21 2022-04-15 青岛正信德宇信息科技有限公司 Image data processing method and device

Citations (3)

Publication number Priority date Publication date Assignee Title
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing
CN108764230A (en) * 2018-05-30 2018-11-06 上海建桥学院 A kind of bank's card number automatic identifying method based on convolutional neural networks
US20190138860A1 (en) * 2017-11-08 2019-05-09 Adobe Inc. Font recognition using adversarial neural network training

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US20190138860A1 (en) * 2017-11-08 2019-05-09 Adobe Inc. Font recognition using adversarial neural network training
CN108446667A (en) * 2018-04-04 2018-08-24 北京航空航天大学 Based on the facial expression recognizing method and device for generating confrontation network data enhancing
CN108764230A (en) * 2018-05-30 2018-11-06 上海建桥学院 A kind of bank's card number automatic identifying method based on convolutional neural networks

Cited By (6)

Publication number Priority date Publication date Assignee Title
WO2021227289A1 (en) * 2020-05-14 2021-11-18 南京翱翔信息物理融合创新研究院有限公司 Deep learning-based low-quality two-dimensional barcode detection method in complex background
CN111899270A (en) * 2020-07-30 2020-11-06 平安科技(深圳)有限公司 Card frame detection method, device and equipment and readable storage medium
CN111899270B (en) * 2020-07-30 2023-09-05 平安科技(深圳)有限公司 Card frame detection method, device, equipment and readable storage medium
EP3961482A1 (en) * 2020-08-31 2022-03-02 Check it out Co., Ltd. System and method for verifying authenticity of an anti-counterfeiting element, and method for building a machine learning model used to verify authenticity of an anti-counterfeiting element
CN112668529A (en) * 2020-12-31 2021-04-16 神思电子技术股份有限公司 Dish sample image enhancement identification method
CN114359287A (en) * 2022-03-21 2022-04-15 青岛正信德宇信息科技有限公司 Image data processing method and device

Similar Documents

Publication Publication Date Title
CN110610174A (en) Bank card number identification method under complex conditions
CN111325203B (en) American license plate recognition method and system based on image correction
CN108596166B (en) Container number identification method based on convolutional neural network classification
CN110060237B (en) Fault detection method, device, equipment and system
CN109145745B (en) Face recognition method under shielding condition
CN109740606B (en) Image identification method and device
CN110032989B (en) Table document image classification method based on frame line characteristics and pixel distribution
CN108681735A (en) Optical character recognition method based on convolutional neural networks deep learning model
CN111539957B (en) Image sample generation method, system and detection method for target detection
CN112329779A (en) Method and related device for improving certificate identification accuracy based on mask
CN110532855A (en) Natural scene certificate image character recognition method based on deep learning
CN111783757A (en) OCR technology-based identification card recognition method in complex scene
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN109977834B (en) Method and device for segmenting human hand and interactive object from depth image
CN115797216A (en) Inscription character restoration model and restoration method based on self-coding network
CN115909172A (en) Depth-forged video detection, segmentation and identification system, terminal and storage medium
US5105470A (en) Method and system for recognizing characters
CN108960005B (en) Method and system for establishing and displaying object visual label in intelligent visual Internet of things
CN110766001A (en) Bank card number positioning and end-to-end identification method based on CNN and RNN
CN110175509A (en) A kind of round-the-clock eye circumference recognition methods based on cascade super-resolution
CN116229528A (en) Living body palm vein detection method, device, equipment and storage medium
CN113051901B (en) Identification card text recognition method, system, medium and electronic terminal
CN115170414A (en) Knowledge distillation-based single image rain removing method and system
CN111079715B (en) Occlusion robustness face alignment method based on double dictionary learning
CN114387592A (en) Character positioning and identifying method under complex background

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191224
