CN111199240A - Training method of bank card identification model, and bank card identification method and device - Google Patents

Training method of bank card identification model, and bank card identification method and device

Info

Publication number
CN111199240A
CN111199240A (application CN201811368215.3A)
Authority
CN
China
Prior art keywords
image
bank card
unionpay
sign
card
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811368215.3A
Other languages
Chinese (zh)
Inventor
沈程隆
赵立军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN201811368215.3A priority Critical patent/CN111199240A/en
Publication of CN111199240A publication Critical patent/CN111199240A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training method for a bank card identification model, together with a bank card identification method and device. The identification method comprises the following steps: acquiring an image to be detected; detecting, through the bank card identification model, whether the image to be detected is a bank card image bearing a UnionPay sign; and if it is, outputting the position information of the bank card and the image category prediction probability. In this way, the bank card can be quickly located and identified.

Description

Training method of bank card identification model, and bank card identification method and device
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a training method for a bank card recognition model, a bank card recognition method, and an apparatus.
Background
With the rapid development of internet finance, mobile payment volume has grown, making efficient handling of bank card binding very important. Traditional card binding requires the user to select the issuing bank and enter the card number of the bank card manually; the steps are cumbersome and the user experience is poor.
An existing bank card identification method works as follows: acquire an image containing the bank card; apply Gaussian blur, denoising, and smoothing; then convert the three-channel color image to a single-channel grayscale image to simplify subsequent processing. Canny edge detection is then applied to find the edges of the whole bank card, and binarization converts the result to a black-and-white image from which the contour is extracted. Finally, the contour satisfying the bank card conditions is screened out, determining the position of the bank card.
Although the above method is a dramatic improvement over manual input, it cannot accurately determine the position of the bank card when the card is in a complex environment.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a training method for a bank card identification model, a bank card identification method, and a bank card identification device that can rapidly locate and identify a bank card.
In order to solve the above technical problem, the first technical solution adopted by the present application is: the training method of the bank card recognition model comprises the following steps: inputting the marked image into a deep learning regression model, wherein the marked image is an image for marking the positions and image types of a bank card and a Unionpay sign in the image;
predicting the bank card and the Unionpay sign in the image through a deep learning regression model to obtain the predicted position information and the image type prediction probability of the bank card and the Unionpay sign;
and comparing the predicted position information with the marked position information of the bank card and the UnionPay sign in the image, determining through a loss function whether the deep learning regression model is to be retrained, and obtaining a bank card identification model, which is the deep learning regression model after training is complete.
In order to solve the above technical problem, the second technical solution adopted by the present application is: the identification method of the bank card is based on a bank card identification model and comprises the following steps:
acquiring an image to be detected;
detecting whether the image to be detected is a bank card image with a Unionpay sign or not through a bank card identification model;
and if the image to be detected is a bank card image with the Unionpay sign, outputting the position information of the bank card and the image category prediction probability.
In order to solve the above technical problem, the third technical solution adopted by the present application is: an intelligent device is provided. The intelligent device comprises an image acquisition module, a detection module, and an output module,
the image acquisition module is used for acquiring an image to be detected;
the detection module is used for detecting whether the image to be detected is a bank card image with a Unionpay sign through a bank card identification model;
the output module is used for outputting the position information of the bank card and the image category prediction probability when the image to be detected is the bank card image with the Unionpay sign.
In order to solve the above technical problem, a fourth technical solution adopted by the present application is: a training device is provided. The training device comprises an image input module, a prediction module, and a training module. The image input module is used for inputting a marked image into a deep learning regression model, wherein the marked image is an image marking the positions and image categories of a bank card and a UnionPay sign in the image;
the prediction module is used for predicting the bank card and the Unionpay sign in the image through the deep learning regression model to obtain the prediction position information of the bank card and the Unionpay sign and the image category prediction probability;
the training module is used for comparing the predicted position information with the bank card of the image and the marked position information of the Unionpay sign, determining whether the deep learning regression model is retrained through a loss function, and obtaining a bank card identification model which is the deep learning regression model after training.
In order to solve the above technical problem, a fifth technical solution adopted by the present application is: an intelligent terminal is provided. The intelligent terminal comprises a human-computer interaction control circuit and a processor coupled to each other, and a computer program capable of running on the processor; when the processor executes the computer program, the steps of the training method for the bank card identification model or of the bank card identification method in any of the above embodiments are performed.
In order to solve the above technical problem, a sixth technical solution adopted by the present application is: a storage device is provided. The storage device stores program data which, when executed by a processor, implements the training method for the bank card identification model or the bank card identification method described above.
The beneficial effect of this application is: in this embodiment, when a bank card is identified, the bank card and the UnionPay sign are located and identified simultaneously; whether the card is a bank card is determined by judging whether the UnionPay sign exists, and when the image to be detected is determined to be a bank card image with the UnionPay sign, the position information of the bank card and the image category prediction probability are output. This not only filters out cards that are not bank cards, but also achieves fast and accurate positioning of the bank card in complex environments, removes the impact that inaccurate card positioning would have on subsequent character detection and character recognition, and improves the speed and accuracy of bank card identification.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a training method for a bank card recognition model according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating an embodiment of a method for identifying a bank card according to the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a training apparatus for a bank card recognition model according to the present application;
FIG. 4 is a schematic block diagram of an embodiment of the smart device of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an intelligent terminal according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a memory device according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Using the bank card identification model, a bank card can be identified quickly and accurately without any manual intervention.
The bank card identification model takes a deep learning regression model as its initial model. The deep learning regression model is obtained by fine-tuning an object detection model of the YOLO family. Specifically, the fine-tuning replaces the fully connected layers in the base network architecture with fully convolutional layers, reducing the dimensionality of the model's computation and thus the computational load. To control the size of the deep learning model and improve its running speed, the number of feature maps of the YOLO object detection model is halved, reducing the memory occupied by the bank card identification model. In some embodiments, the object detection model is the YOLOv3 model.
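Halving the feature maps roughly quarters the parameter count of each convolutional layer, since both its input and output channel counts shrink. A minimal sketch of that arithmetic (the layer sizes below are illustrative assumptions, not taken from the patent):

```python
def conv_params(in_ch, out_ch, k=3):
    # weights (out_ch * in_ch * k * k) plus one bias per output channel
    return out_ch * (in_ch * k * k + 1)

# hypothetical YOLO-style layer before and after halving the feature maps
full = conv_params(256, 512)   # original channel counts (illustrative)
half = conv_params(128, 256)   # both channel counts halved
ratio = full / half            # close to 4: parameters drop to about a quarter
```

The same reasoning applies layer by layer, which is why halving the feature maps has such a large effect on model size and memory footprint.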
Specifically, as shown in fig. 1, fig. 1 is a schematic flow chart of an embodiment of a training method for a bank card identification model according to the present application.
Because common credentials that resemble bank cards, such as identity cards and access control cards, are increasingly widespread, and in order to prevent identification failures caused by mistaken input, improve bank card identification efficiency, and save resources, this embodiment adds training to recognize the UnionPay sign to the deep learning regression model, so as to judge whether a card is a bank card.
The method specifically comprises the following steps:
step 101: and inputting the marked images into a deep learning regression model.
The marked image is an image in which the position information and the image category of the bank card and of the UnionPay sign have been annotated. The labeling is done manually.
The marked position information comprises the coordinates of the top-left vertex of the bank card and of the UnionPay sign, together with their lengths and widths. Specifically, the annotation records the horizontal and vertical coordinates x and y in the coordinate system established on the image, and the extents w and h along the horizontal and vertical axes. The image category is the type label indicating whether the region is a bank card or a UnionPay sign.
In an alternative embodiment, all the labeled images have a uniform coordinate system, and if the labeled images are square, the coordinate system takes the upper left end point of the square image as the origin.
In another embodiment, a coordinate system may instead be established with the center point of the image as the origin, annotating the coordinates of the center points of the bank card and the UnionPay sign in the image, their extents along the coordinate axes, the size of the bank card, and the size of the UnionPay sign; this is not limited here.
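As a concrete illustration, one annotation for a training image might be stored as below. The field names and pixel values are hypothetical, chosen only to mirror the (x, y, w, h) convention described above:

```python
def make_annotation(category, x, y, w, h):
    # top-left vertex (x, y) in the image coordinate system,
    # plus horizontal extent w and vertical extent h
    return {"category": category, "x": x, "y": y, "w": w, "h": h}

# one labeled image: the bank card and the UnionPay sign it bears
labels = [
    make_annotation("bank_card", 40, 60, 540, 340),
    make_annotation("unionpay_sign", 480, 80, 80, 50),
]
```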
Step 102: predicting the bank card and the Unionpay sign in the image through a deep learning regression model to obtain the predicted position information and the image category prediction probability of the bank card and the Unionpay sign.
And positioning the bank card and the Unionpay sign through a deep learning regression model, and determining the predicted position information and the image category prediction probability of the bank card and the Unionpay sign.
The number of feature maps in the deep learning model of this embodiment is halved relative to the original YOLOv3 object detection model. In this embodiment, an image input to the deep learning model is divided into a 13 × 13 grid, and features are extracted from each grid cell by the deep learning network. The grid cells containing the center points of the bank card and of the UnionPay sign are determined, and the bank card or UnionPay sign is located through the cell containing its center point. In a specific embodiment, the coordinate information of the bank card and the UnionPay sign relative to the top-left origin of the image coordinate system, and their extents along each coordinate axis, are determined.
In an alternative embodiment, 3 bounding boxes are predicted for each grid cell; the predicted box with the largest IoU (Intersection over Union) against the ground-truth box is taken as the final predicted bounding box, and its coordinate information is output. In other embodiments, 4 or 5 bounding boxes may be predicted per grid cell. In theory, the more bounding boxes are predicted, the more accurate the result, but weighing computational cost against positioning quality, predicting 3 bounding boxes is the preferred embodiment.
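The IoU criterion used to pick the final bounding box can be sketched as follows, for boxes in the (x, y, w, h) convention used above. This is a minimal stdlib implementation for illustration, not the patent's code:

```python
def iou(a, b):
    # a, b: (x, y, w, h) with (x, y) the top-left vertex
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def best_box(candidates, truth):
    # keep the predicted box with the largest IoU against the ground truth
    return max(candidates, key=lambda c: iou(c, truth))
```

For example, two unit-offset 2 × 2 boxes overlap in a 1 × 1 region, giving an IoU of 1/7.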
Specifically, if part of the image is predicted to be a bank card, the features of that partial image are compared against pre-stored bank card features to determine the prediction probability that it is a bank card. Similarly, the features of the partial image predicted to be the UnionPay sign are compared against pre-stored UnionPay sign features to determine the prediction probability that it is the UnionPay sign.
Step 103: and comparing the predicted position information with the bank card of the image and the marked position information of the Unionpay sign, determining whether the deep learning regression model is retrained through a loss function, and obtaining the bank card identification model which is the deep learning regression model after training.
Although the deep learning regression model can classify images, it does not yet have a refined bank card identification capability, and the predicted position information it outputs for the bank card and the UnionPay sign is not fully accurate, or its accuracy is low. Therefore, the output of the deep learning regression model is compared with the marked position information of the bank card and the UnionPay sign in the image, and the model is optimized according to the comparison result.
In this embodiment, the deep learning regression model is optimized by optimizing the loss function. Specifically, the parameters corresponding to the minimum of the loss values observed so far are taken as the current optimized parameters of the loss function, and the deep learning regression model is retrained with them. When the change in the loss value between iterations falls below a preset range, the current loss function parameters are fixed as the model parameters of the deep learning regression model, which become the model parameters of the bank card identification model.
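The stopping rule (retrain until the change in loss falls below a preset range) can be illustrated with a toy one-parameter model. The finite-difference gradient step and the quadratic "loss" here are stand-ins for the real model's backpropagation, purely to show the control flow:

```python
def train(loss_fn, theta, lr=0.1, tol=1e-4, max_iter=1000):
    prev = loss_fn(theta)
    for _ in range(max_iter):
        eps = 1e-6  # toy finite-difference gradient; a real model backpropagates
        grad = (loss_fn(theta + eps) - loss_fn(theta - eps)) / (2 * eps)
        theta -= lr * grad
        cur = loss_fn(theta)
        if abs(prev - cur) < tol:  # loss change below the preset range: stop
            break
        prev = cur
    return theta

# toy quadratic loss with minimum at theta = 3
final = train(lambda t: (t - 3.0) ** 2, theta=0.0)
```

The same loop structure applies when theta is the full parameter vector of the deep learning regression model and loss_fn is the detection loss over the labeled images.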
Different from the prior art, the embodiment labels the positions of the bank card and the unionpay sign in the image and the image category information, compares the labeled position information with the position information of the bank card and the unionpay sign predicted by the deep learning regression model, trains the deep learning regression model through the loss function, and determines the deep learning regression model after the training as the bank card identification model. The bank card recognition model trained in the mode can filter out cards of non-bank card types, can quickly realize accurate positioning of the bank card in a complex environment, eliminates subsequent influences on character detection and character recognition caused by inaccurate card positioning, and improves the speed and accuracy of bank card recognition.
Referring to fig. 2, fig. 2 is a schematic flow chart of an embodiment of the identification method for a bank card according to the present application. The bank card identification method of the embodiment is based on a bank card identification model. The bank card identification model is obtained by training through the training method of the bank card identification model in any one embodiment of fig. 1 and the text description. The identification method of the bank card specifically comprises the following steps:
step 201: and acquiring an image to be detected.
When the intelligent terminal needs to identify the bank card, the image to be detected is firstly acquired, wherein the image to be detected can be acquired in a photographing or scanning mode, and the method is not limited herein.
Step 202: and detecting whether the image to be detected is a bank card image with a Unionpay sign or not through a bank card identification model.
In this embodiment, the bank card identification model first determines whether the image to be detected contains a card of bank card shape and a UnionPay sign. If no bank-card-shaped card or no UnionPay sign is detected, the image to be detected is judged not to be a bank card image with the UnionPay sign, and a prompt message is sent to the user. If the image to be detected does contain a bank-card-shaped card with the UnionPay sign, identification of the bank card continues.
Specifically, the bank card identification model first extracts features from the image to be detected according to its set parameters or specifications, dividing the image into a matching grid, such as 13 × 13. For each grid cell, 3 bounding boxes are predicted, and the box with the largest IoU (Intersection over Union) against the real box is taken as the prediction output, yielding the position information of the card. The position of the UnionPay sign is predicted through the grid cell containing its center point, yielding the position information of the UnionPay sign.
In addition, in this embodiment, probability prediction is also performed on the card and the UnionPay sign by the YOLOv3 model. Specifically, the feature information of the card is compared with pre-stored bank card feature information to determine the prediction probability that the card is a bank card, i.e., the image category prediction probability of the card. Similarly, the feature information of the UnionPay sign is compared with pre-stored UnionPay sign feature information to determine the prediction probability that the partial image is the UnionPay sign, i.e., the image category prediction probability of the UnionPay sign.
After the image type prediction probability of the card and the image type prediction probability of the Unionpay sign are obtained, the image type prediction probability of the card and the image type prediction probability of the Unionpay sign are respectively compared with the corresponding probability threshold values, and whether the image type prediction probability of the card and the image type prediction probability of the Unionpay sign are both larger than the corresponding probability threshold values is judged. And if the detected images are all larger than the corresponding probability threshold values, determining that the image to be detected is the bank card image with the Unionpay mark.
In an alternative embodiment, the probability thresholds for the card's image category and for the UnionPay sign's image category are both set to 50%; in other embodiments, other values may be set, such as 60%, 70%, or another value greater than 50%. Moreover, the threshold for the card and the threshold for the UnionPay sign may be the same or different, which is not limited here.
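The dual-threshold decision described above is simple to state in code. The 0.5 defaults mirror the 50% thresholds of this embodiment, and the two thresholds may differ:

```python
def is_unionpay_bank_card(p_card, p_sign, t_card=0.5, t_sign=0.5):
    # the image counts as a UnionPay bank card image only when BOTH
    # category prediction probabilities exceed their thresholds
    return p_card > t_card and p_sign > t_sign
```

Requiring both probabilities to clear their thresholds is what filters out bank-card-shaped cards (identity cards, access cards) that lack the UnionPay sign.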
Step 203: and if the image to be detected is a bank card image with the Unionpay sign, outputting the position information of the bank card and the image category prediction probability.
The position information of the bank card is the position information of the card, and the image type prediction probability is the prediction probability that the card is the bank card.
In another embodiment, when the image to be detected is determined to be a bank card image with the Unionpay sign, characters on the bank card are further identified, and the card number of the bank card is determined.
In this embodiment, optical character recognition (OCR) is used to recognize the characters on the bank card and determine the card number.
Specifically, OCR uses an electronic device to examine characters on an image, determines the shape of each character by detecting patterns of light and dark, and then translates the shapes into computer text through character recognition, i.e., recognizes each digit of the card number.
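The OCR engine itself is outside the scope of this sketch; the snippet below only illustrates the post-processing step of assembling a card number from raw OCR text. The 16-19 digit range is typical of UnionPay card numbers but is an assumption here, not a claim of the patent:

```python
import re

def extract_card_number(ocr_text):
    # join all digit groups found by OCR into one candidate number
    digits = "".join(re.findall(r"\d+", ocr_text))
    # accept only a plausible card-number length (assumed 16-19 digits)
    return digits if 16 <= len(digits) <= 19 else None
```

Returning None when the digit count is implausible lets the caller re-prompt the user instead of outputting a garbled card number.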
And after the card number of the bank card is determined, outputting the card number, and finishing the identification process of the bank card.
Different from the prior art, in this embodiment, when a bank card is identified, the bank card and the UnionPay sign are located and identified simultaneously; whether the card is a bank card is determined by judging whether the UnionPay sign exists, and when the image to be detected is determined to be a bank card image with the UnionPay sign, the position information and the image category prediction probability of the bank card are output. This not only filters out cards that are not bank cards, but also achieves fast and accurate positioning of the bank card in complex environments, removes the impact of inaccurate card positioning on character detection and character recognition, and improves the speed and accuracy of bank card identification.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an embodiment of a training device for a bank card recognition model according to the present application.
Because common credentials that resemble bank cards are increasingly widespread, and in order to prevent identification failures caused by mistaken input, improve bank card identification efficiency, and save resources, this embodiment adds training to recognize the UnionPay sign to the deep learning regression model.
The bank card identification model of this embodiment uses a deep learning regression model as its initial model. The deep learning regression model is obtained by fine-tuning the YOLOv3 object detection model. Specifically, the fine-tuning replaces the fully connected layers in the base network architecture with fully convolutional layers, reducing the dimensionality of the model's computation and thus the computational load. To control the size of the deep learning model and improve its running speed, the number of feature maps of the YOLO object detection model is halved, reducing the memory occupied by the bank card identification model.
Specifically, the training apparatus of the present embodiment includes an image input module 301, a prediction module 302, and a training module 303.
The image input module 301 is used to input the labeled image into the deep learning regression model.
The marked image is an image in which the position information and image category of the bank card and of the UnionPay sign have been annotated.
The marked position information comprises the coordinates of the top-left vertex of the bank card and of the UnionPay sign, together with their lengths and widths. Specifically, the annotation records the horizontal and vertical coordinates x and y in the coordinate system established on the image, and the extents w and h along the horizontal and vertical axes. The image category is the type label indicating whether the region is a bank card or a UnionPay sign.
In an alternative embodiment, all the labeled images have a uniform coordinate system, and if the labeled images are square, the coordinate system takes the upper left end point of the square image as the origin.
In another embodiment, a coordinate system may instead be established with the center point of the image as the origin, annotating the coordinates of the center points of the bank card and the UnionPay sign in the image, their extents along the coordinate axes, the size of the bank card, and the size of the UnionPay sign; this is not limited here.
Specifically, the training device inputs the labeled image into the deep learning regression model after receiving the labeled image.
The prediction module 302 is configured to predict the bank card and the UnionPay sign in the image through the deep learning regression model, obtaining the predicted position information and the image category prediction probability of the bank card and the UnionPay sign.
The prediction module 302 locates the bank card and the UnionPay sign through the deep learning regression model and determines their predicted position information and image category prediction probabilities.
The number of feature maps in the deep learning model of this embodiment is halved relative to the original YOLOv3 object detection model. In this embodiment, the prediction module 302 divides the image input to the deep learning model into a 13 × 13 grid, extracts features from each grid cell through the deep learning network, determines the grid cells containing the center points of the bank card and of the UnionPay sign, and locates the bank card or UnionPay sign through the cell containing its center point. In a particular embodiment, the coordinate information of the bank card and the UnionPay sign relative to the top-left origin of the image coordinate system, and their extents along each coordinate axis, are determined.
In an alternative embodiment, the prediction module 302 predicts 3 bounding boxes for each grid cell, uses the box with the largest IoU against the real box as the final predicted bounding box, and outputs its coordinate information. In other embodiments, 4 or 5 bounding boxes may be predicted per grid cell; in theory, the more bounding boxes are predicted, the more accurate the result, but weighing computational cost against effect, predicting 3 bounding boxes is the preferred embodiment.
The image category prediction probability above is likewise produced by the prediction module 302 through YOLOv3. Specifically, if part of the image is predicted to be a bank card, the features of that partial image are compared against pre-stored bank card features to determine the prediction probability that it is a bank card. Similarly, the features of the partial image predicted to be the UnionPay sign are compared against pre-stored UnionPay sign features to determine its prediction probability.
The training module 303 is configured to compare the predicted position information with the bank card of the image and the labeled position information of the union pay sign, determine whether to retrain the deep learning regression model through a loss function, and obtain a bank card identification model, where the bank card identification model is a deep learning regression model after training.
Although the deep learning regression model can classify images, it does not yet have a refined bank card identification capability, and the predicted position information it outputs for the bank card and the UnionPay sign is not fully accurate, or its accuracy is low. Therefore, the output of the deep learning regression model needs to be compared with the manually marked position information of the bank card and the UnionPay sign in the image, and the model needs to be optimized according to the comparison result.
In this embodiment, the training module 303 optimizes the deep learning regression model by minimizing the loss function. Specifically, the training module 303 takes the parameters corresponding to the minimum loss value as the current optimized parameters and retrains the deep learning regression model with them. When the change in the loss value between iterations falls within a preset range, the current parameters are taken as the model parameters of the deep learning regression model, i.e., the model parameters of the bank card identification model.
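The stopping rule described above — retrain until the change in loss falls within a preset range — can be sketched as follows. `train_step`, the tolerance value, and the epoch limit are placeholders, not names from the patent.

```python
def train_until_stable(train_step, tol=1e-4, max_epochs=100):
    """Retrain until the change in loss between epochs falls below `tol`.
    `train_step` runs one optimization pass and returns the epoch loss."""
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(max_epochs):
        loss = train_step()
        if abs(prev_loss - loss) < tol:
            break  # loss has stabilized: keep the current model parameters
        prev_loss = loss
    return loss
```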
Different from the prior art, this embodiment labels the positions of the bank card and the UnionPay sign in the image together with the image category information, compares the labeled position information with the position information predicted by the deep learning regression model, trains the deep learning regression model through a loss function, and takes the trained deep learning regression model as the bank card identification model. A bank card identification model trained in this way can filter out cards that are not bank cards, quickly and accurately locate a bank card in a complex environment, eliminate the downstream effects of inaccurate card positioning on character detection and character recognition, and improve the speed and accuracy of bank card identification.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of the intelligent device of the present application. The intelligent device of this embodiment includes a bank card identification model obtained by training through the training method of any of the embodiments of fig. 1 and the accompanying description.
The smart device includes an image acquisition module 401, a detection module 402, and an output module 403.
The image obtaining module 401 is configured to obtain an image to be detected.
When the intelligent terminal needs to identify a bank card, it first acquires the image to be detected through the image acquisition module 401, for example by photographing or scanning; the acquisition mode is not limited herein.
The detection module 402 is configured to detect, through the bank card identification model, whether the image to be detected is a bank card image with a UnionPay sign.
In this embodiment, the detection module 402 first determines through the bank card identification model whether the image to be detected contains a card-shaped object and a UnionPay sign. If no card-shaped object or no UnionPay sign is detected, it determines that the image to be detected is not a bank card image with a UnionPay sign and sends a prompt message to the user. If the image to be detected is a card image with a UnionPay sign, bank card identification continues.
Specifically, the detection module 402 first performs feature extraction on the image to be detected according to the set parameters or specifications of the bank card identification model, and divides the image into a grid matched to those parameters, such as 13×13 cells. For each grid cell, 3 bounding boxes are predicted, the box with the maximum IOU (intersection over union) against the ground-truth box is taken as the final predicted box, and the position information of the card is output. The position of the UnionPay sign is predicted through the grid cell containing the center point of the UnionPay sign, yielding the position information of the UnionPay sign.
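The assignment of an object to the grid cell containing its center point can be sketched as below, assuming a 13×13 grid as in the embodiment; the function name, the `(row, col)` return convention, and pixel coordinates are illustrative assumptions.

```python
def grid_cell(center_x, center_y, img_w, img_h, s=13):
    """Map an object's center point to the (row, col) of the S-by-S grid cell
    responsible for predicting it (13x13 in this embodiment)."""
    col = min(int(center_x / img_w * s), s - 1)  # clamp points on the right edge
    row = min(int(center_y / img_h * s), s - 1)  # clamp points on the bottom edge
    return row, col
```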
In addition, in this embodiment the detection module 402 also performs probability prediction on the card and the UnionPay sign through YOLOv3. Specifically, the feature information of the card is compared with the pre-stored bank card feature information to determine the prediction probability that the card is a bank card, i.e., the image category prediction probability of the card. Similarly, the feature information of the UnionPay sign region is compared with the pre-stored UnionPay sign feature information to determine the prediction probability that the region is a UnionPay sign, i.e., the image category prediction probability of the UnionPay sign.
After obtaining the image category prediction probabilities of the card and the UnionPay sign, the detection module 402 compares each against its corresponding probability threshold and determines whether both exceed their thresholds. If both are greater than the corresponding thresholds, the image to be detected is determined to be a bank card image with a UnionPay sign.
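The dual-threshold decision just described amounts to a simple conjunction. A minimal sketch (the 50% defaults follow the embodiment below; the function and parameter names are assumptions):

```python
def is_unionpay_bank_card(card_prob, logo_prob,
                          card_threshold=0.5, logo_threshold=0.5):
    """Accept the image as a bank card with a UnionPay sign only when BOTH
    class probabilities exceed their respective thresholds."""
    return card_prob > card_threshold and logo_prob > logo_threshold
```

The two thresholds are independent parameters, matching the note that they may be set to the same or different values.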
In an alternative embodiment, the probability thresholds for both the card and the UnionPay sign are set to 50%; in other embodiments other values may be used, such as 60%, 70%, or another value greater than 50%. Moreover, the two thresholds may be the same or different, which is not limited herein.
The output module 403 is configured to output the position information of the bank card and the image category prediction probability when the image to be detected is a bank card image with a UnionPay sign.
Here, the position information of the bank card is the position information of the detected card, and the image category prediction probability is the prediction probability that the card is a bank card.
In another embodiment, when determining that the image to be detected is a bank card image with a UnionPay sign, the output module 403 further recognizes the characters on the bank card to determine the card number of the bank card.
The output module 403 of this embodiment adopts optical character recognition (OCR) to recognize the characters on the bank card and determine its card number.
Specifically, OCR examines the characters on the image with an electronic device, determines the shape of each character by detecting its pattern of light and dark pixels, and translates that shape into computer text through character recognition, i.e., recognizes each digit of the card number.
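As a toy illustration of recognizing a character shape from its light-and-dark pattern (real OCR engines are far more involved), a glyph binarized into 0/1 rows can be matched against stored digit templates by counting mismatched pixels; the data layout and names are assumptions.

```python
def match_digit(glyph, templates):
    """Classify a binarized glyph (tuple of 0/1 rows) by comparing its
    light/dark pattern against stored digit templates and returning the
    label of the closest match (fewest mismatched pixels)."""
    def distance(a, b):
        return sum(x != y for ra, rb in zip(a, b) for x, y in zip(ra, rb))
    return min(templates, key=lambda digit: distance(glyph, templates[digit]))
```

Applying `match_digit` to each segmented glyph in turn recovers the digits of the card number one by one.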
After determining the card number of the bank card, the output module 403 outputs it, and this round of bank card identification ends.
Different from the prior art, in this embodiment the bank card and the UnionPay sign are located and identified simultaneously: whether the card is a bank card is determined by judging whether the UnionPay sign is present, and when the image to be detected is determined to be a bank card image with a UnionPay sign, the position information and image category prediction probability of the bank card are output. This not only filters out cards that are not bank cards, but also quickly and accurately locates the bank card in a complex environment, eliminates the effects of inaccurate card positioning on character detection and character recognition, and improves the speed and accuracy of bank card identification.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of the intelligent terminal of the present application. The intelligent terminal 50 of this embodiment includes a human-computer interaction control circuit 502 and a processor 501 coupled to it, with a computer program executable on the processor 501. When executing the computer program, the processor 501 can implement the training method of the bank card identification model of any embodiment of fig. 1 and the related text, or the bank card identification method of any embodiment of fig. 2 and the related text.
Referring to fig. 6, the present application further provides a schematic structural diagram of an embodiment of a storage device. In this embodiment, the storage device 60 stores processor-executable computer instructions 61, and the computer instructions 61 are used for executing the steps of the training method of the bank card identification model of any embodiment of fig. 1 and the related text, or the steps of the bank card identification method of any embodiment of fig. 2 and the related text.
The storage device 60 may be a medium that can store the computer instructions 61, such as a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk, or may be a server that stores the computer instructions 61; the server may send the stored computer instructions 61 to another device for execution or may execute them itself.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, e.g., a unit or division of units is merely a logical division, and other divisions may be realized in practice, e.g., a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are merely examples and are not intended to limit the scope of the present disclosure, and all modifications, equivalents, and flow charts using the contents of the specification and drawings of the present disclosure or those directly or indirectly applied to other related technical fields are intended to be included in the scope of the present disclosure.

Claims (10)

1. A training method of a bank card identification model, characterized by comprising the following steps:
inputting an annotated image into a deep learning regression model, wherein the annotated image is an image in which the position information and image categories of a bank card and a UnionPay sign have been annotated;
predicting the bank card and the UnionPay sign in the image through the deep learning regression model to obtain predicted position information and image category prediction probabilities of the bank card and the UnionPay sign;
and comparing the predicted position information with the annotated position information of the bank card and the UnionPay sign in the image, determining through a loss function whether to retrain the deep learning regression model, and obtaining the bank card identification model, the bank card identification model being the trained deep learning regression model.
2. A bank card identification method, characterized in that the identification method is based on a bank card identification model and comprises the following steps:
acquiring an image to be detected;
detecting, through the bank card identification model, whether the image to be detected is a bank card image with a UnionPay sign;
and if the image to be detected is a bank card image with a UnionPay sign, outputting the position information of the bank card and the image category prediction probability.
3. The identification method according to claim 2, wherein the step of detecting, through the bank card identification model, whether the image to be detected is a bank card image with a UnionPay sign specifically comprises:
locating the card and the UnionPay sign in the image to be detected through the bank card identification model to obtain the position information and image category prediction probabilities of the card and the UnionPay sign;
judging whether the image category prediction probability of the card and the image category prediction probability of the UnionPay sign are both greater than their corresponding probability thresholds;
and if both are greater than the corresponding probability thresholds, determining that the image to be detected is a bank card image with a UnionPay sign.
4. The identification method according to claim 2, characterized in that the identification method further comprises:
and if the image to be detected is a bank card image with a UnionPay sign, recognizing the characters on the bank card and determining the card number of the bank card.
5. The identification method according to claim 3, wherein the step of locating the card and the UnionPay sign through the bank card identification model to obtain the position information and image category prediction probabilities of the card and the UnionPay sign specifically comprises:
performing feature extraction on the image to be detected according to the set parameters of the bank card identification model, and determining the grid cells in which the center points of the card and the UnionPay sign are located;
and determining the predicted position information and image category prediction probabilities of the card and the UnionPay sign through the grid cells in which their center points are located.
6. The identification method according to claim 2, wherein the bank card identification model is trained by the training method according to claim 1.
7. An intelligent device is characterized in that the intelligent device comprises an image acquisition module, a detection module and an output module,
the image acquisition module is used for acquiring an image to be detected;
the detection module is used for detecting, through a bank card identification model, whether the image to be detected is a bank card image with a UnionPay sign;
the output module is used for outputting the position information of the bank card and the image category prediction probability.
8. A training device of a bank card identification model, characterized by comprising an image input module, a prediction module and a training module,
the image input module is used for inputting an annotated image into a deep learning regression model, wherein the annotated image is an image in which the position information and image categories of a bank card and a UnionPay sign have been annotated;
the prediction module is used for predicting the bank card and the UnionPay sign in the image through the deep learning regression model to obtain predicted position information and image category prediction probabilities of the bank card and the UnionPay sign;
the training module is used for comparing the predicted position information with the annotated position information of the bank card and the UnionPay sign in the image, determining through a loss function whether to retrain the deep learning regression model, and obtaining the bank card identification model, the bank card identification model being the trained deep learning regression model.
9. An intelligent terminal, characterized in that the intelligent terminal comprises a human-computer interaction control circuit and a processor coupled to each other, and a computer program executable on the processor, and when executing the computer program, the processor implements the steps of the training method of the bank card identification model of claim 1, or the steps of the bank card identification method of any one of claims 2 to 6.
10. A storage device, wherein the storage device stores program data, and the program data when executed by a processor implements the training method of the bank card identification model according to claim 1 or the identification method of the bank card according to any one of claims 2 to 6.
CN201811368215.3A 2018-11-16 2018-11-16 Training method of bank card identification model, and bank card identification method and device Pending CN111199240A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811368215.3A CN111199240A (en) 2018-11-16 2018-11-16 Training method of bank card identification model, and bank card identification method and device


Publications (1)

Publication Number Publication Date
CN111199240A true CN111199240A (en) 2020-05-26

Family

ID=70745771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811368215.3A Pending CN111199240A (en) 2018-11-16 2018-11-16 Training method of bank card identification model, and bank card identification method and device

Country Status (1)

Country Link
CN (1) CN111199240A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104966107A (en) * 2015-07-10 2015-10-07 安徽清新互联信息科技有限公司 Credit card card-number identification method based on machine learning
CN106886774A (en) * 2015-12-16 2017-06-23 腾讯科技(深圳)有限公司 The method and apparatus for recognizing ID card information
CN107247956A (en) * 2016-10-09 2017-10-13 成都快眼科技有限公司 A kind of fast target detection method judged based on grid
CN107742120A (en) * 2017-10-17 2018-02-27 北京小米移动软件有限公司 The recognition methods of bank card number and device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111783775A (en) * 2020-06-30 2020-10-16 京东数字科技控股有限公司 Image acquisition method, device, equipment and computer readable storage medium
WO2022062449A1 (en) * 2020-09-25 2022-03-31 平安科技(深圳)有限公司 User grouping method and apparatus, and electronic device and storage medium
CN112801165A (en) * 2021-01-22 2021-05-14 中国银联股份有限公司 Card auditing method and device
CN112801165B (en) * 2021-01-22 2024-03-22 中国银联股份有限公司 Card auditing method and device

Similar Documents

Publication Publication Date Title
JP6831480B2 (en) Text detection analysis methods, equipment and devices
CN109492643B (en) Certificate identification method and device based on OCR, computer equipment and storage medium
CN110766014B (en) Bill information positioning method, system and computer readable storage medium
CN106156766B (en) Method and device for generating text line classifier
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
CN107016387B (en) Method and device for identifying label
CN103310211B (en) A kind ofly fill in mark recognition method based on image procossing
JP6366024B2 (en) Method and apparatus for extracting text from an imaged document
CN109919160B (en) Verification code identification method, device, terminal and storage medium
CN109784342B (en) OCR (optical character recognition) method and terminal based on deep learning model
CN113160192A (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN110348439B (en) Method, computer readable medium and system for automatically identifying price tags
CN110503054B (en) Text image processing method and device
CN108108734B (en) License plate recognition method and device
CN110598686A (en) Invoice identification method, system, electronic equipment and medium
CN113657274B (en) Table generation method and device, electronic equipment and storage medium
CN107240185B (en) A kind of crown word number identification method, device, equipment and storage medium
CN111199240A (en) Training method of bank card identification model, and bank card identification method and device
CN110598566A (en) Image processing method, device, terminal and computer readable storage medium
CN111259908A (en) Machine vision-based steel coil number identification method, system, equipment and storage medium
CN111626177A (en) PCB element identification method and device
CN112232336A (en) Certificate identification method, device, equipment and storage medium
CN116597466A (en) Engineering drawing text detection and recognition method and system based on improved YOLOv5s
CN113673528B (en) Text processing method, text processing device, electronic equipment and readable storage medium
CN111462388A (en) Bill inspection method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200526