CN111428656A - Mobile terminal identity card identification method based on deep learning and mobile device - Google Patents
- Publication number: CN111428656A
- Application number: CN202010229029.2A
- Authority: CN (China)
- Prior art keywords: image, mobile terminal, identity card, model, deep learning
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/172 — Human faces: classification, e.g. identification
- G06F18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415 — Classification based on parametric or probabilistic models, e.g. likelihood ratio or false acceptance rate versus false rejection rate
- G06N3/045 — Neural network architectures: combinations of networks
- G06N3/047 — Probabilistic or stochastic networks
- G06N3/08 — Neural network learning methods
- G06V20/635 — Overlay text, e.g. embedded captions in a TV program
- G06V30/10 — Character recognition
Abstract
The invention discloses a deep-learning-based mobile-terminal identity card recognition method and a mobile device. The method comprises: deploying a deep learning model on the mobile terminal; acquiring an image containing an identity card from the mobile terminal device; obtaining a corrected identity card image from that image; detecting the face image on the corrected identity card image and comparing it with candidate face images in a database to perform identity authentication; locating the text regions on the corrected identity card image and cropping text slices; recognizing the characters in the text slices to obtain a recognition result; and extracting the identity card information in structured form from the text regions and the recognition result. The method improves the accuracy and robustness of identity card recognition, and because the model is deployed locally on the mobile terminal, recognition is faster, user privacy is protected, interference from network conditions is avoided, and recognition stability is improved.
Description
Technical Field
The invention relates to the technical field of certificate image recognition, and in particular to a deep-learning-based identity card recognition method deployed on a mobile terminal.
Background
In recent years, the rise of deep learning has greatly advanced the field of computer vision, producing notable progress in object detection, image segmentation, face detection and recognition, OCR, and related tasks. Image recognition technology is applied in many fields such as medicine, the military, and finance, and certificate recognition in particular is in strong demand in the financial sector. Traditional certificate recognition methods, however, require a clear certificate image against a clean background, which limits their robustness and generality. Deep learning largely removes this limitation: it is robust and general, and far less constrained by complex backgrounds or image quality.
With the rise of the mobile internet, mobile devices have become indispensable in daily life, and more and more services are handled through them. Most existing schemes, however, upload the certificate image captured by the mobile device to a server over the network and perform recognition on the server. This approach has the following drawbacks: it is strongly affected by the network, especially because photos taken by mobile phones are large and their transmission is time-consuming; client privacy is insufficiently protected; and recognition is not fast enough once the image-transmission time is included.
Disclosure of Invention
To address the defects of the prior art, the invention provides a deep-learning-based mobile-terminal identity card recognition method and a mobile device. The deep learning model is deployed on the mobile terminal device, so the recognition task is completed directly on the device and the recognition result is obtained locally; the certificate photo does not need to be uploaded, which avoids the influence of network transmission speed and effectively protects user privacy.
In order to solve the technical problem, the invention is solved by the following technical scheme:
a mobile terminal identity card identification method based on deep learning comprises the following specific steps: deploying a deep learning model at a mobile terminal;
acquiring an image containing an identity card from mobile terminal equipment;
acquiring a corrected identity card image from the image containing the identity card;
and extracting the identity card information from the corrected identity card image.
Optionally, a face image is obtained on the corrected identity card image, and is compared with the face image stored in the database to realize identity authentication;
acquiring a character area on the corrected ID card image, and intercepting character slices;
identifying characters in the character slices to obtain an identified character result;
and extracting the identity card information in structured form based on the text regions and the recognition results.
Optionally, deploying a deep learning model at the mobile terminal includes:
normalizing the image sample;
constructing a deep learning model calculation graph based on TensorFlow;
performing iterative training based on a gradient descent method to minimize a loss function to obtain an optimal model;
solidifying and storing the model structure and parameters;
converting the deep learning model to a tflite format model;
quantizing the tflite format model.
Optionally, quantizing the tflite format model includes: parameters in the tflite model are converted from float format to int type.
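The float-to-int conversion mentioned above is an affine (scale/zero-point) quantization; TFLite's post-training quantization performs this mapping internally. The following numpy sketch is illustrative only — the function names are ours, not from the patent or the TFLite API — but it shows the arithmetic by which float32 weights become int8 values plus a scale and zero-point:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Affine (asymmetric) quantization of float32 weights to int8.

    Returns the int8 tensor plus the scale and zero-point needed to
    dequantize: w_float ~= scale * (w_int8 - zero_point).
    """
    w_min, w_max = float(weights.min()), float(weights.max())
    # Ensure the representable range always contains 0.0.
    w_min, w_max = min(w_min, 0.0), max(w_max, 0.0)
    qmin, qmax = -128, 127
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, qmin, qmax)
    return q.astype(np.int8), scale, zero_point

def dequantize(q, scale, zero_point):
    return scale * (q.astype(np.float32) - zero_point)

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_int8(w)
w_hat = dequantize(q, scale, zp)
# The round trip is exact only up to the quantization step (= scale).
assert np.max(np.abs(w - w_hat)) <= scale
```

Quantizing to int8 shrinks the stored parameters roughly fourfold relative to float32, which is what makes the model small enough for local deployment.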
Optionally, the deep learning model includes an identity card vertex detection model, a face detection model, a character detection model, and a character recognition model.
Optionally, the mobile terminal device calls the locally deployed vertex detection model to predict the four vertices of the identity card in the image, and performs a perspective transformation based on these four vertices to obtain a corrected identity card image with the background removed.
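The patent does not spell out how the perspective transformation is computed, so the following is a minimal numpy sketch under the usual assumption: solve the 3×3 homography that maps the four detected card corners to a canonical rectangle (this is what, e.g., OpenCV's `getPerspectiveTransform` does). The corner coordinates below are made-up example values:

```python
import numpy as np

def perspective_matrix(src, dst):
    """Solve the 3x3 homography H with H @ [x, y, 1]^T ~ [u, v, 1]^T
    for four point correspondences (direct linear transform, h33 = 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b += [u, v]
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Four detected ID-card vertices (clockwise from top-left) in a photo ...
corners = [(112, 80), (680, 120), (650, 460), (90, 430)]
# ... mapped to a canonical 856x540 card (Chinese ID cards are 85.6 x 54.0 mm).
H = perspective_matrix(corners, [(0, 0), (856, 0), (856, 540), (0, 540)])
assert np.allclose(warp_point(H, (112, 80)), (0, 0), atol=1e-6)
assert np.allclose(warp_point(H, (650, 460)), (856, 540), atol=1e-6)
```

On-device, the same matrix would be applied to every pixel (with interpolation) to produce the background-free corrected card image.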
Optionally, the mobile terminal device calls a locally deployed face detection model to detect the face position in the image;
acquiring a face image according to the face position in the detected image;
uploading the face image to a server according to the acquired face image;
and carrying out face comparison and identity verification on the face data in the database.
Optionally, the mobile terminal device calls a locally deployed character detection model to detect a character region in the image; and intercepting a character area in the image to obtain a character slice.
Optionally, the mobile terminal device calls the locally deployed character recognition model to recognize characters in the image, and outputs a character recognition result.
The invention also provides mobile equipment, which deploys the deep learning algorithm model for processing the identity card image and realizes the identification method of the mobile terminal identity card;
the mobile terminal includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform method steps for implementing the deployment of the deep learning model to the mobile terminal.
The invention has the beneficial effects that:
the mobile terminal identification card identification method disclosed by the invention quantizes the deep learning model to obtain a smaller model, and the smaller model is deployed in the local mobile terminal equipment. The system has the functions of identity card detection, face detection, character recognition and the like.
Compared with the prior art, the invention has the following beneficial effects:
1. the method is safer: the user does not need to upload the identity card photo, and the privacy is protected;
2. and (3) more quickly: the photos do not need to be uploaded, so that the time for uploading the photos and the time for returning results are saved;
3. and (3) more stable: is not influenced by the network;
4. the precision is high: based on deep learning, the precision and the universality are higher.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart illustrating a mobile-side id card identification method according to the present invention;
FIG. 2 is a flowchart of the mobile deployment deep learning model of the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples, which are illustrative of the present invention and are not to be construed as being limited thereto.
Example 1:
a mobile terminal identity card identification method based on deep learning comprises the following steps:
s100, deploying a deep learning model by a mobile terminal;
s200, acquiring any image containing the identity card from a camera or an album of the mobile terminal equipment;
s300, acquiring a corrected identity card image from any image containing the identity card;
s400, acquiring a face image on the corrected identity card image, and comparing the face image with the face image to realize identity authentication;
s500, acquiring a character area on the corrected ID card image, and intercepting character slices;
s600, recognizing characters in the character slices to obtain a character recognition result;
and S700, extracting the identity card information in a structured mode according to the character area and the recognized character result.
In step S100 of the method, as shown in fig. 2, the step of deploying a deep learning model by the mobile terminal mainly includes:
s110, training a deep learning model based on TensorFlow;
s120, solidifying and storing the deep learning model;
s130, converting the deep learning model into a tflite format model;
s140, quantizing the tflite format model, and converting parameters in the tflite model from the float format into int8 types;
s150, deploying a tflite frame at the mobile terminal, and loading a tflite model;
and S160, calculating a model and outputting a result.
The deep learning model deployed at the mobile terminal comprises an identity card vertex detection model, a face detection model, a character detection model and a character recognition model. Training an identity card vertex detection model, a face detection model, a character detection model and a character recognition model based on TensorFlow.
The specific overall flow is as shown in the left part of fig. 1:
SS100, mobile terminal deployment deep learning model;
the SS200 acquires any image containing the identity card from a camera or an album of the mobile terminal equipment;
judging whether the identity card is detected; if so, proceeding to SS300, and if not, waiting for the next image input;
SS300, acquiring a corrected identity card image from the image containing the identity card by means of the vertex detection model;
SS400, inputting the corrected ID card image into a face detection model, judging whether a face is detected, if so, acquiring a face image, and comparing the face to realize identity verification;
SS500, inputting a character detection model on the corrected ID card image, acquiring a character area, and cutting the character area;
SS600, inputting the cut character area into a character recognition model, recognizing characters in the character slice, and obtaining a character recognition result;
and the SS700 is used for extracting the identity card information in a structured mode.
Steps S110 to S120 are a deep learning model construction process, and a specific flow thereof is shown in fig. 1 as deep learning model construction in a dashed box on the right side. The method mainly comprises the following steps:
(1) normalizing the training image sample;
(2) constructing a deep learning model calculation graph;
(3) initializing a training parameter;
(4) calculating hidden layer and output layer vectors;
(5) updating the weights and biases;
(6) calculating the total error of the unit;
(7) judging whether the error threshold or the maximum number of iterations has been reached; if so, proceeding to step (8), otherwise returning to step (4);
(8) and solidifying and storing the model structure and parameters.
The deep learning models are trained iteratively by gradient descent so as to minimize the loss function and obtain the optimal model. The identity card vertex detection model, face detection model, character detection model, and character recognition model are all trained with TensorFlow. Since the models are built with widely used techniques, this embodiment does not describe them in further detail.
The step S300 in this embodiment of obtaining a corrected image of an identity card from any image containing an identity card includes the specific steps of:
s310, scaling any image containing the identity card to a fixed size of 800 x 600 pixels;
s320, normalizing the scaled 800 × 600 pixel image;
s330, the mobile terminal equipment calls a locally deployed vertex detection model to detect four vertexes of the identity card in the image;
and S340, performing perspective transformation according to the four vertexes of the identity card in the image to obtain a corrected edge-trimmed identity card image.
In this embodiment, the identity card vertex detection model is a fully convolutional deep learning model trained with the TensorFlow framework, with MobileNetV2 as the backbone network. The model outputs four heat maps; a vertex of the identity card is computed from each heat map, so that all four vertices are finally predicted. Gradient descent minimizes the distance between the predicted vertex coordinates and the four labelled vertex coordinates. The trained PC-side model is saved, converted to tflite format, and deployed on the mobile terminal device.
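The patent says each heat map is "used for calculating" one vertex without giving the decoder. A common minimal decoding — shown here as an illustrative numpy sketch, not the patent's actual method — takes the argmax of each heat map and rescales it to the 800×600 network input:

```python
import numpy as np

def heatmap_to_vertex(heatmap, input_w=800, input_h=600):
    """Decode one vertex heat map into (x, y) pixel coordinates.

    A minimal argmax decoder: the hottest cell is taken as the vertex
    and scaled from heat-map resolution back to the 800x600 input.
    """
    h, w = heatmap.shape
    iy, ix = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return (ix * input_w / w, iy * input_h / h)

# A toy 6x8 heat map with its peak at column 6, row 4 ...
hm = np.zeros((6, 8))
hm[4, 6] = 1.0
# ... decodes to (6 * 800/8, 4 * 600/6) = (600.0, 400.0).
assert heatmap_to_vertex(hm) == (600.0, 400.0)
```

Production decoders often refine this with a sub-pixel offset (e.g. a weighted average around the peak), but the argmax captures the core idea.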
In step S400, a face image is obtained on the corrected identity card image, and face comparison is performed, so as to implement identity authentication, which mainly includes the following steps:
s410, scaling the corrected image of the identity card to fixed 320 x 320 pixels;
s420 normalizing the scaled 320 × 320 pixel image;
and S430, the mobile terminal device calls a locally deployed face detection model to detect the face position in the image.
S440, acquiring a face image according to the detected face position in the image;
s450, uploading the face image to a server;
and S460, comparing the face image with the face image stored in the database to realize identity verification.
This embodiment can be applied to the handling of banking business: the portrait on the identity card is compared against the portrait captured when the customer transacts at the bank. The server database stores the relevant face information collected in advance in other scenarios.
The face detection model is based on the object detection algorithm SSD, with the original convolutions replaced by depthwise-separable convolutions, which greatly reduces model size and speeds up computation. A large number of images containing faces are collected, and the four vertices of each face in the images are annotated as training labels; the model is trained with TensorFlow and outputs the face-box vertex coordinates and a face confidence score.
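SSD-style detectors emit many overlapping candidate boxes with confidence scores, which are conventionally filtered by a score threshold and non-maximum suppression before use. The patent does not describe this post-processing, so the sketch below (thresholds illustrative) shows the standard greedy NMS:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, score_thr=0.5, iou_thr=0.4):
    """Greedy non-maximum suppression over SSD-style detections:
    keep the highest-scoring box, drop any box overlapping it too much."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thr]
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

boxes = [(10, 10, 50, 50), (12, 12, 52, 52), (200, 200, 240, 240)]
scores = [0.9, 0.8, 0.7]
# The two overlapping face boxes collapse to the higher-scoring one.
assert nms(boxes, scores) == [0, 2]
```

On an identity card there is exactly one face, so in practice the highest-confidence surviving box is the one cropped and uploaded for comparison.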
In step S500, a text area is obtained on the corrected image of the identification card, and the main steps of intercepting text slices are as follows:
s510, normalizing the corrected identity card image;
s520, the mobile terminal equipment calls a locally deployed character detection model to detect a character area in the image;
s530, intercepting the character area in the image to obtain a character slice.
The character detection model is based on the object detection algorithm YOLOv3. A large number of images containing printed characters are collected, and the four vertex coordinates of each text region in the images are annotated as labels for model training; the model is trained with TensorFlow and outputs the vertex coordinates of the text regions.
The main steps of recognizing the character slice in the step S600 are as follows:
s610, scaling the text slice to an image with the height of 32 pixels;
s620, normalizing the zoomed image with the height of 32 pixels;
s630, the mobile terminal device calls a locally deployed character recognition model to recognize characters in the normalized image with the height of 32 pixels, and outputs a character recognition result.
The character recognition model adopts a CRNN + CTC architecture trained with TensorFlow; the final activation function is softmax, which outputs confidence scores over 6596 classes, and the class with the highest confidence is the predicted character.
the process of training the model based on TensorFlow specifically comprises the following steps:
collecting a large number of images containing printed characters, and collecting 6596 common Chinese characters, punctuations and English characters;
writing characters in the image with the height of 32 pixels into a txt file to be used as label of model training;
based on the TensorFlow training model, softmax is adopted as the output activation function, confidence scores over the 6596 classes are produced, and the class with the highest confidence is the predicted character.
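At inference time, a CRNN + CTC model emits one softmax distribution per time step, and the character string is recovered by CTC decoding. The patent does not state the decoder; the sketch below shows standard greedy CTC decoding (argmax per frame, collapse consecutive repeats, drop the blank), with a toy 4-class example in place of the real 6596 classes:

```python
import numpy as np

def ctc_greedy_decode(logits, blank=0):
    """Greedy CTC decoding: per-frame argmax over the class axis,
    collapse consecutive repeats, then drop the blank label."""
    best = np.argmax(logits, axis=1)            # (T,) best class per frame
    out, prev = [], blank
    for c in best:
        if c != prev and c != blank:
            out.append(int(c))
        prev = c
    return out

# Toy frame sequence over 4 classes (0 = blank): 1,1,blank,2,2,2,blank,1
frames = np.eye(4)[[1, 1, 0, 2, 2, 2, 0, 1]]
# Repeats collapse and blanks vanish, leaving the label sequence [1, 2, 1].
assert ctc_greedy_decode(frames) == [1, 2, 1]
```

The decoded class indices would then be looked up in the 6596-entry character table to produce the recognized text.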
And finally, according to the character area and the character recognition result, information such as name, gender, ethnicity, address, citizen identification number and the like on the identification card is extracted in a structured mode, and the information is combined in a key-value mode.
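Because the card layout is fixed on the corrected image, one simple way to realize this structured extraction is to assign each detected text box to a field by its position. The following sketch is purely illustrative — the coordinate thresholds and the sample values (including the ID number, a dummy) are invented, not taken from the patent:

```python
def structure_fields(detections):
    """Assign recognized text lines to ID-card fields by their position
    on the corrected card image (fixed layout: name on top, ID number at
    the bottom). `detections` is a list of ((x, y, w, h), text) tuples;
    the y/x thresholds below are illustrative, not from the patent."""
    fields = {}
    for (x, y, w, h), text in sorted(detections, key=lambda d: d[0][1]):
        if y < 80:
            fields["name"] = text
        elif y < 160:
            # Gender and ethnicity share one row; split on x position.
            fields["ethnicity" if x > 200 else "gender"] = text
        elif y < 400:
            # The address may span several lines; concatenate them.
            fields["address"] = fields.get("address", "") + text
        else:
            fields["id_number"] = text
    return fields

dets = [((60, 40, 120, 30), "张三"),
        ((60, 120, 40, 30), "男"), ((240, 120, 40, 30), "汉"),
        ((60, 300, 300, 30), "浙江省杭州市"),
        ((150, 460, 400, 30), "330100199001011234")]   # dummy values
fields = structure_fields(dets)
assert fields["name"] == "张三" and fields["id_number"] == "330100199001011234"
```

The resulting dictionary is exactly the key-value combination the patent describes returning to the caller.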
Example 2:
a mobile device, specifically a mobile phone, is provided with the deep learning algorithm model for processing an identity card image described in embodiment 1;
the mobile phone comprises: a processor and a memory communicatively coupled to the processor; wherein the memory stores instructions executable by the processor to cause the processor to perform method steps for implementing deployment of deep learning models to mobile terminals for execution as described in embodiment 1.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit it. All equivalent or simple changes to the structures, features, and principles described within the inventive concept of this patent are included in its scope of protection. Those skilled in the art may make various modifications, additions, and substitutions to the specific embodiments described without departing from the scope of the invention as defined in the accompanying claims.
Claims (10)
1. A mobile terminal identity card identification method based on deep learning is characterized by comprising the following specific steps:
deploying a deep learning model at a mobile terminal;
acquiring an image containing an identity card from mobile terminal equipment;
acquiring a corrected identity card image from the image containing the identity card;
and extracting the identity card information from the corrected identity card image.
2. The deep learning-based mobile terminal identification card recognition method according to claim 1,
acquiring a face image on the corrected identity card image, and comparing the face image with a face image stored in a database to realize identity authentication;
acquiring a character area on the corrected ID card image, and intercepting character slices;
identifying characters in the character slices to obtain an identified character result;
and based on the text area and the text result, the identity card information is structurally extracted.
3. The method for mobile-side identification card recognition based on deep learning of claim 1, wherein deploying a deep learning model at the mobile side comprises:
normalizing the image sample;
constructing a deep learning model calculation graph based on TensorFlow;
performing iterative training based on a gradient descent method to minimize a loss function to obtain an optimal model;
solidifying and storing the model structure and parameters;
converting the deep learning model to a tflite format model;
quantizing the tflite format model.
4. The deep learning-based mobile terminal identification card recognition method according to claim 3, wherein quantizing the tflite format model comprises: parameters in the tflite model are converted from float format to int type.
5. The method for identifying the mobile terminal identity card based on the deep learning of any one of claims 1 to 4, wherein the deep learning model comprises an identity card vertex detection model, a face detection model, a character detection model and a character identification model.
6. The method for identifying the identity card of the mobile terminal based on the deep learning of claim 5, wherein the mobile terminal device calls the locally deployed vertex detection model to predict the four vertices of the identity card in the image; and performing perspective transformation according to the four vertices of the identity card in the image to obtain a corrected identity card image without a background.
7. The method for recognizing the mobile terminal identification card based on the deep learning of claim 5, wherein the mobile terminal device calls a locally deployed face detection model to detect the face position in the image;
acquiring a face image according to the face position in the detected image;
uploading the face image to a server according to the acquired face image;
and carrying out face comparison and identity verification on the face data in the database.
8. The method for identifying the mobile terminal identification card based on the deep learning of claim 5, wherein the mobile terminal device calls a locally deployed character detection model to detect a character region in an image; and intercepting a character area in the image to obtain a character slice.
9. The method for recognizing the mobile terminal identification card based on the deep learning of claim 5, wherein the mobile terminal device calls a locally deployed character recognition model to recognize characters in an image and outputs a character recognition result.
10. A mobile device on which the deep learning algorithm model for processing an identity card image according to any one of claims 1 to 9 is deployed, so as to implement the mobile terminal identity card recognition method;
the mobile device comprises: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the steps of the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010229029.2A CN111428656A (en) | 2020-03-27 | 2020-03-27 | Mobile terminal identity card identification method based on deep learning and mobile device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111428656A true CN111428656A (en) | 2020-07-17 |
Family
ID=71549724
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010229029.2A Pending CN111428656A (en) | 2020-03-27 | 2020-03-27 | Mobile terminal identity card identification method based on deep learning and mobile device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111428656A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112348024A (en) * | 2020-10-29 | 2021-02-09 | 北京信工博特智能科技有限公司 | Image-text identification method and system based on deep learning optimization network |
CN112541772A (en) * | 2020-12-04 | 2021-03-23 | 浪潮云信息技术股份公司 | Merchant-oriented qualification authentication method |
CN112949427A (en) * | 2021-02-09 | 2021-06-11 | 北京奇艺世纪科技有限公司 | Person identification method, electronic device, storage medium, and apparatus |
CN113869299A (en) * | 2021-09-30 | 2021-12-31 | 中国平安人寿保险股份有限公司 | Bank card identification method and device, computer equipment and storage medium |
CN113869299B (en) * | 2021-09-30 | 2024-06-11 | 中国平安人寿保险股份有限公司 | Bank card identification method and device, computer equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102521569A (en) * | 2011-11-30 | 2012-06-27 | 康佳集团股份有限公司 | Method and system for identifying identity card by using smart phone and mobile phone |
CN110008961A (en) * | 2019-04-01 | 2019-07-12 | 深圳市华付信息技术有限公司 | Text real-time identification method, device, computer equipment and storage medium |
CN110443184A (en) * | 2019-07-31 | 2019-11-12 | 上海海事大学 | ID card information extracting method, device and computer storage medium |
CN110705398A (en) * | 2019-09-19 | 2020-01-17 | 安徽七天教育科技有限公司 | Mobile-end-oriented test paper layout image-text real-time detection method |
Non-Patent Citations (2)
Title |
---|
SONG YAN: "Implementing Light Intelligence on MCUs with Machine Learning" (在MCU上运用机器学习实现轻智能), www.eepw.com.cn * |
XIONG YAMENG: "A TensorFlow-Based Image Recognition Method for Mobile Terminals" (基于TensorFlow的移动终端图像识别方法), Wireless Internet Technology (无线互联科技) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20240048658A1 (en) | Content-based object detection, 3d reconstruction, and data extraction from digital images | |
US11620733B2 (en) | Content-based object detection, 3D reconstruction, and data extraction from digital images | |
CN107239786B (en) | Character recognition method and device | |
WO2019119966A1 (en) | Text image processing method, device, equipment, and storage medium | |
US9626555B2 (en) | Content-based document image classification | |
CN111428656A (en) | Mobile terminal identity card identification method based on deep learning and mobile device | |
CN113378710B (en) | Layout analysis method and device for image file, computer equipment and storage medium | |
CN112396047B (en) | Training sample generation method and device, computer equipment and storage medium | |
CN112070649A (en) | Method and system for removing specific character string watermark | |
US8773733B2 (en) | Image capture device for extracting textual information | |
CN110796145B (en) | Multi-certificate segmentation association method and related equipment based on intelligent decision | |
CN115171125A (en) | Data anomaly detection method | |
CN106611148B (en) | Image-based offline formula identification method and device | |
US8768058B2 (en) | System for extracting text from a plurality of captured images of a document | |
US11514702B2 (en) | Systems and methods for processing images | |
US8908970B2 (en) | Textual information extraction method using multiple images | |
US9378428B2 (en) | Incomplete patterns | |
CN112836632B (en) | Method and system for realizing user-defined template character recognition | |
US20220398399A1 (en) | Optical character recognition systems and methods for personal data extraction | |
CN114120305A (en) | Training method of text classification model, and recognition method and device of text content | |
CN114494678A (en) | Character recognition method and electronic equipment | |
WO2019071476A1 (en) | Express information input method and system based on intelligent terminal | |
CN112287763A (en) | Image processing method, apparatus, device and medium | |
CN113591657A (en) | OCR (optical character recognition) layout recognition method and device, electronic equipment and medium | |
CN115631493B (en) | Text region determining method, system and related device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: Xinyada technology building, 3888 Jiangnan Avenue, Binjiang District, Hangzhou City, Zhejiang Province 310051
Applicant after: Sinyada Technology Co.,Ltd.
Address before: Xinyada technology building, 3888 Jiangnan Avenue, Binjiang District, Hangzhou City, Zhejiang Province 310051
Applicant before: SUNYARD SYSTEM ENGINEERING Co.,Ltd.
RJ01 | Rejection of invention patent application after publication ||
Application publication date: 2020-07-17