CN114820476A - Identification card identification method based on compliance detection - Google Patents


Info

Publication number
CN114820476A
Authority
CN
China
Prior art keywords
identity card
picture
compliance
effective
detection
Prior art date
Legal status
Pending
Application number
CN202210378158.7A
Other languages
Chinese (zh)
Inventor
曹娟
陈浩
俞颖超
谢添
Current Assignee
Hangzhou Zhongke Ruijian Technology Co ltd
Zhongke Computing Technology Innovation Research Institute
Original Assignee
Hangzhou Zhongke Ruijian Technology Co ltd
Zhongke Computing Technology Innovation Research Institute
Priority date
Filing date
Publication date
Application filed by Hangzhou Zhongke Ruijian Technology Co ltd, Zhongke Computing Technology Innovation Research Institute filed Critical Hangzhou Zhongke Ruijian Technology Co ltd
Priority to CN202210378158.7A
Publication of CN114820476A
Legal status: Pending

Classifications

    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/047: Neural network architectures; probabilistic or stochastic networks
    • G06N 3/08: Neural networks; learning methods
    • G06T 7/13: Segmentation; edge detection
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/40: Analysis of texture
    • G06T 2207/10024: Image acquisition modality; color image
    • G06T 2207/20081: Special algorithmic details; training, learning
    • G06T 2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06T 2207/30168: Subject of image; image quality inspection

Abstract

The invention relates to an identification card identification method based on compliance detection, which comprises the following steps: inputting the identity card picture to be recognized into a trained identity card type detection model, which judges whether the type of the picture is compliant; performing effective-region detection on pictures whose type is compliant, and locating the valid text regions and the portrait region on the identity card picture; calculating a sharpness score for each valid text region at a set resolution scale and taking the mean of the scores of all text regions as the overall sharpness score of the picture; segmenting the individual characters in each valid text region, calculating the ratio of the spacing between adjacent characters to the combined length of the two characters, matching the ratios against a standard template, and treating ratios outside the acceptable range as indicating possible tampering; and inputting the valid portrait region into a trained portrait tampering detection model, which completes tampering detection of the portrait region. The invention is applicable to the technical field of image recognition.

Description

Identification card identification method based on compliance detection
Technical Field
The invention relates to an identification card identification method based on compliance detection, and is applicable to the technical field of image recognition.
Background
An identity document is an individual's most important proof of identity; it is indispensable in nearly every scenario, whether buying travel tickets, applying for a new mobile phone number or bank card, applying for bank credit, or handling everyday personal business. Traditional identity card verification uses second-generation ID card reading equipment to read the RFID chip in the card and extract key personal information such as name, ethnicity, date of birth, address and ID number; this approach places strong limitations on the usage scenario.
With the development and spread of mobile internet, many industries now offer online business handling, and the traditional approach of reading information with an ID card reader is no longer suitable. Many ID card information identification and verification functions are now completed online in real time and rely on ID card OCR, which reduces manual data entry; a large number of business scenarios, such as personal services on government websites, bank credit and insurance, also require photographs of personal ID cards for data verification and record keeping.
At the same time, ID card information verification and identification face several problems. Current OCR solutions generally focus on the accuracy of character recognition while ignoring photo quality, the credibility of the content, and whether the photo meets the submission specifications. Although users are prompted to upload a clear, correct ID card when handling business, blurred pictures still appear because of shooting equipment or personal reasons, and non-ID-card pictures are sometimes uploaded maliciously, which increases the manual review workload. When the RFID chip cannot be read by an ID card reader, the authenticity of the submitted ID card cannot be guaranteed: characters or the portrait may have been tampered with to forge false information. With advances in deep learning and image-editing software, tampering tools are increasingly powerful, and forged ID cards that cannot be distinguished at a glance have appeared.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the above problems, an identity card identification method based on compliance detection is provided, which extracts the key information of the identity card while ensuring, to a certain extent, the security of the whole identification process.
The technical scheme adopted by the invention is as follows: an identification card identification method based on compliance detection, characterized by:
inputting the identity card picture to be recognized into a trained identity card type detection model, and judging through the model whether the type of the picture is compliant;
performing effective-region detection on identity card pictures whose type is compliant, and locating the valid text regions and the portrait region on the picture;
calculating the sharpness score of each valid text region on the identity card picture at a set resolution scale, taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture, and judging the picture to be a blurred sample when the overall sharpness score is smaller than a set threshold;
segmenting the individual characters in each valid text region, calculating the ratio of the spacing between adjacent characters to the combined length of the two characters, matching the ratios against a standard template, and treating ratios outside the acceptable range as indicating possible tampering;
inputting the valid portrait region into a trained portrait tampering detection model, and completing tampering detection of the portrait region through the model.
The identity card type detection model is built by adding an SE (Squeeze-and-Excitation) module to a residual network (ResNet).
The effective-region detection on identity card pictures whose type is compliant comprises: detecting the regions using a CenterNet object detection network.
Calculating the sharpness score of each valid text region on the identity card picture at a set resolution scale and taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture comprises the following steps:
extracting edge features of each valid text region with the Sobel operator:
G_x = K_x * A,  G_y = K_y * A
where A denotes the text region image, K_x and K_y are the Sobel convolution kernels in the X and Y directions, * denotes convolution, and G_x and G_y are the convolution results in the X and Y directions;
performing element-wise (dot) multiplication of the two results, summing every element of the resulting matrix, and dividing the sum by the region height h to obtain the sharpness score of that region;
summing the sharpness scores of all valid text regions and taking their mean as the overall score:
score = (1/n) · Σ_{i=1..n} sum(G_x^(i) ⊙ G_y^(i)) / h_i
where h_i is the height of the i-th text region, sum(·) denotes summation of the matrix elements, ⊙ denotes element-wise multiplication, and n is the number of valid text regions.
The segmenting of the individual characters in the valid text region comprises:
for each valid text region, obtaining the binarization threshold required for the binary image using Otsu's method, and segmenting the individual characters using a projection method.
Inputting the effective portrait area into the trained portrait tampering detection model, and completing the tampering detection of the portrait area through the model, wherein the method comprises the following steps:
the method comprises the steps of adopting an Efficientnet-b7 network as a basic feature extractor, simultaneously adopting the thought of double-flow Faster-RCNN added with noise flow, capturing edge features between tampered and non-tampered portraits and deep background texture features, adopting the thought of a feature pyramid, and splicing the edge features and RGB flow features of different scales in the forward propagation of the first layers of the network.
The input of the edge feature stream is an edge feature map T extracted with the traditional Canny operator; it is propagated forward through downsampling convolutions with sliding stride 2, building an edge-feature convolution layer structure, and the feature map after the n-th downsampling, with K denoting the convolution kernel and *_{s=2} denoting stride-2 convolution, is:
T_n = K *_{s=2} T_{n-1},  with T_0 = T
The RGB stream at the network input is likewise downsampled with stride-2 convolutions, ensuring that in the first n downsamplings the feature map size is halved each time and stays consistent with the corresponding edge-stream feature map. RGB convolution layers are built, with the RGB-stream convolution features denoted D, and after each downsampling the convolution feature layers of the two branches are concatenated, so that for the RGB stream each of the first n convolution downsamplings also contains the features obtained from the edge-stream convolutions. The feature map after the n-th downsampling is computed as:
D_n = [K *_{s=2} D_{n-1}, T_n],  with D_0 the RGB input, where [·, ·] denotes channel-wise concatenation.
The last fully connected layer of the network is replaced with global average pooling; real and forged training data sets are constructed for the identity card portrait scenario, with X denoting the final feature map output by the forward pass of the EfficientNet-b7 network, and tampering detection of the portrait region is finally completed by a trained softmax classifier.
An identification card recognition device based on compliance detection, comprising:
the identity card type detection module is used for inputting the identity card picture to be identified into the trained identity card type detection model and judging whether the type of the identity card picture is in compliance or not through the model;
the effective region detection module is used for carrying out effective region detection on the identity card pictures with the types being in compliance and positioning an effective text region and a portrait region on the identity card pictures;
the sharpness judging module is used for calculating the sharpness score of each valid text region on the identity card picture at the set resolution scale, taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture, and judging the picture as a blurred sample when the overall sharpness score is smaller than a set threshold;
the text tampering detection module is used for segmenting the individual characters in the valid text region, calculating the ratio of the spacing between adjacent characters to the combined length of the two characters, matching against a standard template, and treating ratios outside the acceptable range as indicating possible tampering;
and the portrait tampering detection module is used for inputting the effective portrait area into the trained portrait tampering detection model and completing the tampering detection of the portrait area through the model.
A storage medium having stored thereon a computer program executable by a processor, wherein the computer program, when executed, implements the steps of the compliance-detection-based identification card identification method.
A computer device having a memory and a processor, the memory storing a computer program executable by the processor, wherein the computer program, when executed, implements the steps of the compliance-detection-based identification card identification method.
The invention has the following beneficial effects. Blank areas and excessive background in an identity card image strongly interfere with sharpness judgment; the invention therefore first obtains the effective regions, scales each effective-region image to one of several fixed ranges, extracts its edge features, processes the features of the multiple regions, and then averages the results of all regions to judge whether the picture is sharp enough to meet the uploading requirement.
The invention uses a deep learning approach: it constructs training samples, builds a residual network, and introduces the attention idea from computer vision, using the network as a feature extractor to classify features in a high-dimensional space and to judge whether the photographed picture is the portrait side or national-emblem side of an identity card, an original or a photocopy, or a screen-recaptured picture. Through this quality compliance detection step, a large number of non-compliant pictures can be filtered out, reducing wasted machine and human resources.
The invention combines traditional methods with deep learning: based on a standard identity card template, traditional logical rule matching is performed; adaptive thresholding is applied to the valid text regions obtained in the image quality compliance detection step, the characters are accurately segmented, the character spacing proportions are calculated and compared with the standard template data, and it is judged whether characters have been forged or tampered with.
The method is based on deep learning while also using traditional edge feature extraction, focusing on the edge feature information where the portrait on the identity card meets the surrounding regions. Based on the feature pyramid concept, the edge information is downsampled multiple times, and dual-stream spliced features of the RGB stream and the edge information stream are constructed at the input and during the forward pass of the network. The fully connected layer is replaced with a global average pooling layer, a classifier is trained, and it is judged whether the portrait has been tampered with or replaced.
Drawings
FIG. 1 is a flow chart of an embodiment.
Detailed Description
The embodiment is an identity card identification method based on compliance detection, which specifically comprises the following steps:
and S1, inputting the identification card picture to be recognized into the trained identification card type detection model, and judging whether the type of the identification card picture is in compliance or not through the model.
In this embodiment, the identity card type detection model is based on a residual network (ResNet) with an SE (Squeeze-and-Excitation Networks) module added, introducing the attention idea from machine vision. Photographed identity card pictures are divided into six categories: original portrait side, original national-emblem side, photocopy portrait side, photocopy national-emblem side, screen-recaptured picture, and non-identity-card picture. A training data set of the six image types is constructed, the probability of each category is obtained from a trained softmax classifier, and, according to the actual uploading requirements, identity card photos whose category is not compliant receive no further processing.
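For illustration only, the following is a minimal sketch of such a type classifier, assuming PyTorch and torchvision are available; the ResNet-50 depth, the reduction ratio, and the placement of a single SE block after the backbone output (rather than inside every residual block, as is more usual) are simplifying assumptions, not the exact configuration of this embodiment.
```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # excitation weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # channel-wise attention

class IdCardTypeNet(nn.Module):
    """ResNet feature extractor + SE attention + 6-way softmax classifier."""
    def __init__(self, num_classes=6):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool/fc
        self.se = SEBlock(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(2048, num_classes)

    def forward(self, x):
        x = self.se(self.features(x))
        logits = self.fc(self.pool(x).flatten(1))
        return torch.softmax(logits, dim=1)          # probability of each of the 6 categories
```
Such a network would be trained with cross-entropy on the six-category data set described above; the category probabilities then drive the compliance decision.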
S2: effective-region detection is performed on identity card pictures whose type is compliant, locating the valid text regions and the portrait region on the picture.
Effective-region detection on a category-compliant identity card is formulated, following the idea of object detection, as a two-class detection task covering text and portrait. The different key information fields (name, gender, ethnicity, date of birth, address, identity card number, issuing authority and validity period) are treated as eight targets all belonging to the text class; if the address field spans several text lines, they are merged into a single target.
In this embodiment, a CenterNet object detection network is used to locate the multiple valid text regions and the portrait region on the identity card picture.
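For reference, a minimal sketch of how CenterNet-style outputs (class heatmaps plus size and offset maps) can be decoded into the text and portrait boxes used here; the tensor layouts, the top-k limit and the score threshold are illustrative assumptions, not the exact post-processing of this embodiment.
```python
import torch
import torch.nn.functional as F

def decode_centernet_heatmap(heatmap, wh, offset, k=16, score_thresh=0.3):
    """Turn CenterNet outputs into boxes.
    heatmap: 1 x C x H x W class heatmaps (C = 2: text / portrait)
    wh:      1 x 2 x H x W predicted box width/height per center
    offset:  1 x 2 x H x W sub-pixel center offsets
    """
    # keep only local peaks (3x3 max-pool acts as a cheap NMS on the heatmap)
    peaks = F.max_pool2d(heatmap, 3, stride=1, padding=1)
    heatmap = heatmap * (peaks == heatmap).float()

    scores, inds = heatmap.view(-1).topk(k)
    c, h, w = heatmap.shape[1:]
    cls = inds // (h * w)
    ys, xs = (inds % (h * w)) // w, (inds % (h * w)) % w

    boxes = []
    for s, cl, y, x in zip(scores, cls, ys, xs):
        if s < score_thresh:
            continue
        ox, oy = offset[0, :, y, x]
        bw, bh = wh[0, :, y, x]
        cx, cy = x + ox, y + oy
        boxes.append((cl.item(), s.item(),
                      (cx - bw / 2).item(), (cy - bh / 2).item(),
                      (cx + bw / 2).item(), (cy + bh / 2).item()))
    return boxes  # (class, score, x1, y1, x2, y2) in heatmap coordinates
```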
S3: the sharpness score of each valid text region on the identity card picture is calculated at the set resolution scale, the mean of the sharpness scores of all text regions is taken as the overall sharpness score of the picture, and the picture is judged to be a blurred sample when the overall sharpness score is smaller than the set threshold.
Because of differences in shooting equipment, parameters and so on, the resolution of identity card images varies widely; the detected effective regions therefore need to be scaled so that each region is fixed to a suitable proportion, ensuring that the effective regions of different identity card images are judged for sharpness at the same set resolution scale.
Edge features of the valid text regions obtained above are extracted with the Sobel operator, which gives the first-order gradient of the text region and has a certain suppressive effect on noise:
G_x = K_x * A,  G_y = K_y * A
where A denotes the text region image, K_x and K_y are the Sobel convolution kernels in the X and Y directions, * denotes convolution, and G_x and G_y are the convolution results in the X and Y directions.
The two results are multiplied element-wise (dot multiplication), every element of the resulting matrix is summed, and the sum is divided by the region height h to obtain the sharpness score of that region; the sharpness scores of all text regions are then summed and averaged to give the overall score:
score = (1/n) · Σ_{i=1..n} sum(G_x^(i) ⊙ G_y^(i)) / h_i
where h_i is the height of the i-th text region, sum(·) denotes summation of the matrix elements, ⊙ denotes element-wise multiplication, and n is the number of valid text regions on the identity card picture.
A suitable sharpness threshold is set and adjusted according to the prior distribution of blurred samples; pictures whose overall sharpness score falls below this threshold are regarded as blurred samples.
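A minimal sketch of the sharpness scoring described above, assuming OpenCV and NumPy are available; the fixed region height and the blur threshold are placeholder values (the embodiment tunes the threshold from the prior of blurred samples).
```python
import cv2
import numpy as np

def region_sharpness(region_bgr, target_height=48):
    """Sharpness score of one text region: sum of the element-wise product of the
    Sobel gradients, divided by the (fixed) region height, as in the rule above."""
    # rescale to a fixed height so all regions are compared at the same resolution scale
    h, w = region_bgr.shape[:2]
    region_bgr = cv2.resize(region_bgr, (max(1, int(w * target_height / h)), target_height))
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # gradient in X
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # gradient in Y
    return float(np.sum(gx * gy)) / target_height

def picture_sharpness(text_regions, blur_threshold=0.0):
    """Average the region scores; pictures below the threshold are treated as blurred.
    blur_threshold is a placeholder and would be tuned from known blurred samples."""
    scores = [region_sharpness(r) for r in text_regions]
    overall = float(np.mean(scores)) if scores else 0.0
    return overall, overall < blur_threshold
```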
S4: the individual characters in each valid text region are segmented, the ratio of the spacing between adjacent characters to the combined length of the two characters is calculated and matched against a standard template, and ratios outside the acceptable range indicate possible tampering.
For each detected valid text region, the binarization threshold is obtained with Otsu's method. Fields such as name and ethnicity are projected in the vertical direction, while the address field, which may contain multiple text lines, is projected in both the horizontal and vertical directions, and the individual characters are segmented by this projection method.
For the character sets segmented from the different text regions, the ratio of the spacing between two adjacent characters to their combined length is calculated and matched against the standard template; an abnormal ratio that exceeds the acceptable range indicates possible tampering.
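A sketch of the Otsu-plus-projection segmentation and the spacing-ratio check, assuming a single horizontal line of text per region and an available list of template ratios; the minimum character width and the tolerance are illustrative values, and multi-line fields such as the address would first be split by a horizontal projection.
```python
import cv2
import numpy as np

def split_characters(region_bgr, min_width=2):
    """Binarize a single-line text region with Otsu's threshold and split characters
    by vertical projection (columns with no ink separate characters)."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    projection = (binary > 0).sum(axis=0)            # ink pixels per column

    spans, start = [], None
    for x, count in enumerate(projection):
        if count > 0 and start is None:
            start = x
        elif count == 0 and start is not None:
            if x - start >= min_width:
                spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(projection)))
    return spans                                     # (left, right) per character

def spacing_ratios(spans):
    """Ratio of the gap between adjacent characters to their combined width."""
    return [(l2 - r1) / float((r1 - l1) + (r2 - l2))
            for (l1, r1), (l2, r2) in zip(spans, spans[1:])]

def flag_tampering(ratios, template_ratios, tolerance=0.15):
    """Compare against the standard-template ratios; deviations beyond the
    tolerance (an illustrative value) are flagged as possible tampering."""
    if len(ratios) != len(template_ratios):
        return True
    return any(abs(r - t) > tolerance for r, t in zip(ratios, template_ratios))
```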
S5: the valid portrait region is input into the trained portrait tampering detection model, and the model completes tampering detection of the portrait region.
An EfficientNet-b7 network is used as the basic feature extractor. Following the idea of dual-stream Faster R-CNN with an added noise stream, edge features at the boundary between tampered and untampered portrait regions and deep background texture features are captured; following the idea of a feature pyramid, edge features and RGB-stream features at different scales are concatenated during forward propagation through the first few layers of the network.
The input of the edge feature stream is an edge feature map T extracted with the traditional Canny operator; it is propagated forward through downsampling convolutions with sliding stride 2, building an edge-feature convolution layer structure, and the feature map after the n-th downsampling, with K denoting the convolution kernel and *_{s=2} denoting stride-2 convolution, is:
T_n = K *_{s=2} T_{n-1},  with T_0 = T
Similarly, the RGB stream at the network input is downsampled with stride-2 convolutions, ensuring that in the first n downsamplings the feature map size is halved each time and stays consistent with the corresponding edge-stream feature map. RGB convolution layers are built, with the RGB-stream convolution features denoted D, and after each downsampling the convolution feature layers of the two branches are concatenated, so that for the RGB stream each of the first n convolution downsamplings also contains the features obtained from the edge-stream convolutions. In this method the downsampling is performed three times, and the edge-stream information is discarded after the third downsampling. The feature map after the n-th downsampling is computed as:
D_n = [K *_{s=2} D_{n-1}, T_n],  with D_0 the RGB input, where [·, ·] denotes channel-wise concatenation.
In this embodiment the last fully connected layer of the network is replaced with a global average pooling (GAP) layer, which reduces overfitting, lowers the amount of computation, preserves spatial information and lets the classifier use global information. Real and forged training data sets are constructed for the identity card portrait scenario, with X denoting the final feature map output by the forward pass of the EfficientNet-b7 network, and tampering detection of the portrait region is finally completed by a trained softmax classifier:
p = softmax(W · GAP(X))
where GAP(·) denotes global average pooling over the final feature map X and W denotes the weights of the softmax classifier.
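A simplified sketch of the dual-stream tampering detector, assuming PyTorch, OpenCV and the timm library for the EfficientNet-b7 backbone; the channel widths, the number of stride-2 downsamplings, the Canny thresholds and the way the spliced features are fed into the backbone (whose own stem normally expects raw RGB) are assumptions for illustration, not the exact network of this embodiment.
```python
import cv2
import numpy as np
import timm
import torch
import torch.nn as nn

def canny_edge_tensor(portrait_bgr):
    """Edge map T extracted with the Canny operator, as a 1 x 1 x H x W tensor."""
    gray = cv2.cvtColor(portrait_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0   # illustrative thresholds
    return torch.from_numpy(edges)[None, None]

class DualStreamTamperNet(nn.Module):
    """RGB stream + edge stream, concatenated after each of the first n stride-2
    downsamplings, then an EfficientNet-b7 feature extractor, global average
    pooling and a 2-way softmax (genuine / tampered)."""
    def __init__(self, n_down=3, width=32):
        super().__init__()
        self.edge_convs = nn.ModuleList()
        self.rgb_convs = nn.ModuleList()
        e_in, r_in = 1, 3
        for _ in range(n_down):
            self.edge_convs.append(nn.Conv2d(e_in, width, 3, stride=2, padding=1))
            self.rgb_convs.append(nn.Conv2d(r_in, width, 3, stride=2, padding=1))
            e_in, r_in = width, width + width        # RGB stream next sees the concatenation
        self.backbone = timm.create_model(
            "efficientnet_b7", pretrained=False, in_chans=r_in,
            num_classes=0, global_pool="avg")        # GAP instead of a final FC layer
        self.classifier = nn.Linear(self.backbone.num_features, 2)

    def forward(self, rgb, edge):
        for e_conv, r_conv in zip(self.edge_convs, self.rgb_convs):
            edge = torch.relu(e_conv(edge))
            rgb = torch.relu(r_conv(rgb))
            rgb = torch.cat([rgb, edge], dim=1)      # splice edge features into the RGB stream
        return torch.softmax(self.classifier(self.backbone(rgb)), dim=1)
```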
After the identity card image passes compliance detection, accurate OCR recognition of the identity card content is carried out: the key field information of the identity card is extracted through preprocessing, text detection, text recognition and post-processing.
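The embodiment does not name a specific OCR engine; purely for illustration, the following sketch uses the open-source PaddleOCR toolkit as a stand-in, and both the result layout (per recent PaddleOCR 2.x releases) and the keyword-based field mapping are assumptions.
```python
from paddleocr import PaddleOCR

ocr = PaddleOCR(use_angle_cls=True, lang="ch")       # detection + recognition + angle classification

def extract_id_fields(image_path):
    """Run OCR on a compliant identity card picture and map lines to key fields."""
    result = ocr.ocr(image_path, cls=True)
    lines = [text for box, (text, score) in (result[0] or []) if score > 0.5]
    fields = {}
    for line in lines:
        # illustrative keyword matching against the standard field labels
        for key in ("姓名", "性别", "民族", "出生", "住址", "公民身份号码"):
            if line.startswith(key):
                fields[key] = line[len(key):].strip()
    return fields
```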
By extracting the identity card information content with OCR, this embodiment reduces manual work while performing a double compliance check on the identity card picture, covering both quality and content. Combining traditional methods with deep learning, it mines the characteristics of the identity card and of its characters and portrait in depth, alleviating to a certain extent the inefficient repeated review caused by low identity card image quality as well as problems such as maliciously forged false information, and yields identity card OCR recognition results whose quality and content are assured.
This embodiment also provides an identity card recognition device based on compliance detection, comprising: an identity card type detection module, an effective region detection module, a sharpness judging module, a text tampering detection module and a portrait tampering detection module.
In this embodiment, the identity card type detection module is used to input the identity card picture to be recognized into the trained identity card type detection model and to judge through the model whether the type of the picture is compliant; the effective region detection module is used to perform effective-region detection on identity card pictures whose type is compliant and to locate the valid text regions and the portrait region on the picture; the sharpness judging module is used to calculate the sharpness score of each valid text region at the set resolution scale, to take the mean of the sharpness scores of all text regions as the overall sharpness score of the picture, and to judge the picture as a blurred sample when the overall sharpness score is smaller than the set threshold; the text tampering detection module is used to segment the individual characters in the valid text regions, to calculate the ratio of the spacing between adjacent characters to the combined length of the two characters, to match the ratios against a standard template, and to treat ratios outside the acceptable range as indicating possible tampering; the portrait tampering detection module is used to input the valid portrait region into the trained portrait tampering detection model, and tampering detection of the portrait region is completed through the model.
The present embodiment also provides a storage medium having stored thereon a computer program executable by a processor, the computer program when executed implementing the steps of the compliance detection based identification card identification method of the present embodiment.
The present embodiment also provides a computer device having a memory and a processor, the memory having stored thereon a computer program executable by the processor, the computer program when executed implementing the steps of the compliance detection based identification card identification method of the present embodiment.

Claims (10)

1. An identification card identification method based on compliance detection, characterized by:
inputting the identity card picture to be recognized into a trained identity card type detection model, and judging through the model whether the type of the picture is compliant;
performing effective-region detection on identity card pictures whose type is compliant, and locating the valid text regions and the portrait region on the picture;
calculating the sharpness score of each valid text region on the identity card picture at a set resolution scale, taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture, and judging the picture to be a blurred sample when the overall sharpness score is smaller than a set threshold;
segmenting the individual characters in each valid text region, calculating the ratio of the spacing between adjacent characters to the combined length of the two characters, matching the ratios against a standard template, and treating ratios outside the acceptable range as indicating possible tampering;
inputting the valid portrait region into a trained portrait tampering detection model, and completing tampering detection of the portrait region through the model.
2. The compliance-detection-based identification card recognition method of claim 1, wherein: the identity card type detection model is built by adding an SE module to a residual network (ResNet).
3. The compliance-detection-based identification card recognition method of claim 1, wherein the performing of effective-region detection on identity card pictures whose type is compliant comprises: detecting the regions using a CenterNet object detection network.
4. The compliance-detection-based identification card recognition method of claim 1, wherein calculating the sharpness score of each valid text region on the identity card picture at a set resolution scale and taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture comprises:
extracting edge features of each valid text region with the Sobel operator:
G_x = K_x * A,  G_y = K_y * A
where A denotes the text region image, K_x and K_y are the Sobel convolution kernels in the X and Y directions, * denotes convolution, and G_x and G_y are the convolution results in the X and Y directions;
performing element-wise (dot) multiplication of the two results, summing every element of the resulting matrix, and dividing the sum by the region height h to obtain the sharpness score of that region;
summing the sharpness scores of all valid text regions and taking their mean as the overall score:
score = (1/n) · Σ_{i=1..n} sum(G_x^(i) ⊙ G_y^(i)) / h_i
where h_i is the height of the i-th text region, sum(·) denotes summation of the matrix elements, ⊙ denotes element-wise multiplication, and n is the number of valid text regions.
5. The compliance detection-based identification card recognition method of claim 1, wherein the segmenting individual characters in the valid text region comprises:
and (4) solving a segmentation threshold value required by the binary image for each effective text region by using an Otsu method, and segmenting the single character by using a projection method.
6. The identity card identification method based on compliance detection as claimed in claim 1, wherein the inputting of the valid portrait area into the trained portrait tampering detection model, and the completion of the portrait area tampering detection by the model, comprises:
the method comprises the steps of adopting an Efficientnet-b7 network as a basic feature extractor, simultaneously adopting the thought of double-current fast-RCNN added with noise flow, capturing edge features between tampered and non-tampered portraits and deep background texture features, adopting the thought of a feature pyramid, and splicing the edge features and RGB flow features of different scales in the forward propagation of the first layers of the network.
7. The compliance-detection-based identification card recognition method of claim 6, wherein: the input of the edge feature stream is an edge feature map T extracted with the traditional Canny operator; it is propagated forward through downsampling convolutions with sliding stride 2, building an edge-feature convolution layer structure, and the feature map after the n-th downsampling, with K denoting the convolution kernel and *_{s=2} denoting stride-2 convolution, is:
T_n = K *_{s=2} T_{n-1},  with T_0 = T;
the RGB stream at the network input is likewise downsampled with stride-2 convolutions, ensuring that in the first n downsamplings the feature map size is halved each time and stays consistent with the corresponding edge-stream feature map; RGB convolution layers are built, with the RGB-stream convolution features denoted D, and after each downsampling the convolution feature layers of the two branches are concatenated, so that for the RGB stream each of the first n convolution downsamplings also contains the features obtained from the edge-stream convolutions; the feature map after the n-th downsampling is computed as:
D_n = [K *_{s=2} D_{n-1}, T_n],  with D_0 the RGB input, where [·, ·] denotes channel-wise concatenation;
the last fully connected layer of the network is replaced with global average pooling, real and forged training data sets are constructed for the identity card portrait scenario, with X denoting the final feature map output by the forward pass of the EfficientNet-b7 network, and tampering detection of the portrait region is finally completed by a trained softmax classifier.
8. An identification card recognition device based on compliance detection, comprising:
the identity card type detection module is used for inputting the identity card picture to be identified into the trained identity card type detection model and judging whether the type of the identity card picture is in compliance or not through the model;
the effective region detection module is used for carrying out effective region detection on the identity card pictures with the types being in compliance and positioning an effective text region and a portrait region on the identity card pictures;
the sharpness judging module is used for calculating the sharpness score of each valid text region on the identity card picture at the set resolution scale, taking the mean of the sharpness scores of all text regions as the overall sharpness score of the picture, and judging the picture as a blurred sample when the overall sharpness score is smaller than a set threshold;
the text tampering detection module is used for segmenting the individual characters in the valid text region, calculating the ratio of the spacing between adjacent characters to the combined length of the two characters, matching against a standard template, and treating ratios outside the acceptable range as indicating possible tampering;
and the portrait tampering detection module is used for inputting the effective portrait area into the trained portrait tampering detection model and completing the tampering detection of the portrait area through the model.
9. A storage medium having stored thereon a computer program executable by a processor, wherein the computer program, when executed, implements the steps of the compliance-detection-based identification card identification method of any one of claims 1 to 7.
10. A computer device having a memory and a processor, the memory storing a computer program executable by the processor, wherein the computer program, when executed, implements the steps of the compliance-detection-based identification card identification method of any one of claims 1 to 7.
CN202210378158.7A 2022-04-12 2022-04-12 Identification card identification method based on compliance detection Pending CN114820476A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210378158.7A CN114820476A (en) 2022-04-12 2022-04-12 Identification card identification method based on compliance detection

Publications (1)

Publication Number Publication Date
CN114820476A true CN114820476A (en) 2022-07-29

Family

ID=82535217

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210378158.7A Pending CN114820476A (en) 2022-04-12 2022-04-12 Identification card identification method based on compliance detection

Country Status (1)

Country Link
CN (1) CN114820476A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116580409A (en) * 2023-07-14 2023-08-11 深圳市明泰智能技术有限公司 Automatic identification method, system and terminal for certificates
CN116580409B (en) * 2023-07-14 2023-09-19 深圳市明泰智能技术有限公司 Automatic identification method, system and terminal for certificates

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination