CN112580621A - Identity card copying and identifying method and device, electronic equipment and storage medium - Google Patents

Identity card copying and identifying method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN112580621A
Authority
CN
China
Prior art keywords
convolution
copying
convolution unit
identity card
identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011558266.XA
Other languages
Chinese (zh)
Other versions
CN112580621B (en)
Inventor
赵小诣
吕文勇
周智杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu New Hope Finance Information Co Ltd
Original Assignee
Chengdu New Hope Finance Information Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu New Hope Finance Information Co Ltd filed Critical Chengdu New Hope Finance Information Co Ltd
Priority to CN202011558266.XA
Publication of CN112580621A
Application granted
Publication of CN112580621B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Credit Cards Or The Like (AREA)

Abstract

The application provides an identity card reproduction recognition method and device, an electronic device, and a storage medium, relating to the technical field of image processing. The method comprises the following steps: carrying out front and back side recognition of the identity card on an original image to obtain a front/back recognition result and a cropped identity card image; carrying out direction correction on the identity card image to obtain a standard identity card image; carrying out reproduction recognition on the standard identity card image with an identity card reproduction recognition model to obtain a first reproduction recognition result; carrying out reproduction recognition on the standard identity card image with a front-and-back general reproduction recognition model to obtain a second reproduction recognition result; carrying out direction correction on the original image to obtain a corrected original image and carrying out reproduction recognition on it with a general reproduction recognition model to obtain a third reproduction recognition result; and determining the identity card reproduction recognition result according to the first, second, and third reproduction recognition results. The method and device can improve the accuracy and applicability of identity card reproduction recognition.

Description

Identity card copying and identifying method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of image processing, and in particular to an identity card reproduction recognition method and device, an electronic device, and a storage medium.
Background
In the prior art, identity card reproduction (i.e., an image recaptured from a screen or a printed copy rather than shot directly from the physical card) is usually recognized either with traditional CV (computer vision) techniques or with a deep learning model. Traditional CV techniques rely on edge detection, texture detection, and similar methods to recognize the moire patterns that reproduction produces in images shot under specific device and shooting conditions, which places requirements on the picture: when the picture lacks moire patterns, recognition accuracy is low, so applicability is limited. Existing deep-learning-based recognition methods cannot handle the various interference factors caused by differences in shooting conditions and likewise suffer from low reproduction recognition accuracy.
Disclosure of Invention
Embodiments of the present application provide an identity card reproduction recognition method and device, an electronic device, and a storage medium, so as to solve the low accuracy and low applicability of current identity card reproduction recognition methods.
An embodiment of the application provides an identity card reproduction recognition method, which comprises the following steps:
carrying out front and back side recognition of the identity card on an original image to obtain a front/back recognition result and a cropped identity card image, wherein the front/back recognition result indicates whether the identity card image is a front-side image or a back-side image of the identity card;
carrying out direction recognition on the identity card image by using an identity card direction recognition model to obtain the shooting direction of the identity card image;
carrying out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image;
based on the front/back recognition result, carrying out reproduction recognition on the standard identity card image by using an identity card reproduction recognition model to obtain a first reproduction recognition result;
carrying out reproduction recognition on the standard identity card image through a front-and-back general reproduction recognition model to obtain a second reproduction recognition result;
correcting the original image according to the shooting direction to obtain a corrected original image, and carrying out general reproduction recognition on the corrected original image through a general reproduction recognition model to obtain a third reproduction recognition result;
and determining an identity card reproduction recognition result according to the first, second, and third reproduction recognition results, wherein the identity card reproduction recognition result indicates whether the original image is a reproduction.
In the above implementation, the identity card reproduction recognition model performs reproduction recognition on the standard identity card image obtained after front and back side recognition, so it is known whether its input is a front-side or a back-side identity card image; the front-and-back general reproduction recognition model performs reproduction recognition on the standard identity card image without relying on the front/back recognition result; and the general reproduction recognition model performs reproduction recognition on the direction-corrected original image. That is, the identity card reproduction recognition model, the front-and-back general reproduction recognition model, and the general reproduction recognition model respectively perform reproduction recognition on images in different states that contain the same identity card information, and the final reproduction recognition result of the original image is determined from the resulting first, second, and third reproduction recognition results. The method places no specific requirements on the image to be recognized, so its applicability is high, and it can eliminate the various interference factors caused by shooting differences, thereby improving the accuracy of identity card reproduction recognition.
Optionally, before the carrying out front and back side recognition on the original image to obtain the front/back recognition result and the cropped identity card image, the method further includes:
judging whether the original image includes an identity card image;
and when the original image includes the identity card image, executing the step of carrying out front and back side recognition of the identity card on the original image to obtain the front/back recognition result and the cropped identity card image.
In this implementation, before the original image undergoes front and back side recognition and cropping, identity card detection is performed on it, and reproduction recognition proceeds only when the original image contains an identity card image. When the original image does not contain an identity card image, reproduction recognition of that image ends; original images that are invalid for identity card reproduction recognition are thus filtered out, improving the validity of the recognition.
Optionally, the carrying out direction recognition on the identity card image by using the identity card direction recognition model to obtain the shooting direction of the identity card image includes:
inputting the identity card image into a first basic convolution unit of the identity card direction recognition model for basic convolution processing to obtain a first convolution value, wherein the first basic convolution unit comprises a first convolution layer, a first batch normalization layer, and a first activation function layer;
inputting the first convolution value into a first convolution unit of the identity card direction recognition model for convolution processing to obtain a second convolution value, wherein the first convolution unit comprises a second convolution layer and a second batch normalization layer;
inputting the first convolution value and the second convolution value into a first deep convolution unit of the identity card direction recognition model for deep convolution processing to obtain a third convolution value, wherein the first deep convolution unit comprises a second basic convolution unit, a third basic convolution unit, and a second convolution unit;
inputting the third convolution value into a flattening layer of the identity card direction recognition model for flattening processing to obtain a flattened convolution value;
and inputting the flattened convolution value into a linear layer of the identity card direction recognition model for linear processing to obtain the shooting direction.
In this implementation, the identity card direction recognition model comprises the first basic convolution unit, the first convolution unit, the first deep convolution unit, a flattening layer, and a linear layer. The first basic convolution unit, the first convolution unit, and the first deep convolution unit apply multiple layers of convolution to the identity card image to extract feature data related to the shooting direction of the identity card; the flattening layer converts the multi-dimensional direction-related feature data into one-dimensional direction-related feature data; and the linear layer makes the one-dimensional direction-related feature data converge, improving the accuracy of the obtained shooting direction. A sketch of this pipeline is given below.
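For concreteness, the following is a minimal PyTorch sketch of such a direction recognition model. The channel counts, kernel sizes, strides, the 4-way direction output (0/90/180/270 degrees), and the concatenation used to feed both the first and second convolution values into the deep convolution unit are all assumptions for illustration; the patent text does not fix these details.

```python
import torch
import torch.nn as nn

class BasicConvUnit(nn.Module):
    """Convolution + batch normalization + activation (a 'basic convolution unit')."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ConvUnit(nn.Module):
    """Convolution + batch normalization, no activation (a 'convolution unit')."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x):
        return self.bn(self.conv(x))

class DeepConvUnit(nn.Module):
    """Two basic convolution units followed by a convolution unit (a 'deep convolution unit')."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            BasicConvUnit(c_in, c_out),
            BasicConvUnit(c_out, c_out),
            ConvUnit(c_out, c_out),
        )

    def forward(self, x):
        return self.block(x)

class DirectionModel(nn.Module):
    def __init__(self, num_directions=4):
        super().__init__()
        self.base = BasicConvUnit(3, 16)   # -> first convolution value
        self.conv = ConvUnit(16, 16)       # -> second convolution value
        self.deep = DeepConvUnit(32, 32)   # -> third convolution value
        self.flatten = nn.Flatten()        # flattening layer
        self.linear = nn.LazyLinear(num_directions)  # linear layer

    def forward(self, img):
        y1 = self.base(img)
        y2 = self.conv(y1)
        # Both y1 and y2 feed the deep unit; y1 is pooled down to y2's
        # spatial size so the two can be concatenated (an assumption).
        y1d = nn.functional.adaptive_avg_pool2d(y1, y2.shape[-2:])
        y3 = self.deep(torch.cat([y1d, y2], dim=1))
        return self.linear(self.flatten(y3))

model = DirectionModel()
logits = model(torch.randn(1, 3, 224, 224))  # shape (1, 4): one score per direction
```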
Optionally, the identity card reproduction recognition model includes a front-side reproduction recognition model and a back-side reproduction recognition model that share the same structure, and the carrying out, based on the front/back recognition result, reproduction recognition on the standard identity card image by using the identity card reproduction recognition model to obtain the first reproduction recognition result includes:
when the front/back recognition result shows that the identity card image is a front-side identity card image, inputting the identity card image into a first preset number of deep residual network models of the front-side reproduction recognition model for feature extraction in a first preset number of dimensions to obtain a first preset number of first feature extraction results;
inputting the first preset number of first feature extraction results into a second deep convolution unit of the front-side reproduction recognition model to obtain a fourth convolution value, wherein the second deep convolution unit comprises a fourth basic convolution unit, a fifth basic convolution unit, and a third convolution unit;
inputting the first preset number of first feature extraction results and the fourth convolution value into a fourth convolution unit of the front-side reproduction recognition model for convolution processing to obtain a fifth convolution value, wherein the fourth convolution unit comprises a third convolution layer and a third batch normalization layer;
inputting the first preset number of first feature extraction results, the fourth convolution value, and the fifth convolution value into a sixth basic convolution unit of the front-side reproduction recognition model to obtain a sixth convolution value, wherein the sixth basic convolution unit comprises a fourth convolution layer, a fourth batch normalization layer, and a fourth activation function layer;
inputting the first preset number of first feature extraction results, the fifth convolution value, and the sixth convolution value into a seventh basic convolution unit of the front-side reproduction recognition model to obtain a seventh convolution value, wherein the seventh basic convolution unit comprises a fifth convolution layer, a fifth batch normalization layer, and a fifth activation function layer;
and inputting the seventh convolution value into a linearization layer of the front-side reproduction recognition model for linear processing to obtain the first reproduction recognition result.
In this implementation, the identity card reproduction recognition model is divided into a front-side reproduction recognition model and a back-side reproduction recognition model: when the front/back recognition result shows that the identity card image is a front-side image, the image is input into the front-side reproduction recognition model, and when it shows a back-side image, the image is input into the back-side reproduction recognition model. The first preset number of deep residual network models of the front-side reproduction recognition model extract features of the identity card image from a first preset number of dimensions to obtain the first feature extraction results. Extracting features from multiple dimensions improves the fit between the first feature extraction results and the identity card image, so that the results retain more of the feature information in the identity card image, which in turn improves the accuracy of the first reproduction recognition result obtained from them. A hedged sketch follows.
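The sketch below uses three resnet18 backbones as the "first preset number of deep residual network models" and fuses their feature maps by channel concatenation before a small convolutional head. Both choices are assumptions for illustration, not details fixed by the patent (which also chains the intermediate convolution values in a more elaborate way).

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # torchvision >= 0.13 'weights' API assumed

class FrontReproductionModel(nn.Module):
    def __init__(self, num_branches=3):  # first preset number (assumed value)
        super().__init__()
        # Parallel deep residual network feature extractors (avgpool/fc removed).
        self.branches = nn.ModuleList(
            nn.Sequential(*list(resnet18(weights=None).children())[:-2])
            for _ in range(num_branches)
        )
        # Stand-in for the deep/basic convolution units that fuse the branches.
        self.fuse = nn.Sequential(
            nn.Conv2d(512 * num_branches, 256, kernel_size=1),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(256, 2)  # linearization layer: reproduced vs. genuine

    def forward(self, img):
        feats = [b(img) for b in self.branches]      # first feature extraction results
        fused = self.fuse(torch.cat(feats, dim=1))
        return self.head(fused.flatten(1))
```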
Optionally, the carrying out reproduction recognition on the standard identity card image through the front-and-back general reproduction recognition model to obtain the second reproduction recognition result includes:
respectively inputting the identity card image into a second preset number of VGG network models of the front-and-back general reproduction recognition model for feature extraction in a second preset number of dimensions to obtain a second preset number of second feature extraction results;
inputting the second preset number of second feature extraction results into a third deep convolution unit of the front-and-back general reproduction recognition model to obtain an eighth convolution value, wherein the third deep convolution unit comprises an eighth basic convolution unit, a ninth basic convolution unit, and a fifth convolution unit;
inputting the second preset number of second feature extraction results and the eighth convolution value into a fourth deep convolution unit of the front-and-back general reproduction recognition model for convolution processing to obtain a ninth convolution value, wherein the fourth deep convolution unit comprises a tenth basic convolution unit, an eleventh basic convolution unit, and a sixth convolution unit;
and inputting the ninth convolution value into a linear layer of the front-and-back general reproduction recognition model for linear processing to obtain the second reproduction recognition result.
In this implementation, the second preset number of VGG network models of the front-and-back general reproduction recognition model extract features of the standard identity card image from a second preset number of dimensions to obtain the second feature extraction results. Extracting features from multiple dimensions improves the fit between the second feature extraction results and the standard identity card image, so that the results retain more of the feature information in the standard identity card image, which improves the accuracy of the second reproduction recognition result obtained from them. A sketch of this variant is given below.
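The front-and-back general model differs from the previous sketch mainly in its backbones, so the sketch below only swaps them: a second preset number of VGG feature extractors (vgg16 and a preset number of 2 are assumed) feed the same kind of convolutional fusion before the linear layer.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16  # torchvision >= 0.13 'weights' API assumed

class GeneralFrontBackModel(nn.Module):
    def __init__(self, num_branches=2):  # second preset number (assumed value)
        super().__init__()
        # Parallel VGG feature extractors; vgg16().features yields 512-channel maps.
        self.branches = nn.ModuleList(
            vgg16(weights=None).features for _ in range(num_branches)
        )
        # Stand-in for the third and fourth deep convolution units.
        self.deep = nn.Sequential(
            nn.Conv2d(512 * num_branches, 128, kernel_size=3, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.linear = nn.Linear(128, 2)  # -> second reproduction recognition result

    def forward(self, img):
        feats = torch.cat([b(img) for b in self.branches], dim=1)
        return self.linear(self.deep(feats).flatten(1))
```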
Optionally, the carrying out general reproduction recognition on the corrected original image through the general reproduction recognition model to obtain the third reproduction recognition result includes:
inputting the corrected original image into the convolution units of the general reproduction recognition model to obtain a tenth convolution result, wherein the convolution units of the general reproduction recognition model comprise a twelfth basic convolution unit, a thirteenth basic convolution unit, a fifth deep convolution unit, a first residual convolution unit, a first structural convolution unit, a second residual convolution unit, a second structural convolution unit, a third residual convolution unit, a third structural convolution unit, and a fourth structural convolution unit; the twelfth basic convolution unit comprises a sixth convolution layer, a sixth batch normalization layer, and a sixth activation function layer; the thirteenth basic convolution unit comprises a seventh convolution layer, a seventh batch normalization layer, and a seventh activation function layer; the fifth deep convolution unit comprises a fourteenth basic convolution unit, a fifteenth basic convolution unit, and a seventh convolution unit; the first residual convolution unit comprises a first specified number of sixth deep convolution units, the first specified number being the number of convolution units that the corrected original image has passed through since entering the general reproduction recognition model, and the first structural convolution unit comprises a first adaptive pooling unit, a sixteenth basic convolution unit, an eighth convolution unit, and a first activation unit; the second residual convolution unit comprises a second specified number of seventh deep convolution units, the second specified number being the number of convolution units that the corrected original image has passed through since entering the general reproduction recognition model, and the second structural convolution unit comprises a second adaptive pooling unit, a seventeenth basic convolution unit, a ninth convolution unit, and a second activation unit; the third residual convolution unit comprises a third specified number of eighth deep convolution units, the third specified number being a constant, and the third structural convolution unit comprises a third adaptive pooling unit, an eighteenth basic convolution unit, a tenth convolution unit, and a third activation unit; and the fourth structural convolution unit comprises a fourth adaptive pooling unit, a nineteenth basic convolution unit, an eleventh convolution unit, and a fourth activation unit;
inputting the tenth convolution result into a flattening layer of the general reproduction recognition model for flattening processing to obtain a flattening result;
inputting the flattening result into a linear layer of the general reproduction recognition model for linear processing to obtain a linearization result;
and inputting the linearization result into a batch normalization layer of the general reproduction recognition model to obtain the third reproduction recognition result.
In this implementation, the convolution units of the general reproduction recognition model (the twelfth and thirteenth basic convolution units, the fifth deep convolution unit, the first, second, and third residual convolution units, and the first through fourth structural convolution units) apply multiple layers of convolution to extract feature data related to reproduction information from the image; the flattening layer converts the multi-dimensional reproduction-related feature data into one-dimensional reproduction-related feature data; and the linear layer makes the one-dimensional reproduction-related feature data converge, improving the accuracy of the third reproduction recognition result. The structural convolution unit is sketched below.
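The "structural convolution unit" (an adaptive pooling unit, a basic convolution unit, a convolution unit, and an activation unit) reads like a squeeze-and-excitation-style channel gate, and the sketch below implements it under that interpretation; treating it as channel attention is an assumption, not something the patent text confirms.

```python
import torch
import torch.nn as nn

class StructuralConvUnit(nn.Module):
    """Channel-gate reading of the structural convolution unit (an assumption)."""
    def __init__(self, channels, reduced):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # adaptive pooling unit
        self.basic = nn.Sequential(                    # basic convolution unit
            nn.Conv2d(channels, reduced, kernel_size=1),
            nn.BatchNorm2d(reduced),
            nn.ReLU(inplace=True),
        )
        self.conv = nn.Conv2d(reduced, channels, kernel_size=1)  # convolution unit
        self.act = nn.Sigmoid()                        # activation unit

    def forward(self, x):
        gate = self.act(self.conv(self.basic(self.pool(x))))
        return x * gate  # reweight the residual features channel by channel

# e.g. gating the output of a residual convolution stage:
x = torch.randn(2, 64, 28, 28)
out = StructuralConvUnit(64, reduced=16)(x)  # same shape as x
```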
Optionally, the determining the identity card reproduction recognition result according to the first, second, and third reproduction recognition results includes:
obtaining a first preset weight for the first reproduction recognition result, a second preset weight for the second reproduction recognition result, and a third preset weight for the third reproduction recognition result based on a logistic regression model;
and carrying out a weighted average of the first, second, and third reproduction recognition results based on the first, second, and third preset weights to obtain the identity card reproduction recognition result.
In this implementation, the logistic regression model assigns different weights to the first, second, and third reproduction recognition results, and the weighted average of the three results based on the first, second, and third preset weights allows the relative importance of results obtained under different conditions to be adjusted according to the logistic regression model, improving the accuracy of the identity card reproduction recognition result. A minimal fusion sketch follows.
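A minimal sketch of this fusion step: a logistic regression model supplies one weight per model, and the identity card reproduction recognition result is the weighted average of the three scores. The weight and score values are illustrative only.

```python
def fuse_results(r1: float, r2: float, r3: float,
                 w1: float, w2: float, w3: float) -> float:
    """Weighted average of the three reproduction scores (higher = more likely a reproduction)."""
    return (w1 * r1 + w2 * r2 + w3 * r3) / (w1 + w2 + w3)

# Weights as they might come from a logistic regression fitted on validation data (assumed):
score = fuse_results(0.91, 0.78, 0.65, w1=0.5, w2=0.3, w3=0.2)
is_reproduction = score > 0.5  # final identity card reproduction recognition result
```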
An embodiment of the application provides an identity card reproduction recognition device, which includes:
a front/back recognition module, configured to carry out front and back side recognition of the identity card on the original image to obtain a front/back recognition result and a cropped identity card image, the front/back recognition result indicating whether the identity card image is a front-side or back-side identity card image;
a direction recognition module, configured to carry out direction recognition on the identity card image by using an identity card direction recognition model to obtain the shooting direction of the identity card image;
a correction module, configured to carry out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image;
a first reproduction recognition module, configured to carry out reproduction recognition on the standard identity card image by using an identity card reproduction recognition model based on the front/back recognition result to obtain a first reproduction recognition result;
a second reproduction recognition module, configured to carry out reproduction recognition on the standard identity card image through a front-and-back general reproduction recognition model to obtain a second reproduction recognition result;
a third reproduction recognition module, configured to correct the original image according to the shooting direction to obtain a corrected original image and carry out general reproduction recognition on the corrected original image through a general reproduction recognition model to obtain a third reproduction recognition result;
and an analysis module, configured to determine an identity card reproduction recognition result according to the first, second, and third reproduction recognition results, the identity card reproduction recognition result indicating whether the original image is a reproduction.
In the above implementation, the identity card reproduction recognition model performs reproduction recognition on the standard identity card image obtained after front and back side recognition, so it is known whether its input is a front-side or a back-side identity card image; the front-and-back general reproduction recognition model performs reproduction recognition on the standard identity card image without relying on the front/back recognition result; and the general reproduction recognition model performs reproduction recognition on the direction-corrected original image. That is, the three models respectively perform reproduction recognition on images in different states that contain the same identity card information, and the final reproduction recognition result of the original image is determined from the resulting first, second, and third reproduction recognition results. The device places no specific requirements on the image to be recognized, so its applicability is high, and it can eliminate the various interference factors caused by shooting differences, thereby improving the accuracy of identity card reproduction recognition.
Optionally, the identity card reproduction recognition device further includes a preprocessing module, configured to:
judge whether the original image includes an identity card image;
and, when the original image includes the identity card image, trigger the step of carrying out front and back side recognition of the identity card on the original image to obtain the front/back recognition result and the cropped identity card image.
In this implementation, before the original image undergoes front and back side recognition and cropping, identity card detection is performed on it, and reproduction recognition proceeds only when the original image contains an identity card image. When the original image does not contain an identity card image, reproduction recognition of that image ends; original images that are invalid for identity card reproduction recognition are thus filtered out, improving the validity of the recognition.
Optionally, the direction recognition module is configured to:
input the identity card image into a first basic convolution unit of the identity card direction recognition model for basic convolution processing to obtain a first convolution value, wherein the first basic convolution unit comprises a first convolution layer, a first batch normalization layer, and a first activation function layer;
input the first convolution value into a first convolution unit of the identity card direction recognition model for convolution processing to obtain a second convolution value, wherein the first convolution unit comprises a second convolution layer and a second batch normalization layer;
input the first convolution value and the second convolution value into a first deep convolution unit of the identity card direction recognition model for deep convolution processing to obtain a third convolution value, wherein the first deep convolution unit comprises a second basic convolution unit, a third basic convolution unit, and a second convolution unit;
input the third convolution value into a flattening layer of the identity card direction recognition model for flattening processing to obtain a flattened convolution value;
and input the flattened convolution value into a linear layer of the identity card direction recognition model for linear processing to obtain the shooting direction.
In this implementation, the identity card direction recognition model comprises the first basic convolution unit, the first convolution unit, the first deep convolution unit, a flattening layer, and a linear layer. The first basic convolution unit, the first convolution unit, and the first deep convolution unit apply multiple layers of convolution to the identity card image to extract feature data related to the shooting direction of the identity card; the flattening layer converts the multi-dimensional direction-related feature data into one-dimensional direction-related feature data; and the linear layer makes the one-dimensional direction-related feature data converge, improving the accuracy of the obtained shooting direction.
Optionally, the first reproduction recognition module is configured to:
when the front/back recognition result shows that the identity card image is a front-side identity card image, input the identity card image into a first preset number of deep residual network models of the front-side reproduction recognition model for feature extraction in a first preset number of dimensions to obtain a first preset number of first feature extraction results;
input the first preset number of first feature extraction results into a second deep convolution unit of the front-side reproduction recognition model to obtain a fourth convolution value, wherein the second deep convolution unit comprises a fourth basic convolution unit, a fifth basic convolution unit, and a third convolution unit;
input the first preset number of first feature extraction results and the fourth convolution value into a fourth convolution unit of the front-side reproduction recognition model for convolution processing to obtain a fifth convolution value, wherein the fourth convolution unit comprises a third convolution layer and a third batch normalization layer;
input the first preset number of first feature extraction results, the fourth convolution value, and the fifth convolution value into a sixth basic convolution unit of the front-side reproduction recognition model to obtain a sixth convolution value, wherein the sixth basic convolution unit comprises a fourth convolution layer, a fourth batch normalization layer, and a fourth activation function layer;
input the first preset number of first feature extraction results, the fifth convolution value, and the sixth convolution value into a seventh basic convolution unit of the front-side reproduction recognition model to obtain a seventh convolution value, wherein the seventh basic convolution unit comprises a fifth convolution layer, a fifth batch normalization layer, and a fifth activation function layer;
and input the seventh convolution value into a linearization layer of the front-side reproduction recognition model for linear processing to obtain the first reproduction recognition result.
In this implementation, the identity card reproduction recognition model is divided into a front-side reproduction recognition model and a back-side reproduction recognition model: when the front/back recognition result shows that the identity card image is a front-side image, the image is input into the front-side reproduction recognition model, and when it shows a back-side image, the image is input into the back-side reproduction recognition model. The first preset number of deep residual network models of the front-side reproduction recognition model extract features of the identity card image from a first preset number of dimensions to obtain the first feature extraction results. Extracting features from multiple dimensions improves the fit between the first feature extraction results and the identity card image, so that the results retain more of the feature information in the identity card image, which in turn improves the accuracy of the first reproduction recognition result obtained from them.
Optionally, the second reproduction recognition module is specifically configured to:
respectively input the identity card image into a second preset number of VGG network models of the front-and-back general reproduction recognition model for feature extraction in a second preset number of dimensions to obtain a second preset number of second feature extraction results;
input the second preset number of second feature extraction results into a third deep convolution unit of the front-and-back general reproduction recognition model to obtain an eighth convolution value, wherein the third deep convolution unit comprises an eighth basic convolution unit, a ninth basic convolution unit, and a fifth convolution unit;
input the second preset number of second feature extraction results and the eighth convolution value into a fourth deep convolution unit of the front-and-back general reproduction recognition model for convolution processing to obtain a ninth convolution value, wherein the fourth deep convolution unit comprises a tenth basic convolution unit, an eleventh basic convolution unit, and a sixth convolution unit;
and input the ninth convolution value into a linear layer of the front-and-back general reproduction recognition model for linear processing to obtain the second reproduction recognition result.
In this implementation, the second preset number of VGG network models of the front-and-back general reproduction recognition model extract features of the standard identity card image from a second preset number of dimensions to obtain the second feature extraction results. Extracting features from multiple dimensions improves the fit between the second feature extraction results and the standard identity card image, so that the results retain more of the feature information in the standard identity card image, which improves the accuracy of the second reproduction recognition result obtained from them.
Optionally, the third reproduction recognition module is configured to:
input the corrected original image into the convolution units of the general reproduction recognition model to obtain a tenth convolution result, wherein the convolution units of the general reproduction recognition model comprise a twelfth basic convolution unit, a thirteenth basic convolution unit, a fifth deep convolution unit, a first residual convolution unit, a first structural convolution unit, a second residual convolution unit, a second structural convolution unit, a third residual convolution unit, a third structural convolution unit, and a fourth structural convolution unit; the twelfth basic convolution unit comprises a sixth convolution layer, a sixth batch normalization layer, and a sixth activation function layer; the thirteenth basic convolution unit comprises a seventh convolution layer, a seventh batch normalization layer, and a seventh activation function layer; the fifth deep convolution unit comprises a fourteenth basic convolution unit, a fifteenth basic convolution unit, and a seventh convolution unit; the first residual convolution unit comprises a first specified number of sixth deep convolution units, the first specified number being the number of convolution units that the corrected original image has passed through since entering the general reproduction recognition model, and the first structural convolution unit comprises a first adaptive pooling unit, a sixteenth basic convolution unit, an eighth convolution unit, and a first activation unit; the second residual convolution unit comprises a second specified number of seventh deep convolution units, the second specified number being the number of convolution units that the corrected original image has passed through since entering the general reproduction recognition model, and the second structural convolution unit comprises a second adaptive pooling unit, a seventeenth basic convolution unit, a ninth convolution unit, and a second activation unit; the third residual convolution unit comprises a third specified number of eighth deep convolution units, the third specified number being a constant, and the third structural convolution unit comprises a third adaptive pooling unit, an eighteenth basic convolution unit, a tenth convolution unit, and a third activation unit; and the fourth structural convolution unit comprises a fourth adaptive pooling unit, a nineteenth basic convolution unit, an eleventh convolution unit, and a fourth activation unit;
input the tenth convolution result into a flattening layer of the general reproduction recognition model for flattening processing to obtain a flattening result;
input the flattening result into a linear layer of the general reproduction recognition model for linear processing to obtain a linearization result;
and input the linearization result into a batch normalization layer of the general reproduction recognition model to obtain the third reproduction recognition result.
In this implementation, the convolution units of the general reproduction recognition model (the twelfth and thirteenth basic convolution units, the fifth deep convolution unit, the first, second, and third residual convolution units, and the first through fourth structural convolution units) apply multiple layers of convolution to extract feature data related to reproduction information from the image; the flattening layer converts the multi-dimensional reproduction-related feature data into one-dimensional reproduction-related feature data; and the linear layer makes the one-dimensional reproduction-related feature data converge, improving the accuracy of the third reproduction recognition result.
Optionally, the analysis module is configured to:
obtain a first preset weight for the first reproduction recognition result, a second preset weight for the second reproduction recognition result, and a third preset weight for the third reproduction recognition result based on a logistic regression model;
and carry out a weighted average of the first, second, and third reproduction recognition results based on the first, second, and third preset weights to obtain the identity card reproduction recognition result.
In this implementation, the logistic regression model assigns different weights to the first, second, and third reproduction recognition results, and the weighted average of the three results based on the first, second, and third preset weights allows the relative importance of results obtained under different conditions to be adjusted according to the logistic regression model, improving the accuracy of the identity card reproduction recognition result.
An embodiment of the present application further provides an electronic device, which includes a memory and a processor, the memory storing program instructions that, when executed by the processor, perform the steps of any of the above methods.
An embodiment of the present application further provides a storage medium in which computer program instructions are stored; when the computer program instructions are executed by a processor, the steps of any of the above methods are performed.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can derive other related drawings from these drawings without inventive effort.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Fig. 1 is a flowchart of an identity card reproduction recognition method provided by an embodiment of the present application.
Fig. 2 is a flowchart of an identity card reproduction recognition method including an identity card image preprocessing step provided by an embodiment of the present application.
Fig. 3 is a flowchart of the step of obtaining the shooting direction of the identity card image provided by an embodiment of the present application.
Fig. 4 is a block diagram of an identity card direction recognition model provided by an embodiment of the present application.
Fig. 5 is a block diagram of a first basic convolution unit provided by an embodiment of the present application.
Fig. 6 is a block diagram of a first convolution unit provided by an embodiment of the present application.
Fig. 7 is a block diagram of a first deep convolution unit provided by an embodiment of the present application.
Fig. 8 is a flowchart of the step of obtaining a first reproduction recognition result provided by an embodiment of the present application.
Fig. 9 is a block diagram of a front-side reproduction recognition model provided by an embodiment of the present application.
Fig. 10 is a flowchart of the step of obtaining a second reproduction recognition result provided by an embodiment of the present application.
Fig. 11 is a block diagram of a front-and-back general reproduction recognition model provided by an embodiment of the present application.
Fig. 12 is a flowchart of the step of obtaining a third reproduction recognition result provided by an embodiment of the present application.
Fig. 13 is a block diagram of a general reproduction recognition model provided by an embodiment of the present application.
Fig. 14 is a block diagram of a first convolution unit provided by an embodiment of the present application.
Fig. 15 is a schematic view of an identity card reproduction recognition device provided by an embodiment of the present application.
Legend: 90-identity card reproduction recognition device; 901-front/back recognition module; 902-direction recognition module; 903-correction module; 904-first reproduction recognition module; 905-second reproduction recognition module; 906-third reproduction recognition module; 907-analysis module; 908-preprocessing module; 10-identity card direction recognition model; 101-first basic convolution unit; 102-first convolution unit; 103-first deep convolution unit; 104-flattening layer; 105-linear layer; 1011-first convolution layer; 1012-first batch normalization layer; 1013-first activation function layer; 1021-second convolution layer; 1022-second batch normalization layer; 1031-second basic convolution unit; 1032-third basic convolution unit; 1033-second convolution unit; 20-front-side reproduction recognition model; 201-deep residual network model; 202-second deep convolution unit; 203-fourth convolution unit; 204-sixth basic convolution unit; 205-seventh basic convolution unit; 206-linearization layer; 30-front-and-back general reproduction recognition model; 301-VGG network model; 302-third deep convolution unit; 303-fourth deep convolution unit; 304-linear layer; 40-general reproduction recognition model; 401-twelfth basic convolution unit; 402-thirteenth basic convolution unit; 403-fifth deep convolution unit; 404-first residual convolution unit; 405-first structural convolution unit; 4051-first adaptive pooling unit; 4052-sixteenth basic convolution unit; 4053-eighth convolution unit; 4054-first activation unit; 406-second residual convolution unit; 407-second structural convolution unit; 408-third residual convolution unit; 409-third structural convolution unit; 410-fourth structural convolution unit; 411-flattening layer; 412-linear layer; 413-batch normalization layer.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In the description of the present application, it should be noted that the terms "first", "second", and the like are used merely to distinguish between descriptions and are not intended to indicate or imply relative importance.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
To improve the accuracy and applicability of identity card reproduction recognition, an embodiment of the present application provides an identity card reproduction recognition method. Referring to fig. 1, which is a flowchart of the method provided in the embodiment of the present application, the identity card reproduction recognition method includes the following steps:
step S2: and carrying out front and back recognition on the original image to obtain a front and back recognition result and a clipped identity card image, wherein the front and back recognition result is used for indicating that the identity card image is the front image of the identity card or the back image of the identity card.
It can be understood that the face of the identity card image containing the face image is an identity card front face image, and the face of the identity card image not containing the face image is an identity card back face image.
In one embodiment, the original image includes a background image surrounding the identification card image in addition to the identification card image. Because the identity card image and the background image have clear color value boundaries, the identity card image can be cut out of the original image based on the color value boundaries or the color value differences between the identity card image and the background image. After the identity card image is cut from the original image, the background image is prevented from being identified by subsequent copying identification, and the efficiency of the identity card copying identification is improved.
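The OpenCV sketch below illustrates this cropping step; the Otsu threshold and the largest-external-contour heuristic are assumptions standing in for whatever boundary detection the implementation actually uses.

```python
import cv2

def crop_id_card(original_bgr):
    """Crop the identity card out of the original image via the card/background color boundary."""
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu's threshold separates card pixels from the background (assumed to differ in value).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # no boundary found; caller falls back to the full image
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return original_bgr[y:y + h, x:x + w]
```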
It can be understood that the original image may contain an image of any object, so there may be cases in which the original image does not contain an identity card image. To avoid the wasted work of performing reproduction recognition on an original image that contains no identity card image, the processing described in step S1 below is performed before step S2.
Referring to fig. 2, fig. 2 is a flowchart of an identity card reproduction recognition method including an identity card image preprocessing step according to an embodiment of the present application.
Optionally, before step S2, the identity card reproduction recognition method further includes step S1, and step S1 includes the following sub-steps:
Step S11: judging whether the original image includes an identity card image.
It can be understood that whether the original image contains an identity card image can be determined with the YOLO (You Only Look Once) algorithm. The YOLO algorithm first divides the picture into S x S cells, where S is the number of cells per side. Each cell predicts a predetermined number of bounding boxes and a confidence for each bounding box, where the confidence of a bounding box reflects both the likelihood that an object is present in the box and the positional accuracy of the box. All bounding boxes are then classified according to their confidences to obtain a classification result, and whether the original image includes an identity card image is judged from that result. The YOLO algorithm uses a mean squared error loss function, which reduces the error of each bounding box and improves the accuracy of judging whether the original image includes an identity card image.
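The decision that follows detection can be as simple as the sketch below: each predicted bounding box carries a class and a confidence, and the image passes the check only if an identity card box clears a threshold. The box format, the 'id_card' class name, and the 0.5 threshold are assumptions; the patent does not specify them.

```python
from typing import Dict, List

def contains_id_card(boxes: List[Dict], threshold: float = 0.5) -> bool:
    """boxes: YOLO-style detections, e.g. {'cls': 'id_card', 'conf': 0.92, 'xyxy': (...)}."""
    return any(b['cls'] == 'id_card' and b['conf'] >= threshold for b in boxes)

# Illustrative detections for one original image:
detections = [{'cls': 'id_card', 'conf': 0.92, 'xyxy': (40, 60, 620, 420)}]
assert contains_id_card(detections)  # proceed to step S12 and step S2
```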
Step S12: when the original image includes the identity card image, executing the step of carrying out front and back side recognition of the identity card on the original image to obtain the front/back recognition result and the cropped identity card image.
It can be understood that step S2 and the subsequent identity card reproduction recognition steps are executed only when the original image includes an identity card image, which avoids the wasted work of performing reproduction recognition on an original image without one and improves the validity of identity card reproduction recognition.
Step S3: carrying out direction recognition on the identity card image by using the identity card direction recognition model to obtain the shooting direction of the identity card image.
It can be understood that when the original image is obtained by shooting an identity card, or an image containing one, the shooting direction of the identity card image varies with the state of the shooting device and the placement direction of the identity card in the original image. The shooting device can be any device with a shooting function, such as a mobile phone, a tablet computer, a camera, or a video camera. Taking a mobile phone as an example, the original image obtained by photographing may be a landscape or a portrait photo, and the identity card in it may lie in the direction matching a person's reading habit when the card is photographed directly, or may be rotated about the card's center by any angle within 360 degrees relative to that reading direction.
Referring to fig. 3, fig. 3 is a flowchart of the step of obtaining the shooting direction of the identity card image according to an embodiment of the present application. Optionally, step S3 includes the following sub-steps:
Referring to fig. 4, fig. 4 is a block diagram of the identity card direction recognition model according to an embodiment of the present disclosure; the arrows in fig. 4 indicate the data flow. As steps S31-S35 show, the identity card direction recognition model 10 includes a first basic convolution unit 101, a first convolution unit 102, a first deep convolution unit 103, a flattening layer 104, and a linear layer 105. As can be seen from fig. 4, after the identity card image is input into the identity card direction recognition model 10, the first basic convolution unit 101 outputs a first convolution value, which is input to the first convolution unit 102 to obtain a second convolution value. The first and second convolution values serve as input to the first deep convolution unit 103, which outputs a third convolution value; the third convolution value is input to the flattening layer 104, which outputs a flattened convolution value; and the flattened convolution value is input to the linear layer 105, which outputs the shooting direction. The identity card direction recognition model 10 thus contains three convolution units of different convolution depths: the first basic convolution unit 101, the first convolution unit 102, and the first deep convolution unit 103 apply multi-layer convolution to the identity card image to extract feature data related to the shooting direction, and passing through more convolution layers brings that feature data closer to the actual shooting direction, improving its accuracy. The flattening layer 104 converts the multi-dimensional direction-related feature data into one-dimensional direction-related feature data, and the linear layer 105 makes the one-dimensional direction-related feature data converge, improving the accuracy of the obtained shooting direction.
Step S31: inputting an identity card image into a first basic convolution unit of an identity card direction identification model to perform basic convolution processing to obtain a first convolution value, wherein the first basic convolution unit comprises a first convolution layer, a first batch normalization layer and a first activation function layer.
As an embodiment, the identity card image is processed by the first basic convolution unit 101 to output the first convolution value, and the data processing of the first basic convolution unit 101 can be expressed by the formula y1 = conv_base1(input0), where conv_base1 denotes the operation of the first basic convolution unit 101, y1 represents the first convolution value, and input0 represents the identity card image.
Referring to fig. 5, fig. 5 is a block diagram of a first basic convolution unit according to an embodiment of the present application. In one embodiment, the first basic convolution unit 101 includes a first convolution layer 1011, a first batch normalization layer 1012, and a first activation function layer 1013. As an embodiment, the first convolution layer 1011 may be implemented by a conv operation, and its calculation process can be expressed as conv(input0); conv(input0) is then input into the first batch normalization layer 1012, which may be implemented by the bn algorithm, so the calculation process of the first batch normalization layer 1012 can be expressed as bn(conv(input0)). The first activation function layer 1013 may be implemented by a relu function, and the calculation process of the first activation function layer 1013 can be expressed as relu(bn(conv(input0))). The relu function simplifies the calculation, improving the operation speed of the first basic convolution unit 101.
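To make the conv-bn-relu composition concrete, the following is a minimal sketch of such a basic convolution unit in PyTorch; the framework choice and all hyperparameters (channel counts, kernel size, padding) are illustrative assumptions not specified in this application:

```python
import torch.nn as nn

class BasicConvUnit(nn.Module):
    """Basic convolution unit: convolution -> batch normalization -> relu."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # relu(bn(conv(input0))), matching the formula above
        return self.relu(self.bn(self.conv(x)))
```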
Step S32: and inputting the first convolution value into a first convolution unit of the identification card direction identification model for convolution processing to obtain a second convolution value, wherein the first convolution unit comprises a second convolution layer and a second batch standardization layer.
Referring to fig. 6, fig. 6 is a block diagram of a first convolution unit according to an embodiment of the present application.
As an embodiment, the first convolution value y1 obtained in step S31 is processed by the first convolution unit 102 to output the second convolution value, and the data processing of the first convolution unit 102 can be expressed by the formula y2 = conv_block1(y1), where conv_block1 denotes the operation of the first convolution unit 102, y1 represents the first convolution value, and y2 represents the second convolution value.
In one embodiment, the first convolution unit 102 includes a second convolution layer 1021 and a second batch normalization layer 1022. As an embodiment, the second convolution layer 1021 may be implemented by a conv operation, and its calculation process can be expressed as conv(y1); conv(y1) is then input into the second batch normalization layer 1022, which may be implemented by the bn algorithm, so the calculation process of the second batch normalization layer 1022 can be expressed as bn(conv(y1)). The bn algorithm normalizes the conv(y1) received by the second batch normalization layer 1022, which accelerates subsequent data processing that uses the output bn(conv(y1)) of the second batch normalization layer 1022.
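Analogously, a minimal sketch of the convolution unit (convolution followed by batch normalization, with no activation function) might look as follows; again, PyTorch and the hyperparameters are assumptions:

```python
import torch.nn as nn

class ConvUnit(nn.Module):
    """Convolution unit: convolution -> batch normalization, no activation."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        # bn(conv(y1)), matching the formula above
        return self.bn(self.conv(x))
```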
Step S33: and inputting the first convolution value and the second convolution value into a first deep convolution unit of the identification card direction identification model for deep convolution processing to obtain a third convolution value, wherein the first deep convolution unit comprises a second basic convolution unit, a third basic convolution unit and a second convolution unit.
As an embodiment, the first convolution value y1 obtained in step S31 and the second convolution value y2 obtained in step S32 are processed by the first deep convolution unit 103 to output the third convolution value, and the data processing of the first deep convolution unit 103 can be expressed by the formula y3 = conv_deep1(y1 + y2), where conv_deep1 denotes the operation of the first deep convolution unit 103, y1 represents the first convolution value, y2 represents the second convolution value, and y3 represents the third convolution value.
Referring to fig. 7, fig. 7 is a block diagram of a first deep convolution unit according to an embodiment of the present application. As an embodiment, the first deep convolution unit 103 includes a second basic convolution unit 1031, a third basic convolution unit 1032, and a second convolution unit 1033. Here, although the second basic convolution unit 1031 and the third basic convolution unit 1032 are named differently from the first basic convolution unit 101 in fig. 5, their structures are the same as the structure of the first basic convolution unit 101 in fig. 5. Likewise, the second convolution unit 1033 is named differently from the first convolution unit 102 in fig. 6, but its structure is the same as that of the first convolution unit 102, and thus is not described again here.
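A sketch of the deep convolution unit, reusing the BasicConvUnit and ConvUnit sketches above, could read as follows; the application lists the three sub-units but does not state their ordering, so the sequential arrangement here is an assumption:

```python
import torch.nn as nn

class DeepConvUnit(nn.Module):
    """Deep convolution unit: two basic convolution units followed by a
    convolution unit; the caller passes the summed input (e.g. y1 + y2)."""
    def __init__(self, channels: int):
        super().__init__()
        self.base_a = BasicConvUnit(channels, channels)
        self.base_b = BasicConvUnit(channels, channels)
        self.block = ConvUnit(channels, channels)

    def forward(self, x):
        # y3 = conv_deep1(y1 + y2), with x = y1 + y2
        return self.block(self.base_b(self.base_a(x)))
```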
Step S34: and inputting the third convolution value into a flattening layer of the identification card direction recognition model for flattening treatment to obtain a flattened convolution value.
In one embodiment, the flattening layer 104 may be implemented by a flatten function, and the process of inputting the third convolution value into the flattening layer 104 can be expressed as y_flatten1 = flatten(y3), where y_flatten1 represents the flattened convolution value obtained through data processing by the flattening layer 104 and y3 represents the third convolution value obtained in step S33. The flatten function converts the third convolution value y3 into one dimension, reducing the dimensionality of the third convolution value y3 and improving the data processing speed.
Step S35: and inputting the flattened convolution value into a linear layer of the identification card direction recognition model for linearization processing to obtain the shooting direction.
In one embodiment, the linear layer 105 may be implemented by a linear function, and the process of inputting the flattened convolution value into the linear layer 105 can be expressed as y_linear1 = linear(y_flatten1), where y_linear1 represents the data indicating the shooting direction obtained through data processing by the linear layer 105, and y_flatten1 represents the flattened convolution value obtained in step S34. The linear function makes the flattened convolution value y_flatten1 converge, improving the accuracy of the obtained shooting direction.
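Putting steps S31-S35 together, a minimal sketch of the whole direction recognition model might read as follows; the 3-channel input, the channel width, and the assumption that the linear layer outputs four direction classes are all illustrative, not from this application:

```python
import torch.nn as nn

class DirectionModel(nn.Module):
    """Identity card direction recognition model 10:
    basic conv -> conv -> deep conv -> flatten -> linear."""
    def __init__(self, channels: int = 16, num_directions: int = 4):
        super().__init__()
        self.base1 = BasicConvUnit(3, channels)
        self.block1 = ConvUnit(channels, channels)
        self.deep1 = DeepConvUnit(channels)
        self.flatten = nn.Flatten()
        self.linear = nn.LazyLinear(num_directions)  # infers flattened width

    def forward(self, img):
        y1 = self.base1(img)        # first convolution value
        y2 = self.block1(y1)        # second convolution value
        y3 = self.deep1(y1 + y2)    # third convolution value
        y_flat = self.flatten(y3)   # flattened convolution value
        return self.linear(y_flat)  # shooting direction logits
```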
Step S4: and carrying out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image.
It can be understood that, based on the shooting direction obtained in step S3, the standard identity card image is obtained by rotating the identity card image opposite to the obtained shooting direction, with the center of the identity card image as the rotation center. For example, when the shooting direction obtained in step S3 indicates that the identity card text faces to the right, the standard identity card image is obtained by rotating the identity card image 90 degrees counterclockwise around the identity card center.
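As a hedged illustration of this correction step, the rotation can be done with numpy; the direction labels and their mapping to rotation counts are assumptions chosen for the example ("right" meaning the identity card text faces to the right):

```python
import numpy as np

# number of 90-degree counterclockwise rotations needed per detected direction
ROTATIONS = {"up": 0, "right": 1, "down": 2, "left": 3}

def correct_direction(image: np.ndarray, direction: str) -> np.ndarray:
    """Rotate the identity card image opposite to the detected shooting
    direction, e.g. text facing right -> rotate 90 degrees counterclockwise."""
    return np.rot90(image, k=ROTATIONS[direction])
```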
Step S5: based on the front and back recognition results, the identity card reproduction recognition model is used for reproducing and recognizing the standard identity card image to obtain a first reproduction recognition result.
Referring to fig. 8, fig. 8 is a flowchart illustrating a step of obtaining a first duplication recognition result according to an embodiment of the present application.
Optionally, step S5 includes the following substeps:
step S51: and when the front and back recognition results show that the identity card image is the identity card front image, inputting the identity card image into a depth residual error network model of a first preset number of the front copying recognition model to perform feature extraction of the dimensions of the first preset number to obtain first feature extraction results of the first preset number.
Referring to fig. 9, fig. 9 is a block diagram of a front copying recognition model according to an embodiment of the present application. The identity card copying recognition model comprises a front copying recognition model and a back copying recognition model, which are identical in structure. When the front and back recognition result indicates that the identity card image is the identity card front image, the identity card image is input into the front copying recognition model; when the front and back recognition result indicates that the identity card image is the identity card back image, the identity card image is input into the back copying recognition model.
In fig. 9, as an embodiment, the front-side duplication recognition model 20 includes a first preset number of depth residual network models 201, a second depth convolution unit 202, a fourth convolution unit 203, a sixth basic convolution unit 204, a seventh basic convolution unit 205, and a linearization layer 206.
As can be seen from fig. 9, after the identity card image whose front and back recognition result indicates the front image is input into the front copying recognition model 20, the first feature extraction results are output through data processing of the depth residual network models 201, and the first feature extraction results are input into the second deep convolution unit 202, which outputs a fourth convolution value. The first feature extraction results and the fourth convolution value are input into the fourth convolution unit 203, which outputs a fifth convolution value; the first feature extraction results, the fourth convolution value, and the fifth convolution value are input into the sixth basic convolution unit 204, which outputs a sixth convolution value. The first feature extraction results, the fifth convolution value, and the sixth convolution value are input into the seventh basic convolution unit 205, which outputs a seventh convolution value, and the seventh convolution value is input into the linearization layer 206, which outputs the first copying recognition result.
The depth residual network model is a ResNet network model, and the first preset number of ResNet network models in step S51 is 3, which may be set according to actual requirements. The first preset number of ResNet network models of the front copying recognition model can perform feature extraction on the identity card image from the first preset number of dimensions, for example, extracting features of different dimensions such as color, text spacing, or information layout position on the identity card, to obtain the first feature extraction results. Extracting features from multiple dimensions improves the degree of fit between the first feature extraction results and the identity card image, so that the first feature extraction results retain more feature information from the identity card image, thereby improving the accuracy of the first copying recognition result obtained based on the first feature extraction results. As an embodiment, the extraction of the first preset number of ResNet network models to obtain the first feature extraction result can be expressed by the formula

y_h = Σ_{i=1}^{n} ResNet_i(input1)

where n represents the first preset number, input1 represents the identity card image whose front and back recognition result indicates the identity card front image, and ResNet_i represents the ith ResNet network model.
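A hedged sketch of this multi-backbone extraction is shown below; the choice of torchvision's resnet18 as the ResNet variant, and summation as the way of merging the n outputs, are assumptions consistent with the formula reading above:

```python
import torch
import torch.nn as nn
import torchvision.models as models

def build_resnet_extractors(n: int = 3) -> nn.ModuleList:
    # ResNet backbones with the classification head removed
    return nn.ModuleList(
        nn.Sequential(*list(models.resnet18().children())[:-1]) for _ in range(n)
    )

def extract_first_features(extractors: nn.ModuleList, input1: torch.Tensor) -> torch.Tensor:
    # y_h = sum over i of ResNet_i(input1)
    return sum(net(input1) for net in extractors)
```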
Step S52: and inputting the first preset number of first feature extraction results into a second deep convolution unit of the front-side copying recognition model to obtain a fourth convolution value, wherein the second deep convolution unit comprises a fourth basic convolution unit, a fifth basic convolution unit and a third convolution unit.
As an embodiment, the first feature extraction results are used as the input of the second deep convolution unit 202 to obtain the fourth convolution value, and the operation of the second deep convolution unit 202 can be expressed as y4 = conv_deep2(y_h), where y4 represents the fourth convolution value, conv_deep2 denotes the operation of the second deep convolution unit 202, and y_h represents the first feature extraction result.
Step S53: and inputting the first preset number of first feature extraction results and the fourth convolution values into a fourth convolution unit of the front-side copying recognition model for convolution processing to obtain a fifth convolution value, wherein the fourth convolution unit comprises a third convolution layer and a third batch standardization layer.
As an embodiment, the fourth convolution value and the first feature extraction results are used as the input of the fourth convolution unit 203 to obtain the fifth convolution value, and the operation of the fourth convolution unit 203 can be expressed as y5 = conv_block4(y4 + y_h), where y5 represents the fifth convolution value, conv_block4 denotes the operation of the fourth convolution unit 203, y_h represents the first feature extraction result, and y4 represents the fourth convolution value.
Step S54: and inputting the first preset number of first feature extraction results, the fourth convolution value and the fifth convolution value into a sixth basic convolution unit of the front-side copying recognition model to obtain a sixth convolution value, wherein the sixth basic convolution unit comprises a fourth convolution layer, a fourth batch normalization layer and a fourth activation function layer.
As an embodiment, the first feature extraction results, the fourth convolution value, and the fifth convolution value are used as the input of the sixth basic convolution unit 204 to obtain the sixth convolution value, and the operation of the sixth basic convolution unit 204 can be expressed as y6 = conv_base6(y4 + y5 + y_h), where y4 represents the fourth convolution value, y5 represents the fifth convolution value, y6 represents the sixth convolution value, and conv_base6 denotes the operation of the sixth basic convolution unit 204.
Step S55: and inputting the first preset number of first feature extraction results, the fifth convolution value and the sixth convolution value into a seventh basic convolution unit of the front-side copying recognition model to obtain a seventh convolution value, wherein the seventh basic convolution unit comprises a fifth convolution layer, a fifth batch normalization layer and a fifth activation function layer.
As an embodiment, the first feature extraction results, the fifth convolution value, and the sixth convolution value are used as the input of the seventh basic convolution unit 205 to obtain the seventh convolution value, and the operation of the seventh basic convolution unit 205 can be expressed as y7 = conv_base7(y5 + y6 + y_h), where y5 represents the fifth convolution value, y6 represents the sixth convolution value, y7 represents the seventh convolution value, and conv_base7 denotes the operation of the seventh basic convolution unit 205.
Step S56: and inputting the seventh convolution value into a linearization layer of the front-side copying recognition model for linear processing to obtain a first copying recognition result.
It can be understood that, in step S56, the linearization layer 206 of the front copying recognition model, like the linear layer 105, is implemented by a linear function, and the operation of the linearization layer 206 can be expressed as y_linear2 = linear(y7), where y_linear2 represents the first copying recognition result and y7 represents the seventh convolution value.
It can be understood that, in steps S52-S55, the second deep convolution unit 202 includes a fourth basic convolution unit, a fifth basic convolution unit, and a third convolution unit, and has the same structure as the first deep convolution unit 103; the fourth convolution unit 203 includes a third convolution layer and a third batch normalization layer, and has the same structure as the first convolution unit 102 and the second convolution unit 1033; the sixth basic convolution unit 204 and the seventh basic convolution unit 205 each include a convolution layer, a batch normalization layer, and an activation function layer, and have the same structure as the second basic convolution unit 1031 and the first basic convolution unit 101, which will not be described again here.
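The data flow of steps S52-S56, with its densely summed skip connections, can be sketched as follows, reusing the DeepConvUnit, ConvUnit, and BasicConvUnit sketches above; the single-value output of the linearization layer and the channel handling are assumptions:

```python
import torch.nn as nn

class FrontCopyHead(nn.Module):
    """Head of the front copying recognition model 20 (steps S52-S56)."""
    def __init__(self, channels: int):
        super().__init__()
        self.deep2 = DeepConvUnit(channels)
        self.block4 = ConvUnit(channels, channels)
        self.base6 = BasicConvUnit(channels, channels)
        self.base7 = BasicConvUnit(channels, channels)
        self.flatten = nn.Flatten()
        self.linear = nn.LazyLinear(1)  # first copying recognition result

    def forward(self, yh):
        y4 = self.deep2(yh)             # y4 = conv_deep2(y_h)
        y5 = self.block4(y4 + yh)       # y5 = conv_block4(y4 + y_h)
        y6 = self.base6(y4 + y5 + yh)   # y6 = conv_base6(y4 + y5 + y_h)
        y7 = self.base7(y5 + y6 + yh)   # y7 = conv_base7(y5 + y6 + y_h)
        return self.linear(self.flatten(y7))
```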
Step S6: and copying and identifying the standard identity card image through the front and back universal copying and identifying model to obtain a second copying and identifying result. Referring to fig. 10 and fig. 11, fig. 10 is a flowchart illustrating a step of obtaining a second reproduction identification result according to an embodiment of the present application, and fig. 11 is a block diagram illustrating a front-back general reproduction identification model according to an embodiment of the present application. Optionally, step S6 includes the following substeps:
in fig. 11, the front-back general copying recognition model 30 includes a second preset number of VGG network models 301, a third deep convolution unit 302, a fourth deep convolution unit 303, and a linear layer 304.
As can be seen from fig. 11, after the standard identity card image is input into the front-back general copying recognition model 30, the second feature extraction results are first output through data processing of the second preset number of VGG network models (the second preset number may be set according to the actual situation; 3 is taken as an example in fig. 11). The second feature extraction results are input into the third deep convolution unit 302, which outputs an eighth convolution value; the second feature extraction results and the eighth convolution value are then input into the fourth deep convolution unit 303, which outputs a ninth convolution value. The ninth convolution value is input into the linear layer 304, which outputs the general recognition result.
As an embodiment, in step S61, taking the second preset number of 3 set in fig. 11 as an example, the 3 VGG network models of the front-back general copying recognition model 30 can perform feature extraction on the standard identity card image from 3 dimensions, for example, extracting features of different dimensions such as color, text spacing, or information layout position on the identity card, to obtain the second feature extraction results. Extracting features from multiple dimensions improves the degree of fit between the second feature extraction results and the standard identity card image, so that the second feature extraction results retain more feature information from the standard identity card image, thereby improving the accuracy of the second copying recognition result obtained based on the second feature extraction results.
Step S61: respectively inputting the standard identity card image into a second preset number of VGG network models of the front-back general copying recognition model to perform feature extraction in a second preset number of dimensions to obtain a second preset number of second feature extraction results.
It can be understood that the VGG network model uses multiple layers of small convolution kernels and small-size pooling throughout, which reduces the parameters of convolution and pooling while still obtaining the second feature extraction result, reduces the data dimensionality of the identity card image, and improves the expressive capability of the obtained second feature extraction result for the identity card image.
As an embodiment, the processing procedure of the second preset number of VGG network models can be expressed by the formula

y_r = Σ_{i=1}^{m} VGG_i(input2)

where m represents the second preset number, input2 represents the standard identity card image, VGG_i represents the ith VGG network model, and y_r represents the second feature extraction result.
Step S62: and inputting second feature extraction results of a second preset number into a third deep convolution unit of the front-back general copying recognition model to obtain an eighth convolution value, wherein the third deep convolution unit comprises an eighth basic convolution unit, a ninth basic convolution unit and a fifth convolution unit.
As an embodiment, the second feature extraction results are used as the input of the third deep convolution unit 302 to obtain the eighth convolution value, and the operation of the third deep convolution unit 302 can be expressed as y8 = conv_deep3(y_r), where y8 represents the eighth convolution value, conv_deep3 denotes the operation of the third deep convolution unit 302, and y_r represents the second feature extraction result.
Step S63: and inputting second feature extraction results with a second preset number and the eighth convolution value into a fourth deep convolution unit of the front-back general copying recognition model for convolution processing to obtain a ninth convolution value, wherein the fourth deep convolution unit comprises a tenth basic convolution unit, an eleventh basic convolution unit and a sixth convolution unit.
As an embodiment, the second feature extraction results and the eighth convolution value are used as the input of the fourth deep convolution unit 303 to obtain the ninth convolution value, and the operation of the fourth deep convolution unit 303 can be expressed as y9 = conv_deep4(y_r + y8), where y8 represents the eighth convolution value, y9 represents the ninth convolution value, conv_deep4 denotes the operation of the fourth deep convolution unit 303, and y_r represents the second feature extraction result.
Step S64: and inputting the ninth convolution value into a linear layer of the front-back universal copying recognition model for linear processing to obtain a universal recognition result.
It can be understood that, in step S64, the linear layer 304 of the front-back general copying recognition model is implemented by a linear function, and the operation of the linear layer 304 can be expressed as y_linear3 = linear(y9), where y_linear3 represents the general recognition result, that is, the second copying recognition result, and y9 represents the ninth convolution value.
It is understood that in steps S62-S64, the third deep convolution unit 302 includes an eighth basic convolution unit, a ninth basic convolution unit and a fifth convolution unit, the fourth deep convolution unit 303 includes a tenth basic convolution unit, an eleventh basic convolution unit and a sixth convolution unit, and the third deep convolution unit 302 and the fourth deep convolution unit 303 are the same as the first deep convolution unit 103 in structure, and therefore, the detailed description thereof is omitted.
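Steps S62-S64 can be sketched analogously, reusing the DeepConvUnit sketch above; the one-dimensional output of the linear layer is an assumption:

```python
import torch.nn as nn

class FrontBackGeneralHead(nn.Module):
    """Head of the front-back general copying recognition model 30."""
    def __init__(self, channels: int):
        super().__init__()
        self.deep3 = DeepConvUnit(channels)
        self.deep4 = DeepConvUnit(channels)
        self.flatten = nn.Flatten()
        self.linear = nn.LazyLinear(1)  # general recognition result

    def forward(self, yr):
        y8 = self.deep3(yr)        # y8 = conv_deep3(y_r)
        y9 = self.deep4(yr + y8)   # y9 = conv_deep4(y_r + y8)
        return self.linear(self.flatten(y9))
```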
Step S7: and correcting the original image according to the shooting direction to obtain a corrected original image, and performing general copying recognition on the corrected original image through a general copying recognition model to obtain a third copying recognition result.
Referring to fig. 12 and fig. 13, fig. 12 is a flowchart illustrating a step of obtaining a third reproduction identification result according to an embodiment of the present application, and fig. 13 is a block diagram illustrating a general reproduction identification model according to an embodiment of the present application. Optionally, step S7 includes the following substeps:
step S71: and inputting the corrected original image into a convolution unit of the universal copying recognition model to obtain a tenth convolution result.
In step S71, the convolution units of the general copying recognition model 40 include a twelfth basic convolution unit 401, a thirteenth basic convolution unit 402, a fifth deep convolution unit 403, a first residual convolution unit 404, a first structural convolution unit 405, a second residual convolution unit 406, a second structural convolution unit 407, a third residual convolution unit 408, a third structural convolution unit 409, and a fourth structural convolution unit 410. The twelfth basic convolution unit 401 includes a sixth convolution layer, a sixth batch normalization layer, and a sixth activation function layer; the thirteenth basic convolution unit 402 includes a seventh convolution layer, a seventh batch normalization layer, and a seventh activation function layer; the fifth deep convolution unit 403 includes a fourteenth basic convolution unit, a fifteenth basic convolution unit, and a seventh convolution unit. The first residual convolution unit 404 includes a first specified number of sixth deep convolution units, where the first specified number is the number of all convolution units through which the corrected original image has passed after entering the general copying recognition model, and the first structural convolution unit 405 includes a first adaptive pooling unit 4051, a sixteenth basic convolution unit 4052, an eighth convolution unit 4053, and a first activation unit 4054. The second residual convolution unit 406 includes a second specified number of seventh deep convolution units, where the second specified number is likewise the number of all convolution units through which the corrected original image has passed after entering the general copying recognition model, and the second structural convolution unit includes a second adaptive pooling unit, a seventeenth basic convolution unit, a ninth convolution unit, and a second activation unit. The third residual convolution unit includes a third specified number of eighth deep convolution units, where the third specified number is a preset constant, and the third structural convolution unit includes a third adaptive pooling unit, an eighteenth basic convolution unit, a tenth convolution unit, and a third activation unit. The fourth structural convolution unit includes a fourth adaptive pooling unit, a nineteenth basic convolution unit, an eleventh convolution unit, and a fourth activation unit.
It can be understood that the twelfth basic convolution unit 401 has the same structure as the first basic convolution unit 101. As an embodiment, the calculation process of the twelfth basic convolution unit 401 can be expressed as y_a = conv_base12(input3), where y_a denotes the output result of the twelfth basic convolution unit 401, input3 denotes the corrected original image, and conv_base12 denotes the operation of the twelfth basic convolution unit 401.
It can be understood that the thirteenth basic convolution unit 402 has the same structure as the first basic convolution unit 101. As an embodiment, the calculation process of the thirteenth basic convolution unit 402 can be expressed as y_b = conv_base13(y_a), where y_a denotes the output result of the twelfth basic convolution unit 401, y_b denotes the output result of the thirteenth basic convolution unit 402, and conv_base13 denotes the operation of the thirteenth basic convolution unit 402.
It can be understood that the fifth deep convolution unit 403 has the same structure as the first deep convolution unit 103. As an embodiment, the calculation process of the fifth deep convolution unit 403 can be expressed as y_c = conv_deep5(y_b), where y_b denotes the output result of the thirteenth basic convolution unit 402, y_c represents the output result of the fifth deep convolution unit 403, and conv_deep5 denotes the operation of the fifth deep convolution unit 403.
As an embodiment, the first residual convolution unit 404 may be implemented by a residual convolution neural network, and it is understood that the first residual convolution unit 404 includes a first specified number of sixth deep convolution units, where the first specified number is the number of all convolution units through which the corrected original image enters the universal duplication recognition model, and the corrected original image enters the universal duplication recognition model and passes through four convolution units, that is, a twelfth basic convolution unit 401, a thirteenth basic convolution unit 402, a fifth deep convolution unit 403, and a first residual convolution unit 404, so that the first specified number is 4. As an embodiment, the calculation process of the first residual convolution unit 404 can be expressed as
y_d = conv_res1(y_c)

where y_c denotes the output result of the fifth deep convolution unit 403, y_d represents the output result of the first residual convolution unit 404, and conv_res1 denotes the operation in the first residual convolution unit 404, that is, the sequential application of the first specified number (4) of sixth deep convolution units.
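A hedged sketch of such a residual convolution unit is given below, reusing the DeepConvUnit sketch above; whether the sub-units are chained purely sequentially or wrapped with per-unit skip connections is not spelled out, so the plain sequential chaining here is an assumption:

```python
import torch.nn as nn

class ResidualConvUnit(nn.Module):
    """Residual convolution unit: a specified number of deep convolution
    units applied in sequence (num_units = 4 for the first one)."""
    def __init__(self, channels: int, num_units: int):
        super().__init__()
        self.units = nn.Sequential(*[DeepConvUnit(channels) for _ in range(num_units)])

    def forward(self, x):
        # e.g. y_d = conv_res1(y_c) with num_units = 4
        return self.units(x)
```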
Referring to fig. 14, fig. 14 is a block diagram of a first structural convolution unit according to an embodiment of the present application. As an embodiment, the first structural convolution unit 405 includes a first adaptive pooling unit 4051, a sixteenth basic convolution unit 4052, an eighth convolution unit 4053, and a first activation unit 4054. As an embodiment, the first adaptive pooling unit 4051 may be implemented by an adaptiveAvgPool function, and the first activation unit 4054 may be implemented by an hsigmoid function, where hsigmoid(x) = relu(x + 3)/6 and x is the input to the hsigmoid function (here derived from y_d).
As an embodiment, the calculation process of the first structural convolution unit 405 can be expressed as y_e = conv_sem1(y_d); specifically, it can be expressed as y_e = conv_sem1(y_d) = hsigmoid1(conv_block8(conv_base16(adaptiveAvgPool(y_d)))).
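This unit resembles a squeeze-and-excitation-style block; a minimal sketch, assuming 1x1 convolutions after the pooling and a single-pixel pooled output (both assumptions), is:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def hsigmoid(x: torch.Tensor) -> torch.Tensor:
    # hsigmoid(x) = relu(x + 3) / 6, as defined above
    return F.relu(x + 3) / 6

class StructuralConvUnit(nn.Module):
    """Structural convolution unit: adaptive average pooling, a basic
    convolution unit, a convolution unit, then hsigmoid activation."""
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.base = BasicConvUnit(channels, channels, kernel_size=1)
        self.block = ConvUnit(channels, channels, kernel_size=1)

    def forward(self, y):
        # y_e = hsigmoid(conv_block8(conv_base16(adaptiveAvgPool(y_d))))
        return hsigmoid(self.block(self.base(self.pool(y))))
```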
In one embodiment, the second residual convolution unit 406 includes a second specified number of seventh deep convolution units, the second specified number is the number of all convolution units through which the corrected original image enters the generic copy identification model, and the corrected original image enters the generic copy identification model and then passes through six convolution units, namely, a twelfth basic convolution unit 401, a thirteenth basic convolution unit 402, a fifth deep convolution unit 403, a first residual convolution unit 404, a first structure convolution unit 405, and a second residual convolution unit 406, so that the second specified number is 6. As an embodiment, the calculation process of the second residual convolution unit 406 can be expressed as
y_f = conv_res2(y_e)

where y_e represents the output result of the first structural convolution unit 405, y_f represents the output result of the second residual convolution unit 406, and conv_res2 denotes the operation in the second residual convolution unit 406, that is, the sequential application of the second specified number (6) of seventh deep convolution units.
It can be understood that the second structural convolution unit 407 has the same structure as the first structural convolution unit 405. As an embodiment, the calculation process after the output y_f of the second residual convolution unit 406 is input into the second structural convolution unit 407 can be expressed as y_g = conv_sem2(y_f) = hsigmoid2(conv_block9(conv_base17(adaptiveAvgPool(y_f)))), where y_g represents the output result of the second structural convolution unit 407.
Similarly, the operation of inputting the output result y_g of the second structural convolution unit 407 into the third residual convolution unit 408 can be expressed as

y_k = conv_res3(y_g)

where the third specified number is set to 2, y_k represents the output result of the third residual convolution unit 408, and conv_res3 denotes the operation in the third residual convolution unit 408.
Similarly, the third structural convolution unit 409 has the same structure as the first structural convolution unit 405. As an embodiment, the calculation process after the output result y_k of the third residual convolution unit 408 is input into the third structural convolution unit 409 can be expressed as y_l = conv_sem3(y_k) = hsigmoid3(conv_block10(conv_base18(adaptiveAvgPool(y_k)))), where y_l represents the output result of the third structural convolution unit 409.
Similarly, the fourth structural convolution unit 410 has the same structure as the first structural convolution unit 405. As an embodiment, the calculation process after the output result y_l of the third structural convolution unit 409 is input into the fourth structural convolution unit 410 can be expressed as y_m = conv_sem4(y_l) = hsigmoid4(conv_block11(conv_base19(adaptiveAvgPool(y_l)))), where y_m represents the output result of the fourth structural convolution unit 410.
With continued reference to FIG. 13, the generic rendering recognition model 40 further includes a flattening layer 411, a linear layer 412, and a batch normalization layer 413.
Step S72: and inputting the tenth convolution result into a flattening layer of the universal copying recognition model for flattening processing to obtain a flattening processing result.
As an embodiment, the processing procedure of the flattening layer 411 of the general copying recognition model 40 is similar to the operation of the flattening layer 104, and the specific calculation process can be expressed as y_flatten2 = flatten(y_m), where y_m represents the output result of the fourth structural convolution unit 410 and y_flatten2 is the output result of the flattening layer 411.
Step S73: and inputting the flattening processing result into a linear layer of the universal copying recognition model for linear processing to obtain a linearization processing result.
In one embodiment, the processing procedure of the linear layer 412 of the general copying recognition model 40 is similar to the operation of the linearization layer 206, and the specific calculation process can be expressed as y_linear4 = linear(y_flatten2), where y_linear4 represents the output result of the linear layer 412.
Step S74: and inputting the linearization processing result into a seventh batch standardization layer of the universal copying recognition model to obtain a third copying recognition result.
As an embodiment, the processing procedure of the batch normalization layer 413 of the general copying recognition model 40 is similar to that of the first batch normalization layer 1012, and the specific calculation process can be expressed as y_n = bn(y_linear4), where y_linear4 represents the output result of the linear layer 412 and y_n represents the output result of the batch normalization layer 413, i.e., the third copying recognition result.
Step S8: and determining an identification card reproduction identification result according to the first reproduction identification result, the second reproduction identification result and the third reproduction identification result, wherein the identification card reproduction identification result is used for indicating whether the original image is reproduced.
It can be understood that the identity card copying recognition outputs a numerical value between 0 and 1; the closer the value is to 1, the higher the possibility that the identity card image is a copied image.
Optionally, step S8 includes the following substeps:
obtaining a first preset weight of the first reproduction identification result, a second preset weight of the second reproduction identification result and a third preset weight of the third reproduction identification result based on the logistic regression model;
and carrying out weighted average on the first copying recognition result, the second copying recognition result and the third copying recognition result based on the first preset weight, the second preset weight and the third preset weight to obtain an identity card copying recognition result.
As an embodiment, the first preset weight of the first copying recognition result obtained based on the logistic regression model is denoted by a (when the identity card image is the back image, the weight of the copying result obtained by the back copying recognition model is denoted by d, and the obtained back copying recognition result is denoted by DL4); the second preset weight of the second copying recognition result obtained based on the logistic regression model is denoted by b; and the third preset weight of the third copying recognition result obtained based on the logistic regression model is denoted by c. The first copying recognition result y_linear2 obtained in step S5 is denoted by DL1, the second copying recognition result y_linear3 obtained in step S6 is denoted by DL2, and the third copying recognition result y_linear4 obtained in step S7 is denoted by DL3. The finally obtained copying recognition result can then be expressed as:

result = a·DL1 + b·DL2 + c·DL3 for the identity card front image, or result = d·DL4 + b·DL2 + c·DL3 for the identity card back image.
In the case where all four weights obtained by the linear regression model are constants smaller than 1, the weights obtained by the linear regression model may be a = b = 0.4469, c = 0.3577, and d = 0.1954.
It can be understood that, the linear regression model is a statistical analysis model that determines the interdependent quantitative relationship between two or more variables by using regression analysis in mathematical statistics, and can use the least square method to find the linear relationship between the variables, that is, can use the least square method to obtain the linear relationship between the first preset weight, the second preset weight and the third preset weight, and a value of any point in the linear relationship can be taken to obtain a possible value of the first preset weight, the second preset weight and the third preset weight.
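For illustration, the weighted combination can be sketched as below; the default weight constants follow the example values read above, and both those values and the function interface are assumptions:

```python
def fuse_results(dl_side: float, dl2: float, dl3: float, is_front: bool = True,
                 a: float = 0.4469, b: float = 0.4469,
                 c: float = 0.3577, d: float = 0.1954) -> float:
    """Combine the three copying recognition results into the final score.

    dl_side is DL1 (front copying model output) when is_front is True,
    otherwise DL4 (back copying model output)."""
    weight = a if is_front else d
    return weight * dl_side + b * dl2 + c * dl3
```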
Referring to fig. 15, fig. 15 is a schematic view of an identification card duplication recognition apparatus according to an embodiment of the present application. The identification card reproduction recognizing device 90 includes:
the front-back recognition module 901 is configured to perform front-back recognition on the original image to obtain a front-back recognition result and a clipped identity card image, where the front-back recognition result is used to indicate that the identity card image is an identity card front image or an identity card back image.
And the direction identification module 902 is configured to perform direction identification on the identity card image by using the identity card direction identification model to obtain the shooting direction of the identity card image.
And the correcting module 903 is used for performing direction correction on the identity card image according to the shooting direction to obtain a standard identity card image.
And the first copying and recognizing module 904 is configured to copy and recognize the standard identity card image by using the identity card copying and recognizing model based on the front and back recognition results to obtain a first copying and recognizing result.
And the second copying and recognizing module 905 is used for copying and recognizing the standard identity card image through the front and back general copying and recognizing model to obtain a second copying and recognizing result.
And a third copying and identifying module 906, configured to correct the original image according to the shooting direction to obtain a corrected original image, and perform general copying and identifying on the corrected original image through the general copying and identifying model to obtain a third copying and identifying result.
And the analysis module 907 is configured to determine an identification card copying recognition result according to the first copying recognition result, the second copying recognition result and the third copying recognition result, where the identification card copying recognition result is used to indicate whether the original image is copied.
Optionally, the identification card duplication recognition apparatus 90 further includes a preprocessing module 908, and the preprocessing module 908 includes:
judging whether the original image comprises an identity card image or not;
and when the original image comprises the identity card image, executing the steps of carrying out the front and back identification of the identity card on the original image to obtain a front and back identification result and the clipped identity card image.
Optionally, the direction identifying module 902 is configured to:
inputting an identity card image into a first basic convolution unit of an identity card direction identification model to perform basic convolution processing to obtain a first convolution value, wherein the first basic convolution unit comprises a first convolution layer, a first batch normalization layer and a first activation function layer.
And inputting the first convolution value into a first convolution unit of the identification card direction identification model for convolution processing to obtain a second convolution value, wherein the first convolution unit comprises a second convolution layer and a second batch standardization layer.
And inputting the first convolution value and the second convolution value into a first deep convolution unit of the identification card direction identification model for deep convolution processing to obtain a third convolution value, wherein the first deep convolution unit comprises a second basic convolution unit, a third basic convolution unit and a second convolution unit.
And inputting the third convolution value into a flattening layer of the identification card direction recognition model for flattening treatment to obtain a flattened convolution value.
And inputting the flattened convolution value into a linear layer of the identification card direction recognition model for linearization processing to obtain the shooting direction.
Optionally, the first duplication recognition module 904 is configured to:
when the front and back recognition results show that the identity card image is the identity card front image, inputting the identity card image into a depth residual error network model of a first preset number of the front copying recognition model to perform feature extraction of the dimensions of the first preset number to obtain first feature extraction results of the first preset number;
inputting a first preset number of first feature extraction results into a second deep convolution unit of the front-side copying recognition model to obtain a fourth convolution value, wherein the second deep convolution unit comprises a fourth basic convolution unit, a fifth basic convolution unit and a third convolution unit;
inputting a first preset number of first feature extraction results and a fourth convolution value into a fourth convolution unit of the front-side copying recognition model for convolution processing to obtain a fifth convolution value, wherein the fourth convolution unit comprises a third convolution layer and a third batch standardization layer;
inputting a first preset number of first feature extraction results, a fourth convolution value and a fifth convolution value into a sixth basic convolution unit of the front-side copying recognition model to obtain a sixth convolution value, wherein the sixth basic convolution unit comprises a fourth convolution layer, a fourth batch normalization layer and a fourth activation function layer;
inputting a first preset number of first feature extraction results, a fifth convolution value and a sixth convolution value into a seventh basic convolution unit of the front-side copying recognition model to obtain a seventh convolution value, wherein the seventh basic convolution unit comprises a fifth convolution layer, a fifth batch normalization layer and a fifth activation function layer;
and inputting the seventh convolution value into a linearization layer of the front-side copying recognition model for linear processing to obtain a first copying recognition result.
Optionally, the second duplication recognition module 905 is specifically configured to:
respectively inputting the identity card image into a second preset number of VGG network models of the front-back general copying recognition model to perform feature extraction in a second preset number of dimensions to obtain a second preset number of second feature extraction results;
inputting the second preset number of second feature extraction results into a third deep convolution unit of the front-back general copying recognition model to obtain an eighth convolution value, wherein the third deep convolution unit comprises an eighth basic convolution unit, a ninth basic convolution unit and a fifth convolution unit;
inputting second feature extraction results with a second preset number and eighth convolution values into a fourth deep convolution unit of the front-back general copying recognition model for convolution processing to obtain a ninth convolution value, wherein the fourth deep convolution unit comprises a tenth basic convolution unit, an eleventh basic convolution unit and a sixth convolution unit;
and inputting the ninth convolution value into a linear layer of the front-back universal copying recognition model for linear processing to obtain a universal recognition result.
Optionally, the third duplication recognition module 906 includes:
inputting the corrected original image into a convolution unit of the general copying recognition model to obtain a tenth convolution result, wherein the convolution units of the general copying recognition model comprise a twelfth basic convolution unit, a thirteenth basic convolution unit, a fifth deep convolution unit, a first residual convolution unit, a first structural convolution unit, a second residual convolution unit, a second structural convolution unit, a third residual convolution unit, a third structural convolution unit and a fourth structural convolution unit; the twelfth basic convolution unit comprises a sixth convolution layer, a sixth batch normalization layer and a sixth activation function layer; the thirteenth basic convolution unit comprises a seventh convolution layer, a seventh batch normalization layer and a seventh activation function layer; the fifth deep convolution unit comprises a fourteenth basic convolution unit, a fifteenth basic convolution unit and a seventh convolution unit; the first residual convolution unit comprises a first specified number of sixth deep convolution units, the first specified number being the number of all convolution units through which the corrected original image has passed after entering the general copying recognition model, and the first structural convolution unit comprises a first adaptive pooling unit, a sixteenth basic convolution unit, an eighth convolution unit and a first activation unit; the second residual convolution unit comprises a second specified number of seventh deep convolution units, the second specified number being the number of all convolution units through which the corrected original image has passed after entering the general copying recognition model, and the second structural convolution unit comprises a second adaptive pooling unit, a seventeenth basic convolution unit, a ninth convolution unit and a second activation unit; the third residual convolution unit comprises a third specified number of eighth deep convolution units, the third specified number being a constant, and the third structural convolution unit comprises a third adaptive pooling unit, an eighteenth basic convolution unit, a tenth convolution unit and a third activation unit; the fourth structural convolution unit comprises a fourth adaptive pooling unit, a nineteenth basic convolution unit, an eleventh convolution unit and a fourth activation unit;
inputting the tenth convolution result into a flattening layer of the universal copying recognition model for flattening processing to obtain a flattening processing result;
inputting the flattening processing result into a linear layer of the universal copying recognition model for linear processing to obtain a linearization processing result;
and inputting the linearization processing result into a batch standardization layer of the universal reproduction identification model to obtain a third reproduction identification result.
Optionally, the analysis module 907 is configured to:
obtaining a first preset weight of the first reproduction identification result, a second preset weight of the second reproduction identification result and a third preset weight of the third reproduction identification result based on the logistic regression model;
and carrying out weighted average on the first copying recognition result, the second copying recognition result and the third copying recognition result based on the first preset weight, the second preset weight and the third preset weight to obtain an identity card copying recognition result.
The present embodiment also provides an electronic device, where the electronic device includes a memory and a processor, where the memory stores program instructions, and the processor executes the program instructions to perform the steps of any of the above methods.
The present embodiment also provides a storage medium having stored therein computer program instructions, which when executed by a processor, perform the steps of any of the above methods.
To sum up, the embodiment of the present application provides an identification card reproduction identification method, apparatus, electronic device and storage medium, and relates to the technical field of image processing, and the identification card reproduction identification method includes: and carrying out front and back recognition on the original image to obtain a front and back recognition result and a clipped identity card image, wherein the front and back recognition result is used for indicating that the identity card image is an identity card front image or an identity card back image. And carrying out direction identification on the identity card image by using an identity card direction identification model to obtain the shooting direction of the identity card image, and carrying out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image. And based on the front and back recognition results, carrying out copying recognition on the standard identity card image by using an identity card copying recognition model to obtain a first copying recognition result. And copying and identifying the standard identity card image through a front-back universal copying and identifying model to obtain a second copying and identifying result. And correcting the original image according to the shooting direction to obtain a corrected original image, and performing general copying recognition on the corrected original image through a general copying recognition model to obtain a third copying recognition result. And determining an identification card reproduction identification result according to the first reproduction identification result, the second reproduction identification result and the third reproduction identification result, wherein the identification card reproduction identification result is used for indicating whether the original image is reproduced.
In the above implementation process, the identity card copying recognition model performs copying recognition on the standard identity card image obtained after front and back recognition, so the input of the identity card copying recognition model is known to be either the identity card front image or the identity card back image; the front-back general copying recognition model performs copying recognition on the standard identity card image without relying on the front and back recognition result; and the general copying recognition model performs copying recognition on the original image obtained through direction correction. That is, the identity card copying recognition model, the front-back general copying recognition model, and the general copying recognition model respectively perform copying recognition on images in different states that contain the same identity card information, and the final copying recognition result of the original image is determined from the first copying recognition result, the second copying recognition result, and the third copying recognition result obtained from these images. The method imposes no specific requirements on the image to be recognized, has strong applicability, and can eliminate various interference factors caused by shooting differences, thereby improving the accuracy of identity card copying recognition.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. The apparatus embodiments described above are merely illustrative, and for example, the block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices according to various embodiments of the present application. In this regard, each block in the block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams, and combinations of blocks in the block diagrams, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Accordingly, the present embodiment further provides a readable storage medium storing computer program instructions which, when read and executed by a processor, perform the steps of the identity card copying identification method of any of the above embodiments. Based on such understanding, the technical solution of the present application, or the part thereof that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (10)

1. An identity card copying and identifying method, characterized by comprising the following steps:
carrying out front and back recognition on the original image to obtain a front and back recognition result and a cropped identity card image, wherein the front and back recognition result is used for indicating whether the identity card image is a front image or a back image of the identity card;
carrying out direction recognition on the identity card image by using an identity card direction recognition model to obtain the shooting direction of the identity card image;
carrying out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image;
based on the front and back recognition result, performing copying recognition on the standard identity card image by using an identity card copying recognition model to obtain a first copying recognition result;
performing copying recognition on the standard identity card image through a front-back universal copying recognition model to obtain a second copying recognition result;
correcting the original image according to the shooting direction to obtain a corrected original image, and performing universal copying recognition on the corrected original image through a universal copying recognition model to obtain a third copying recognition result;
and determining an identity card copying recognition result according to the first copying recognition result, the second copying recognition result and the third copying recognition result, wherein the identity card copying recognition result is used for indicating whether the original image is a copy.
2. The method of claim 1, wherein before carrying out front and back recognition on the original image to obtain the front and back recognition result and the cropped identity card image, the method further comprises:
judging whether the original image comprises an identity card image;
and when the original image comprises the identity card image, executing the step of carrying out front and back recognition on the original image to obtain the front and back recognition result and the cropped identity card image.
3. The method according to claim 1, wherein carrying out direction recognition on the identity card image by using the identity card direction recognition model to obtain the shooting direction of the identity card image comprises:
inputting the identity card image into a first basic convolution unit of the identity card direction recognition model for basic convolution processing to obtain a first convolution value, wherein the first basic convolution unit comprises a first convolution layer, a first batch normalization layer and a first activation function layer;
inputting the first convolution value into a first convolution unit of the identity card direction recognition model for convolution processing to obtain a second convolution value, wherein the first convolution unit comprises a second convolution layer and a second batch normalization layer;
inputting the first convolution value and the second convolution value into a first deep convolution unit of the identity card direction recognition model for deep convolution processing to obtain a third convolution value, wherein the first deep convolution unit comprises a second basic convolution unit, a third basic convolution unit and a second convolution unit;
inputting the third convolution value into a flattening layer of the identity card direction recognition model for flattening processing to obtain a flattened convolution value;
and inputting the flattened convolution value into a linear layer of the identity card direction recognition model for linearization processing to obtain the shooting direction.
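For illustration, a minimal PyTorch sketch of the direction recognition model of claim 3 follows. Channel widths, strides, the 224x224 input size, the four-way output, and the use of concatenation to feed the first and second convolution values jointly into the first deep convolution unit are all assumptions, not details taken from the patent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BasicConvUnit(nn.Sequential):
    """'Basic convolution unit': convolution + batch normalization + activation."""
    def __init__(self, cin: int, cout: int, stride: int = 1):
        super().__init__(
            nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(cout),
            nn.ReLU(inplace=True),
        )


class ConvUnit(nn.Sequential):
    """'Convolution unit': convolution + batch normalization, no activation."""
    def __init__(self, cin: int, cout: int, stride: int = 1):
        super().__init__(
            nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(cout),
        )


class DirectionModel(nn.Module):
    """Sketch of the direction recognition model of claim 3; channel widths,
    strides, the 224x224 input and the concatenation are assumptions."""
    def __init__(self, num_directions: int = 4):
        super().__init__()
        self.basic1 = BasicConvUnit(3, 32, stride=2)   # first basic convolution unit
        self.conv1 = ConvUnit(32, 32, stride=2)        # first convolution unit
        self.deep1 = nn.Sequential(                    # first deep convolution unit:
            BasicConvUnit(64, 64, stride=2),           #   second basic convolution unit
            BasicConvUnit(64, 64),                     #   third basic convolution unit
            ConvUnit(64, 64, stride=2),                #   second convolution unit
        )
        self.flatten = nn.Flatten()                    # flattening layer
        self.linear = nn.Linear(64 * 28 * 28, num_directions)  # linear layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v1 = self.basic1(x)                            # first convolution value
        v2 = self.conv1(v1)                            # second convolution value
        v2 = F.interpolate(v2, size=v1.shape[2:])      # align sizes so both values can be combined
        v3 = self.deep1(torch.cat([v1, v2], dim=1))    # third convolution value
        return self.linear(self.flatten(v3))           # logits over 0/90/180/270 degrees
```

A quick smoke test under the assumed input size: `DirectionModel()(torch.randn(1, 3, 224, 224))` returns a tensor of shape (1, 4), one logit per candidate shooting direction.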
4. The method according to claim 1, wherein the identity card copying recognition model comprises a front copying recognition model and a back copying recognition model that are identical in structure, and wherein, based on the front and back recognition result, performing copying recognition on the standard identity card image by using the identity card copying recognition model to obtain the first copying recognition result comprises:
when the front and back recognition result shows that the identity card image is a front image of the identity card, inputting the standard identity card image into a first preset number of deep residual network models of the front copying recognition model to perform feature extraction in a first preset number of dimensions, so as to obtain a first preset number of first feature extraction results;
inputting the first preset number of first feature extraction results into a second deep convolution unit of the front copying recognition model to obtain a fourth convolution value, wherein the second deep convolution unit comprises a fourth basic convolution unit, a fifth basic convolution unit and a third convolution unit;
inputting the first preset number of first feature extraction results and the fourth convolution value into a fourth convolution unit of the front copying recognition model for convolution processing to obtain a fifth convolution value, wherein the fourth convolution unit comprises a third convolution layer and a third batch normalization layer;
inputting the first preset number of first feature extraction results, the fourth convolution value and the fifth convolution value into a sixth basic convolution unit of the front copying recognition model to obtain a sixth convolution value, wherein the sixth basic convolution unit comprises a fourth convolution layer, a fourth batch normalization layer and a fourth activation function layer;
inputting the first preset number of first feature extraction results, the fifth convolution value and the sixth convolution value into a seventh basic convolution unit of the front copying recognition model to obtain a seventh convolution value, wherein the seventh basic convolution unit comprises a fifth convolution layer, a fifth batch normalization layer and a fifth activation function layer;
and inputting the seventh convolution value into a linearization layer of the front copying recognition model for linear processing to obtain the first copying recognition result.
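A hedged PyTorch sketch of the front copying recognition model of claim 4 is given below. The number of deep residual backbones (two), the resnet18 choice, all channel widths, the concatenation used to combine earlier results, and the pooled two-way head standing in for the linearization layer are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


def basic_unit(cin: int, cout: int) -> nn.Sequential:
    # 'basic convolution unit': convolution + batch normalization + activation
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1, bias=False),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))


def conv_unit(cin: int, cout: int) -> nn.Sequential:
    # 'convolution unit': convolution + batch normalization, no activation
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1, bias=False),
                         nn.BatchNorm2d(cout))


class FrontRecaptureModel(nn.Module):
    """Sketch of the front copying recognition model of claim 4; the backbone
    count, resnet18 choice, channel widths and pooled 2-way head are assumed."""
    def __init__(self, num_backbones: int = 2, width: int = 64):
        super().__init__()
        # deep residual network feature extractors, truncated to spatial maps
        self.backbones = nn.ModuleList(
            nn.Sequential(*list(resnet18(weights=None).children())[:-2])
            for _ in range(num_backbones))
        feat = 512 * num_backbones
        self.deep2 = nn.Sequential(basic_unit(feat, width),     # second deep convolution unit:
                                   basic_unit(width, width),    #   two basic convolution units
                                   conv_unit(width, width))     #   plus a convolution unit
        self.conv4 = conv_unit(feat + width, width)             # fourth convolution unit
        self.basic6 = basic_unit(feat + 2 * width, width)       # sixth basic convolution unit
        self.basic7 = basic_unit(feat + 2 * width, width)       # seventh basic convolution unit
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, 2))          # stands in for the linearization layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.backbones], dim=1)  # first feature extraction results
        v4 = self.deep2(feats)                                     # fourth convolution value
        v5 = self.conv4(torch.cat([feats, v4], dim=1))             # fifth convolution value
        v6 = self.basic6(torch.cat([feats, v4, v5], dim=1))        # sixth convolution value
        v7 = self.basic7(torch.cat([feats, v5, v6], dim=1))        # seventh convolution value
        return self.head(v7)                                       # first copying recognition logits
```

The repeated re-injection of the backbone features into each fusion stage follows the claim's wording that the first feature extraction results accompany each successive convolution value; combining them by channel concatenation is this sketch's reading, not the patent's stated mechanism.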
5. The method according to claim 1, wherein performing copying recognition on the standard identity card image through the front-back universal copying recognition model to obtain the second copying recognition result comprises:
respectively inputting the standard identity card image into a second preset number of VGG network models of the front-back universal copying recognition model to perform feature extraction in a second preset number of dimensions, so as to obtain a second preset number of second feature extraction results;
inputting the second preset number of second feature extraction results into a third deep convolution unit of the front-back universal copying recognition model to obtain an eighth convolution value, wherein the third deep convolution unit comprises an eighth basic convolution unit, a ninth basic convolution unit and a fifth convolution unit;
inputting the second preset number of second feature extraction results and the eighth convolution value into a fourth deep convolution unit of the front-back universal copying recognition model for convolution processing to obtain a ninth convolution value, wherein the fourth deep convolution unit comprises a tenth basic convolution unit, an eleventh basic convolution unit and a sixth convolution unit;
and inputting the ninth convolution value into a linear layer of the front-back universal copying recognition model for linear processing to obtain the second copying recognition result.
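The following sketch reads claim 5 the same way: a set of VGG feature extractors followed by two deep convolution units and a linear head. The backbone count, the vgg16 variant, the channel widths and the pooled head are again assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


def deep_unit(cin: int, cout: int) -> nn.Sequential:
    # 'deep convolution unit': two basic convolution units plus a convolution unit
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1, bias=False), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1, bias=False), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1, bias=False), nn.BatchNorm2d(cout))


class UniversalFrontBackModel(nn.Module):
    """Sketch of the front-back universal model of claim 5; backbone count,
    vgg16 variant, channel widths and the pooled linear head are assumed."""
    def __init__(self, num_backbones: int = 2, width: int = 64):
        super().__init__()
        # VGG network feature extractors emitting 512-channel spatial maps
        self.backbones = nn.ModuleList(vgg16(weights=None).features
                                       for _ in range(num_backbones))
        feat = 512 * num_backbones
        self.deep3 = deep_unit(feat, width)                     # third deep convolution unit
        self.deep4 = deep_unit(feat + width, width)             # fourth deep convolution unit
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, 2))          # linear layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.backbones], dim=1)  # second feature extraction results
        v8 = self.deep3(feats)                                     # eighth convolution value
        v9 = self.deep4(torch.cat([feats, v8], dim=1))             # ninth convolution value
        return self.head(v9)                                       # second copying recognition logits
```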
6. The method according to claim 1, wherein performing universal copying recognition on the corrected original image through the universal copying recognition model to obtain the third copying recognition result comprises:
inputting the corrected original image into a convolution unit of the universal copying recognition model to obtain a tenth convolution result, wherein the convolution unit of the universal copying recognition model comprises a twelfth basic convolution unit, a thirteenth basic convolution unit, a fifth deep convolution unit, a first residual convolution unit, a first structural convolution unit, a second residual convolution unit, a second structural convolution unit, a third residual convolution unit, a third structural convolution unit and a fourth structural convolution unit; the twelfth basic convolution unit comprises a sixth convolution layer, a sixth batch normalization layer and a sixth activation function layer; the thirteenth basic convolution unit comprises a seventh convolution layer, a seventh batch normalization layer and a seventh activation function layer; the fifth deep convolution unit comprises a fourteenth basic convolution unit, a fifteenth basic convolution unit and a seventh convolution unit; the first residual convolution unit comprises a first specified number of sixth deep convolution units, the first specified number being the number of convolution units through which the corrected original image has passed since entering the universal copying recognition model, and the first structural convolution unit comprises a first adaptive pooling unit, a sixteenth basic convolution unit, an eighth convolution unit and a first activation unit; the second residual convolution unit comprises a second specified number of seventh deep convolution units, the second specified number being the number of convolution units through which the corrected original image has passed since entering the universal copying recognition model, and the second structural convolution unit comprises a second adaptive pooling unit, a seventeenth basic convolution unit, a ninth convolution unit and a second activation unit; the third residual convolution unit comprises a third specified number of eighth deep convolution units, the third specified number being a constant, and the third structural convolution unit comprises a third adaptive pooling unit, an eighteenth basic convolution unit, a tenth convolution unit and a third activation unit; and the fourth structural convolution unit comprises a fourth adaptive pooling unit, a nineteenth basic convolution unit, an eleventh convolution unit and a fourth activation unit;
inputting the tenth convolution result into a flattening layer of the universal copying recognition model for flattening processing to obtain a flattening processing result;
inputting the flattening processing result into a linear layer of the universal copying recognition model for linear processing to obtain a linearization processing result;
and inputting the linearization processing result into a batch normalization layer of the universal copying recognition model to obtain the third copying recognition result.
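Claim 6 enumerates a long stack of units; rather than guess the full topology, the sketch below illustrates only its two characteristic building blocks: a "structural convolution unit" (adaptive pooling unit, basic convolution unit, convolution unit, activation unit), read here as a squeeze-and-excitation style channel gate, and a "residual convolution unit" built from a specified number of deep convolution units. Both readings, together with all widths, depths and the reduction ratio, are assumptions.

```python
import torch
import torch.nn as nn


class StructuralUnit(nn.Module):
    """One reading of the 'structural convolution unit' of claim 6: adaptive
    pooling, a basic convolution unit, a convolution unit and an activation
    unit, assembled as a channel-attention gate (an assumption)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                        # adaptive pooling unit
        self.basic = nn.Sequential(                                # basic convolution unit
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.BatchNorm2d(channels // reduction), nn.ReLU(inplace=True))
        self.conv = nn.Sequential(                                 # convolution unit
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.act = nn.Sigmoid()                                    # activation unit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.act(self.conv(self.basic(self.pool(x))))  # channel-wise gating


class ResidualStack(nn.Module):
    """One reading of a 'residual convolution unit': a specified number of
    deep convolution units wrapped in skip connections; depth 3 is assumed."""
    def __init__(self, channels: int, depth: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels))
            for _ in range(depth))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = torch.relu(x + block(x))  # residual connection around each deep unit
        return x
```

Alternating residual stacks with channel-gating units of this kind is a common way to deepen a recognition backbone while keeping gradients stable, which is consistent with, though not dictated by, the unit ordering in the claim.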
7. The method of claim 1, wherein determining the identity card copying recognition result according to the first copying recognition result, the second copying recognition result and the third copying recognition result comprises:
obtaining a first preset weight for the first copying recognition result, a second preset weight for the second copying recognition result and a third preset weight for the third copying recognition result based on a logistic regression model;
and carrying out a weighted average of the first copying recognition result, the second copying recognition result and the third copying recognition result based on the first preset weight, the second preset weight and the third preset weight to obtain the identity card copying recognition result.
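For illustration, the sketch below shows one plausible reading of claim 7: fit a logistic regression over the three model scores on labeled validation data, reuse its normalized coefficients as the three preset weights, and take the weighted average as the final result. The toy data, the absolute-value normalization of the coefficients and the variable names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy validation scores; each row holds the first, second and third copying
# recognition results for one image (all values here are made up).
scores = np.array([[0.91, 0.85, 0.77],
                   [0.12, 0.20, 0.31],
                   [0.88, 0.67, 0.90],
                   [0.05, 0.15, 0.09]])
labels = np.array([1, 0, 1, 0])  # 1: copied (recaptured), 0: genuine

# Fit a logistic regression over the three scores, then reuse its
# coefficients, normalized to sum to one, as the three preset weights.
lr = LogisticRegression().fit(scores, labels)
coef = np.abs(lr.coef_[0])
w1, w2, w3 = coef / coef.sum()


def fuse(p1: float, p2: float, p3: float) -> float:
    # weighted average of the three copying recognition results
    return w1 * p1 + w2 * p2 + w3 * p3


print(fuse(0.9, 0.8, 0.7))  # final identity card copying recognition score
```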
8. An identity card copying identification device, the device comprising:
the front and back recognition module, used for carrying out front and back recognition on the original image to obtain a front and back recognition result and a cropped identity card image, wherein the front and back recognition result is used for indicating whether the identity card image is a front image or a back image of the identity card;
the direction recognition module, used for carrying out direction recognition on the identity card image by using an identity card direction recognition model to obtain the shooting direction of the identity card image;
the correction module is used for carrying out direction correction on the identity card image according to the shooting direction to obtain a standard identity card image;
the first copying recognition module is used for copying and recognizing the standard identity card image by using an identity card copying recognition model based on the front and back recognition results to obtain a first copying recognition result;
the second copying recognition module is used for copying and recognizing the standard identity card image through a front-back universal copying recognition model to obtain a second copying recognition result;
the third copying recognition module, used for correcting the original image according to the shooting direction to obtain a corrected original image, and performing universal copying recognition on the corrected original image through a universal copying recognition model to obtain a third copying recognition result;
and the analysis module, used for determining an identity card copying recognition result according to the first copying recognition result, the second copying recognition result and the third copying recognition result, wherein the identity card copying recognition result is used for indicating whether the original image is a copy.
9. An electronic device, comprising a memory and a processor, wherein the memory stores program instructions which, when read and executed by the processor, perform the steps of the method according to any one of claims 1 to 7.
10. A storage medium having computer program instructions stored thereon, wherein the computer program instructions, when read and executed by a processor, perform the steps of the method according to any one of claims 1 to 7.
CN202011558266.XA 2020-12-24 2020-12-24 Identity card copying and identifying method and device, electronic equipment and storage medium Active CN112580621B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011558266.XA CN112580621B (en) 2020-12-24 2020-12-24 Identity card copying and identifying method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112580621A true CN112580621A (en) 2021-03-30
CN112580621B (en) 2022-04-29

Family

ID=75140634

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011558266.XA Active CN112580621B (en) 2020-12-24 2020-12-24 Identity card copying and identifying method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112580621B (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108460649A (en) * 2017-02-22 2018-08-28 阿里巴巴集团控股有限公司 A kind of image-recognizing method and device
WO2018166525A1 (en) * 2017-03-16 2018-09-20 北京市商汤科技开发有限公司 Human face anti-counterfeit detection method and system, electronic device, program and medium
CN106991451A (en) * 2017-04-14 2017-07-28 武汉神目信息技术有限公司 A kind of identifying system and method for certificate picture
CN109325933A (en) * 2017-07-28 2019-02-12 阿里巴巴集团控股有限公司 A kind of reproduction image-recognizing method and device
CN108549836A (en) * 2018-03-09 2018-09-18 通号通信信息集团有限公司 Photo reproduction detection method, device, equipment and readable storage medium
WO2019218621A1 (en) * 2018-05-18 2019-11-21 北京市商汤科技开发有限公司 Detection method for living being, device, electronic apparatus, and storage medium
WO2020147445A1 (en) * 2019-01-16 2020-07-23 深圳壹账通智能科技有限公司 Rephotographed image recognition method and apparatus, computer device, and computer-readable storage medium
US20200364820A1 (en) * 2019-05-16 2020-11-19 Beijing Xiaomi Mobile Software Co., Ltd. Image management method and apparatus, and storage medium
CN111008651A (en) * 2019-11-13 2020-04-14 科大国创软件股份有限公司 Image reproduction detection method based on multi-feature fusion
CN111275685A (en) * 2020-01-20 2020-06-12 中国平安人寿保险股份有限公司 Method, device, equipment and medium for identifying copied image of identity document
CN111476268A (en) * 2020-03-04 2020-07-31 中国平安人寿保险股份有限公司 Method, device, equipment and medium for training reproduction recognition model and image recognition
CN111368944A (en) * 2020-05-27 2020-07-03 支付宝(杭州)信息技术有限公司 Method and device for recognizing copied image and certificate photo and training model and electronic equipment
CN111767828A (en) * 2020-06-28 2020-10-13 京东数字科技控股有限公司 Certificate image copying and identifying method and device, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
尹京 et al.: "Digital recaptured image forensics algorithm", Journal of Sun Yat-sen University (Natural Science Edition) *
李正浩 et al.: "Image authenticity identification based on matching technology", Chinese Journal of Scientific Instrument *
谢心谦 et al.: "Image recapture detection based on deep learning", Computer Knowledge and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113033530A (en) * 2021-05-31 2021-06-25 成都新希望金融信息有限公司 Certificate copying detection method and device, electronic equipment and readable storage medium
CN113033530B (en) * 2021-05-31 2022-02-22 成都新希望金融信息有限公司 Certificate copying detection method and device, electronic equipment and readable storage medium

Also Published As

Publication number Publication date
CN112580621B (en) 2022-04-29

Similar Documents

Publication Publication Date Title
US20230215197A1 (en) Systems and Methods for Detection and Localization of Image and Document Forgery
CN109829448B (en) Face recognition method, face recognition device and storage medium
WO2021196389A1 (en) Facial action unit recognition method and apparatus, electronic device, and storage medium
Gill et al. A review paper on digital image forgery detection techniques
Tokuda et al. Computer generated images vs. digital photographs: A synergetic feature and classifier combination approach
CN111832437A (en) Building drawing identification method, electronic equipment and related product
CN111626295B (en) Training method and device for license plate detection model
CN109670491A (en) Identify method, apparatus, equipment and the storage medium of facial image
CN110852311A (en) Three-dimensional human hand key point positioning method and device
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
JP7121132B2 (en) Image processing method, apparatus and electronic equipment
CN112580621B (en) Identity card copying and identifying method and device, electronic equipment and storage medium
CN108460775A A kind of genuine and counterfeit paper money recognition method and device
CN112907433B (en) Digital watermark embedding method, digital watermark extracting method, digital watermark embedding device, digital watermark extracting device, digital watermark embedding equipment and digital watermark extracting medium
EP3462378A1 (en) System and method of training a classifier for determining the category of a document
AU2020403709B2 (en) Target object identification method and apparatus
CN112800941B (en) Face anti-fraud method and system based on asymmetric auxiliary information embedded network
Kashyap et al. Robust detection of copy-move forgery based on wavelet decomposition and firefly algorithm
CN115188039A (en) Depth forgery video technology tracing method based on image frequency domain information
Zhang et al. Watermark retrieval from 3d printed objects via convolutional neural networks
CN111967579A (en) Method and apparatus for performing convolution calculation on image using convolution neural network
CN116645661B (en) Method and system for detecting duplicate prevention code
Agarwal et al. Image Forensics using Optimal Normalization in Challenging Environment
CN111597373B (en) Picture classifying method and related equipment based on convolutional neural network and connected graph
CN113158838B (en) Full-size depth map supervision-based face representation attack detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant