CN114743016A - Certificate authenticity identification method and device, electronic equipment and storage medium - Google Patents


Publication number
CN114743016A
Authority
CN
China
Legal status
Pending
Application number
CN202210397692.2A
Other languages
Chinese (zh)
Inventor
王小东
吕文勇
周智杰
朱羽
刘洪江
Current Assignee
Chengdu New Hope Finance Information Co Ltd
Original Assignee
Chengdu New Hope Finance Information Co Ltd
Priority date
Filing date
Publication date
Application filed by Chengdu New Hope Finance Information Co Ltd
Priority to CN202210397692.2A
Publication of CN114743016A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods


Abstract

The application provides a certificate authenticity identification method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring multiple frames of original images from a video stream, and transforming the multiple frames of original images to obtain a target image corresponding to each frame of original image; performing feature extraction with a neural network model based on each frame of original image and each frame of target image to obtain a plurality of features corresponding to each frame of target image; performing feature fusion on the plurality of features corresponding to each frame of target image to obtain a fused feature corresponding to each frame of target image; inputting the fused features into a trained multi-frame recognition model to obtain a target feature corresponding to each frame of target image; and classifying the target features corresponding to all the target images with the multi-frame recognition model to obtain a classification result. Because a plurality of features are extracted by the neural network model and then fused and classified, the multi-frame model can identify whether the imaged certificate is genuine or fake, and the accuracy of certificate authenticity identification is improved.

Description

Certificate authenticity identification method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of computers, in particular to a certificate authenticity identification method and device, an electronic device and a storage medium.
Background
With the development of computer and artificial-intelligence technology, more and more services are moving online and becoming digitalized, and application scenarios for certificate identification are increasingly rich. When handling various services, the authenticity of the presented certificate needs to be identified: the certificate must be the physical document itself, not a recaptured screen or printed paper.
At present, certificate authenticity is mainly identified either by extracting image features and judging authenticity with a machine-learning model, or by training a deep-learning model on a large amount of data to obtain an authenticity result. Both schemes depend on the comprehensiveness of the collected image samples. However, as the photographing capability of mobile-phone cameras improves, the resolution of pictures taken with a phone becomes higher and higher and the difference from a real identity card becomes very small, which adds difficulty to certificate authenticity identification; existing schemes therefore find it difficult to judge the authenticity of an image accurately.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method and an apparatus for identifying the authenticity of a certificate, an electronic device, and a storage medium, so as to improve the accuracy of identifying the authenticity of the certificate.
In a first aspect, an embodiment of the present application provides a method for identifying the authenticity of a certificate. The method includes: acquiring multiple frames of original images from a video stream, and transforming the multiple frames of original images respectively to obtain a target image corresponding to each frame of original image; performing feature extraction with a neural network model based on each frame of original image and the corresponding target image to obtain a plurality of features corresponding to each frame of target image; performing feature fusion on the plurality of features corresponding to each frame of target image to obtain a fused feature corresponding to each frame of target image; and inputting the fused feature corresponding to each frame of target image into a trained multi-frame recognition model to obtain a target feature corresponding to each frame of target image, then classifying the target features corresponding to all the target images to obtain a classification result, where the classification result is the identification result of the authenticity of the certificate to be detected.
In the embodiment of the application, multiple frames of original images are obtained from a video stream and transformed to obtain a target image corresponding to each frame of original image. Feature extraction is then performed with a neural network model based on each frame of original image and the corresponding target image to obtain a plurality of features per frame, which are fused into a fused feature. Finally, the fused features are input into a multi-frame recognition model to obtain target features, and all the target features are classified to obtain a classification result representing the authenticity of the certificate to be detected. Because the neural network model extracts the features of each frame separately and the multi-frame recognition model fuses them while exploiting the time sequence of the video stream, an authenticity result based on the whole video stream is obtained and the accuracy of certificate authenticity identification is improved.
Further, the neural network model includes a first neural network, and the target image includes a reflection image. Transforming the multiple frames of original images respectively to obtain the target image corresponding to each frame of original image includes: adjusting the screen brightness of the device photographing the certificate to a preset screen brightness, thereby obtaining a reflection image corresponding to each frame of original image. Correspondingly, performing feature extraction with the neural network model based on each frame of original image and the corresponding target image to obtain the plurality of features corresponding to each frame of target image includes: inputting each frame of original image and the corresponding reflection image into the first neural network, and performing feature extraction on them with the first neural network to obtain a first feature corresponding to each frame of target image.
In the embodiment of the application, the neural network model may include a first neural network configured to recognize characteristics such as deformation, shadow, and reflection in each frame of original image. The target image includes a reflection image obtained from the original image: the screen brightness of the device photographing the certificate is first adjusted to a preset value so that a reflection image corresponding to each frame of original image is captured; each frame of original image and its corresponding reflection image are then input into the first neural network for feature extraction, yielding a first feature for each frame of target image that represents the image's deformation, shadow, and reflection characteristics. Extracting the first feature with the first neural network improves the accuracy of certificate authenticity identification.
Further, the neural network model includes a second neural network with an attention mechanism, the original image includes a reference frame for detecting the certificate to be detected, and the target image includes a width-extended image and a height-extended image. Transforming the multiple frames of original images respectively to obtain the target image corresponding to each frame of original image includes: keeping the height of the reference frame in each frame of original image unchanged and extending the reference frame along the width direction by a preset first length to obtain the width-extended image corresponding to each frame of original image; and keeping the width of the reference frame unchanged and extending the reference frame along the height direction by a preset second length to obtain the height-extended image corresponding to each frame of original image. Correspondingly, performing feature extraction with the neural network model based on each frame of original image and the corresponding target image to obtain the plurality of features corresponding to each frame of target image includes: inputting the height-extended image and the width-extended image corresponding to each frame of original image into the second neural network, and performing feature extraction on them with the second neural network to obtain a second feature corresponding to each frame of target image.
In the embodiment of the application, the neural network model includes a second neural network used for recognizing border and moire-fringe features in each frame of original image; its attention mechanism makes the network focus more on the border and the moire fringes. Each original image includes a reference frame for detecting the certificate to be detected, and the target image includes a width-extended image and a height-extended image. First, the height of the reference frame in each frame of original image is kept unchanged and the frame is extended along the width direction by a preset first length to obtain the width-extended image; correspondingly, the frame is extended along the height direction by a preset second length to obtain the height-extended image. The two extended images are then input into the second neural network for feature extraction, yielding a second feature for each frame of target image that represents its border and moire-fringe characteristics. Extracting the second feature with the second neural network improves the accuracy of certificate authenticity identification.
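The width/height extension described above amounts to cropping a region around the reference frame, enlarged along one axis by a preset length. A minimal sketch with NumPy, assuming an `(x, y, w, h)` box convention; the helper name and the preset lengths are illustrative, not from the patent:

```python
import numpy as np

def extend_reference_frame(image, box, delta_w=0, delta_h=0):
    """Crop a region around a reference frame (x, y, w, h), extended by a
    preset length along the width and/or height direction, clipped to the
    image boundary. Hypothetical helper: the patent does not specify the
    exact preset first/second lengths, so they are parameters here."""
    x, y, w, h = box
    H, W = image.shape[:2]
    x0 = max(0, x - delta_w)          # keep height fixed, extend width...
    x1 = min(W, x + w + delta_w)
    y0 = max(0, y - delta_h)          # ...or keep width fixed, extend height
    y1 = min(H, y + h + delta_h)
    return image[y0:y1, x0:x1]

img = np.zeros((100, 200, 3), dtype=np.uint8)
width_ext = extend_reference_frame(img, (50, 20, 60, 40), delta_w=10)   # width-extended image
height_ext = extend_reference_frame(img, (50, 20, 60, 40), delta_h=10)  # height-extended image
```

Both extended crops would then be fed to the second neural network as described above.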
Further, the neural network model includes a third neural network with an attention mechanism, and the target image includes a frequency-domain (spectrum) image. Transforming the multiple frames of original images respectively to obtain the target image corresponding to each frame of original image includes: performing frequency-domain transformation on each frame of original image with a Fourier transform to obtain a frequency-domain image corresponding to each frame of original image. Correspondingly, performing feature extraction with the neural network model based on each frame of original image and the corresponding target image to obtain the plurality of features corresponding to each frame of target image includes: inputting the frequency-domain image corresponding to each frame of original image into the third neural network, and performing feature extraction on it with the third neural network to obtain a third feature corresponding to each frame of target image.
In the embodiment of the present application, the neural network model includes a third neural network used to distinguish, for each frame of original image, the characteristics of screen recapture and color printing; its attention mechanism makes the network focus more on these characteristics. The target image includes a frequency-domain image, obtained by frequency-domain transformation of each frame of original image, which is then input into the third neural network for feature extraction. The resulting third feature represents the recapture and color-printing characteristics of each frame of target image. Extracting the third feature with the third neural network improves the accuracy of certificate authenticity identification.
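The frequency-domain transform above can be sketched with NumPy's 2-D FFT: the log-magnitude spectrum, shifted so the zero frequency sits at the centre, is where periodic recapture/moire artefacts tend to show up as peaks. The function name is illustrative:

```python
import numpy as np

def frequency_domain_image(gray):
    """Fourier-transform a grayscale frame and return its centred
    log-magnitude spectrum (the frequency-domain target image)."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))
    return np.log1p(np.abs(spectrum))   # log compresses the dynamic range

gray = np.random.default_rng(0).integers(0, 256, size=(64, 64))
spec = frequency_domain_image(gray)     # same H x W as the input frame
```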
Further, the neural network model comprises a plurality of neural networks, and the neural network model determines a final loss function value according to the loss function value and the weighting parameter corresponding to each neural network in the training process.
In the embodiment of the application, the neural network model may be composed of a plurality of neural networks or, equivalently, of a plurality of neural-network branches. During training, the model determines a final loss-function value from the loss-function value and the weighting parameter corresponding to each neural network, thereby realizing training of the whole model and improving its accuracy.
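The final loss is thus a weighted combination of the per-branch losses. A minimal sketch; the weight values are illustrative hyperparameters, not taken from the patent:

```python
def total_loss(branch_losses, weights):
    """Weighted combination of per-branch loss values: the final
    loss-function value described above."""
    assert len(branch_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, branch_losses))

# e.g. losses from the reflection, border/moire and recapture branches
loss = total_loss([0.8, 0.5, 1.2], [0.3, 0.3, 0.4])
```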
Further, before transforming multiple frames of original images respectively to obtain a target image corresponding to each frame of original image, the method further includes: acquiring a reference frame included in an original image corresponding to a current frame; detecting whether the certificate to be detected exists in the reference frame by using a certificate existence detection algorithm; if so, transforming the original image corresponding to the current frame to obtain a corresponding target image; if not, detecting whether the reference frame in the original image corresponding to the next frame has the certificate to be detected by using the certificate existence detection algorithm.
In the embodiment of the application, before the multiple frames of original images are transformed into their target images, the reference frame in the original image corresponding to the current frame can be obtained, and a certificate existence detection algorithm detects whether the certificate to be detected is present in the reference frame. If it is, the original image corresponding to the current frame is transformed to obtain the corresponding target image; if not, the algorithm checks the reference frame of the original image corresponding to the next frame. Screening unqualified images out of the multiple frames in advance and moving directly to the next original image speeds up certificate authenticity identification.
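The screening loop can be sketched as below. `has_certificate` stands in for the certificate existence detection algorithm, which the patent does not specify; any per-frame predicate can be plugged in:

```python
def screen_frames(frames, has_certificate):
    """Keep only frames whose reference frame contains a certificate;
    frames that fail the check are skipped and the next frame is
    examined, as described above."""
    kept = []
    for frame in frames:
        if has_certificate(frame):
            kept.append(frame)   # will later be transformed into a target image
        # otherwise fall through to the next frame
    return kept

frames = [1, 2, 3, 4]                          # placeholder frame objects
kept = screen_frames(frames, lambda f: f % 2 == 0)
```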
Further, before transforming multiple frames of original images respectively to obtain a target image corresponding to each frame of original image, the method further includes: acquiring angle information of each frame of original image; judging whether the angle information of each frame of original image accords with a preset reference angle value by using a certificate direction detection algorithm; if so, transforming the original image corresponding to the current frame to obtain a corresponding target image; and if not, rotating the original image to enable the angle of the original image to accord with the reference angle value.
In the embodiment of the application, before the multiple frames of original images are transformed into their target images, the angle information of each frame of original image can be obtained, and a certificate direction detection algorithm judges whether it conforms to a preset reference angle value. If it does, the original image corresponding to the current frame is transformed to obtain the corresponding target image; if not, the original image is rotated until its angle conforms to the reference angle value. Detecting the direction of the original image and rotating images that do not meet the requirement prevents certificate-angle problems from affecting the correctness of authenticity identification.
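A sketch of the rotation step, under the assumption that the detected angle is a multiple of 90 degrees (the common case for sideways or upside-down captures), so `np.rot90` gives a lossless rotation; arbitrary-angle rotation would need an interpolating warp instead:

```python
import numpy as np

def rotate_to_reference(image, angle, reference_angle=0):
    """Rotate an image so its angle matches the preset reference value."""
    delta = (reference_angle - angle) % 360
    if delta % 90 != 0:
        raise ValueError("non-right-angle rotation not handled in this sketch")
    return np.rot90(image, k=delta // 90)

img = np.zeros((100, 50))                     # sideways (portrait) capture
fixed = rotate_to_reference(img, angle=90)    # rotated back to the reference angle
```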
Further, the method further includes: acquiring a sample set to be trained; determining the reference-frame proportion matched by the certificate image to be detected in each sample of the sample set; determining, from those reference-frame proportions, the share of samples corresponding to each proportion in the sample set; and determining a preset number of standard reference-frame proportions according to the share of samples corresponding to each proportion.
In the embodiment of the application, before training, the sample set to be trained is obtained, the reference-frame proportion matched by each certificate image is determined from the sample set, the share of samples corresponding to each proportion is computed, and finally a preset number of standard reference-frame proportions is selected from these shares. Analyzing the sample set to obtain several reference-frame proportions and fixing the standard proportions before training improves the recognition speed of the neural network model.
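Selecting the standard reference-frame proportions can be sketched as a frequency count over the measured aspect ratios. An illustrative sketch: the rounding precision, the helper name, and the sample ratios are assumptions, not from the patent:

```python
from collections import Counter

def standard_ratios(sample_ratios, k=3, precision=2):
    """Pick the k most common reference-frame aspect ratios in the
    training sample set, to serve as the preset standard reference
    frames. Ratios are rounded first so near-identical boxes fall
    into the same bucket."""
    counts = Counter(round(r, precision) for r in sample_ratios)
    return [ratio for ratio, _ in counts.most_common(k)]

# width/height ratios measured from a hypothetical sample set
ratios = [1.58, 1.58, 1.58, 1.33, 1.33, 1.78]
std = standard_ratios(ratios, k=2)
```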
Further, before transforming multiple frames of original images respectively to obtain a target image corresponding to each frame of original image, the method further includes: and performing data annotation on each frame of original image, and adding a noise component into each frame of original image to realize data enhancement of the original image, wherein the noise component comprises at least one of Gaussian noise and salt and pepper noise.
In the embodiment of the application, before feature extraction is performed on each frame of original image with the neural network model, data annotation can be performed on each frame, and a noise component including at least one of Gaussian noise and salt-and-pepper noise can be added to realize data enhancement. Annotating the data and adding noise components enriches the original-image data and thereby facilitates certificate authenticity identification.
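The two noise components named above can be sketched with NumPy for a grayscale frame; the noise strength and pixel fraction are illustrative parameters:

```python
import numpy as np

def add_noise(image, gaussian_sigma=8.0, sp_fraction=0.01, seed=0):
    """Data enhancement: add Gaussian noise, then flip a random
    fraction of pixels to 0 or 255 (salt-and-pepper noise)."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(np.float64) + rng.normal(0, gaussian_sigma, image.shape)
    noisy = np.clip(noisy, 0, 255)
    mask = rng.random(image.shape) < sp_fraction   # salt-and-pepper locations
    noisy[mask] = rng.choice([0, 255], size=mask.sum())
    return noisy.astype(np.uint8)

img = np.full((32, 32), 128, dtype=np.uint8)   # flat grayscale frame
noisy = add_noise(img)
```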
In a second aspect, an embodiment of the present application provides a certificate authenticity identification device. The device includes: an acquisition module for acquiring multiple frames of original images from a video stream and transforming them respectively to obtain a target image corresponding to each frame of original image; an extraction module for performing feature extraction with a neural network model based on each frame of original image and the corresponding target image to obtain a plurality of features corresponding to each frame of target image; a fusion module for performing feature fusion on the plurality of features corresponding to each frame of target image to obtain a fused feature corresponding to each frame of target image; and an identification module for inputting the fused feature corresponding to each frame of target image into a trained multi-frame recognition model to obtain a target feature corresponding to each frame of target image, and classifying the target features corresponding to all the target images to obtain a classification result, where the classification result is the identification result of the authenticity of the certificate to be detected.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a bus, where the processor and the memory communicate with each other through the bus;
the memory stores program instructions executable by the processor, the processor being capable of performing the method of the first aspect when invoked by the program instructions.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, including:
the computer readable storage medium stores computer instructions which cause the computer to perform the method of the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be considered limiting of its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a certificate authenticity identification method according to an embodiment of the present application;
fig. 2 is a schematic view of an image recognition process of a multi-frame recognition model according to an embodiment of the present application;
fig. 3 is a schematic diagram of a target image of an image of different types of identification cards after fourier transform according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for identifying authenticity of an identification card according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating an image recognition process of a neural network model according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a certificate authenticity identification device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
Fig. 1 is a schematic flowchart of a certificate authenticity identification method provided in an embodiment of the present application. As shown in Fig. 1, the method may be applied to a server; the server may specifically be an application server or a Web server. The method includes the following steps:
step 101: acquiring multiple frames of original images in a video stream, and respectively transforming the multiple frames of original images to obtain a target image corresponding to each frame of original image.
The video stream may be collected by a terminal device and then sent to the server, where the terminal device may be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or the like. The video stream can be acquired automatically through streaming-media technology, so that the certificate is scanned automatically in real time, comprehensive and complete certificate information is collected, and the authenticity of the certificate video stream is ensured.
The server parses the video stream by reading it with the VideoCapture class in OpenCV, obtaining multiple frames of original images. An original image is an image containing the certificate to be detected, whose authenticity needs to be identified; the certificate may be a valid document such as an identity card, a driving license, or a social-security card, and its type is not limited in the present application. The original images form an image list in the order of the video stream and are transformed to obtain the target images, which are then input into the neural network model for feature extraction. The transformation may, for example, be a brightness transformation, a border transformation, or a frequency-domain transformation: the server processes each frame in a targeted way, for instance performing a frequency-domain transformation via a Fourier transform so that recapture and color-printing features in the original image become more obvious, even to the naked eye. The neural network model can then extract more accurate features from the processed images, improving the accuracy of authenticity identification.
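Decoding itself happens through OpenCV's `cv2.VideoCapture` as noted above; to keep this sketch self-contained, only the frame-selection logic is shown as code, with the OpenCV calls indicated in comments. The function name and sampling strategy are illustrative assumptions:

```python
def sample_frame_indices(total_frames, num_samples):
    """Choose which frames of a decoded video stream to keep as the
    multiple original images: evenly spaced indices, in stream order.

    In practice the frames come from OpenCV, roughly:
        cap = cv2.VideoCapture("stream.mp4")
        ok, frame = cap.read()   # repeated once per frame
    """
    if num_samples >= total_frames:
        return list(range(total_frames))
    step = total_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

indices = sample_frame_indices(100, 5)   # which of 100 decoded frames to keep
```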
Step 102: and respectively extracting features based on each frame of original image and the corresponding target image by using a neural network model to obtain a plurality of features corresponding to each frame of target image.
Certificate detection belongs to the object-detection category, so the neural network model can be built on an SSD detection framework or a Yolo detection framework, for example on a Yolov5 base framework. The neural network model may include several neural networks that extract different features, or it may be a single network with several branches, where each frame of target image is fed to the different branches to obtain the corresponding features. The neural network model is a single-frame image-recognition model, and its specific structure is not limited in the present application. The plurality of features the server extracts from the target image correspond to valid information about several aspects of the image.
Step 103: and performing feature fusion on a plurality of features corresponding to each frame of target image to obtain fusion features corresponding to each frame of target image.
Considering that a model relying on a single type of feature has insufficient recognition capability, the server performs feature fusion on the plurality of features extracted from each frame of target image, obtaining a fused feature per frame that carries valid information about several aspects of the image and further improving recognition accuracy. The fusion may be performed by connection: the extracted features are concatenated and converted into a one-dimensional vector. For example, the neural network model may extract three feature vectors: a reflection feature vector of size 5 x 512, a border feature vector of size 10 x 256, and a recapture feature vector of size 10 x 256.
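Fusion by connection with the three example sizes quoted above can be sketched directly: flatten each feature and concatenate into one one-dimensional vector. The zero-filled arrays stand in for real per-frame features:

```python
import numpy as np

# Per-frame features with the sizes quoted above: a 5 x 512 reflection
# feature, a 10 x 256 border feature, and a 10 x 256 recapture feature.
reflection = np.zeros((5, 512))
border = np.zeros((10, 256))
recapture = np.zeros((10, 256))

# Fusion by connection: flatten each feature and concatenate them into
# a single one-dimensional fused-feature vector.
fused = np.concatenate([reflection.ravel(), border.ravel(), recapture.ravel()])
```

Each of the three parts flattens to 2560 values, so the fused vector has 7680 elements per frame.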
Step 104: inputting the fusion characteristics corresponding to each frame of target image into a trained multi-frame recognition model, obtaining the target characteristics corresponding to each frame of target image, classifying the target characteristics corresponding to all the target images respectively, and obtaining a classification result, wherein the classification result is a recognition result of the authenticity of the certificate to be detected.
Fig. 2 is a schematic diagram of an image recognition process of a multi-frame recognition model provided in an embodiment of the present application. As shown in fig. 2, taking the authenticity recognition of an identity card as an example, the fusion features corresponding to each obtained frame of target image are sorted according to the frame stream sequence, from the first frame of identity card features and the second frame of identity card features up to the last frame of identity card features. The multi-frame recognition model may be a Long Short-Term Memory (LSTM) deep learning model, for example a 2-layer LSTM model. The server performs secondary feature extraction on the basis of the obtained fusion features to obtain a target feature corresponding to each frame of target image, where the target features may be a group of feature vectors arranged according to the frame stream sequence of the original images. A specific implementation may be: the server inputs the fusion features obtained by fusion into the 2-layer LSTM model for feature extraction, and records the obtained feature as F_i, where i is the time sequence number of each frame of original image.
The multi-frame recognition model also comprises a classifier, which may be a SoftMax classifier capable of enlarging the difference between scores. Passing the obtained features F_i through the SoftMax classifier further enlarges the score differences and improves the recognition effect of certificate authenticity classification. The server obtains a classification result by utilizing the time sequence relation among the frames of target images; the classification result can represent the identification result of the authenticity of the certificate to be detected. In this way, authenticity analysis of the certificate over multiple frames of a video stream is realized, which further improves the identification accuracy of certificate authenticity compared with the insufficient accuracy of single-frame image recognition.
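Only the SoftMax stage is illustrated below; the 2-layer LSTM that produces the features F_i is omitted, and this is a generic numerically stable SoftMax rather than the embodiment's trained classifier. The exponentiation is what widens the gap between raw scores:

```python
import numpy as np

def softmax(scores):
    """Numerically stable SoftMax; subtracting the max avoids overflow,
    and exponentiation enlarges the relative difference between scores."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# A raw score gap of 2:1 becomes roughly 0.73:0.27 after SoftMax.
probs = softmax(np.array([2.0, 1.0]))
```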
The neural network model may include a plurality of neural networks or a plurality of branch models for extracting different features. For example, it may include a first neural network, which may specifically be a Resnet34 residual network model. Taking identity card recognition as an example, the first neural network may be used to identify the differences in deformation, light shadow and reflection between true and false certificates; the original image is denoted id_orign. In the process of extracting features with the neural network model, the screen brightness of the device photographing the certificate is first adjusted to a preset screen brightness, which may be the maximum brightness of the device or 90% of the maximum brightness, and the reflection image obtained after the brightness transformation is input, as a target image, into the first neural network for feature extraction.
Because the material of the identity card itself has a certain degree of reflectivity, and, if what is photographed is an electronic screen, the screen also reflects light, these two kinds of reflection, when combined with real natural light, lead to a large difference in information between a photo of a real identity card and a photo of a reproduced identity card. In order to strengthen this difference for authenticity recognition, the screen brightness can be changed automatically when the identity card is scanned: the screen brightness is adjusted to the preset brightness for real-time scanning, so that a real shot of an identity card differs greatly from a reproduction. The reflection image obtained in this way is denoted id_light. Then, each frame of original image id_orign and the corresponding reflection image id_light are input into the first neural network; the obtained first feature has a size of 5 × 512 and is denoted feature_all, which addresses the recognition of deformation, light shadow and reflection at the image recognition level.
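The brightness cue can be imitated offline only approximately. The sketch below is an assumption, not the patent's method: the embodiment raises the capture device's screen brightness at scan time, which cannot be reproduced after the fact, so a global intensity gain stands in for the brightened capture; both the function name and the gain model are hypothetical.

```python
import numpy as np

def simulate_reflection_image(frame, brightness=0.9):
    """Crude stand-in for capturing at a preset screen brightness
    (e.g. 90% of maximum): model the extra illumination as a global
    gain, which exaggerates glare on reflective material."""
    gain = 1.0 + brightness
    bright = frame.astype(np.float32) * gain
    return np.clip(bright, 0, 255).astype(np.uint8)

id_light = simulate_reflection_image(np.full((8, 8), 180, dtype=np.uint8))
```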
In some implementations, the neural network model may further include a second neural network, which may also exist as a branch of the neural network model, for example a Resnet50+SE residual network model. The second neural network may be used to identify the differences in borders and moire patterns between true and false certificate images; SE is an attention mechanism that makes the second neural network focus more on the recognition of borders and moire patterns. In this case, the target image includes the width-extended image and the height-extended image obtained by transforming the reference frame.
During scanning of a certificate, differences in borders are generally easy to find along the width of an image. In order to identify border information, the image is subjected to a width-level transformation. First, the height of the reference frame in each frame of original image is kept unchanged, where the reference frame is the bounding box of the identity card image in the original image, and the reference frame is extended along the width direction by a preset first length to obtain a border image; the preset first length may be the border length in the width direction outside the reference frame. For example, let the image width and image height of each frame of original image be image_width and image_height respectively, and let the four corner coordinates of the certificate reference frame be (X1, Y1), (X2, Y2), (X3, Y3), (X4, Y4). Keeping the height coordinates of the identity card detection frame unchanged, the width coordinates are set to X1 = 0, X2 = image_width, X3 = image_width, X4 = 0, and the identity card is cropped from the original image using these coordinate points; the result is denoted id_width, and the width-extended image is obtained according to the transformed four corner coordinates.
Then, differences in light shadow and light spots are generally easy to find in the height information of the image. In order to effectively identify light shadow or light spots, a height-level transformation is performed on the image: the server may keep the width of the certificate image unchanged and extend the reference frame along the height direction by a preset second length to obtain a border image, giving the height-extended image; the preset second length may be the border length in the height direction outside the reference frame. For example, the height coordinates may be set to Y1 = 0, Y2 = 0, Y3 = image_height, Y4 = image_height, and a cropping operation is performed on the original image using these coordinate points; the result is denoted id_height, the height-extended image.
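The two coordinate transforms above reduce to simple corner arithmetic. The sketch below assumes the corner order (top-left, top-right, bottom-right, bottom-left); the helper name is illustrative:

```python
def extend_corners(corners, image_width, image_height, axis):
    """Extend a quadrilateral reference frame to the full image extent
    along one axis, keeping the coordinates of the other axis unchanged."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = corners
    if axis == "width":   # id_width: keep heights, stretch to full image width
        return [(0, y1), (image_width, y2), (image_width, y3), (0, y4)]
    if axis == "height":  # id_height: keep widths, stretch to full image height
        return [(x1, 0), (x2, 0), (x3, image_height), (x4, image_height)]
    raise ValueError("axis must be 'width' or 'height'")
```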
After obtaining the height-extended image and the width-extended image corresponding to each frame of original image, the server may perform feature extraction on the width image id_width and the height image id_height using Resnet50+SE; the extracted feature has a size of 10 × 256 and is denoted feature_wh, which addresses the authenticity recognition of images whose borders and moire fringes are obvious.
In some implementations, the neural network model may further include a third neural network, which may also be a branch structure of the neural network model, for example a Resnet50+SE residual network model. The third neural network may be used to identify the differences in reproduction (re-photographing) and color printing between true and false certificate images; SE is an attention mechanism that makes the third neural network focus more on the recognition of reproduction and color printing.
Taking the authenticity identification of identity cards as an example, fig. 3 is a schematic diagram of target images of different types of identity cards after Fourier transform provided by the embodiment of the present application, showing three types of transformed identity card images: the transformed image of a true identity card, the transformed image of a color-printed identity card, and the transformed image of a re-photographed identity card, in that order. After Fourier transform, the images of different types of identity cards exhibit a number of information differences visible to the naked eye, such as information on deformation, light shadow, reflection, borders, moire patterns and other aspects. True and false identity cards perform differently at the image frequency-domain level, because the frequency level represents the degree of change of the image gray scale in the frequency domain. For reflection, there are generally more light spots than in a normal identity card image; for color printing, the frequency distribution of the image is unbalanced where the paper is white. In the frequency domain, the distribution of a real identity card displays a center that disperses uniformly toward the periphery, whereas the re-photographed and printed types develop along the horizontal and vertical directions with curved lines. The server can use a Fourier transform to perform spectrogram conversion on the image, converting it into a frequency-domain image used for identity card identification and denoted id_mfcc. The frequency-domain image obtained by this spectrogram conversion makes the differences of reproduction and color printing from a real identity card more obviously perceptible, so that the naked eye can distinguish true from false, and improves the accuracy of identity card authenticity identification.
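A hedged sketch of the spectrogram conversion: a centered log-magnitude spectrum computed with NumPy's FFT is one common way to obtain such a frequency-domain image. The exact transform used in the embodiment is not specified; the variable name id_mfcc follows the text.

```python
import numpy as np

def to_frequency_domain(gray_image):
    """Convert a grayscale image to a centered log-magnitude spectrum:
    2-D FFT, shift the zero frequency to the center, then log-compress."""
    spectrum = np.fft.fftshift(np.fft.fft2(gray_image))
    return np.log1p(np.abs(spectrum))

id_mfcc = to_frequency_domain(np.random.rand(64, 64))
```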
Then, for the recognition of reproduction and color printing, the original image is transformed into a frequency-domain image by Fourier transform, and feature extraction is performed on the identity card frequency-domain image id_mfcc using the Resnet50+SE network model; the extracted feature has a size of 10 × 256 and is denoted feature_light as the third feature, which addresses the authenticity recognition of images in terms of reproduction and color printing.
Since the neural network model may include a plurality of neural networks or network branches, during training the neural network model determines a final loss function value according to the loss function value and the weighting parameter corresponding to each neural network. Fig. 4 is a schematic flow chart of an identity card authenticity identification method provided in the embodiment of the present application. As shown in fig. 4, for example, the neural network model may include three neural networks: the first neural network Resnet34 may use cross entropy as its loss function, denoted Loss_all; the second neural network Resnet50+SE uses Softmax loss, denoted Loss_sm; and the third neural network Resnet50+SE uses L2 loss, denoted Loss_l2. The model and loss function used by each neural network may be further adjusted according to the recognition scenario. For the training of the neural network model, the overall loss function is a weighting of the three loss functions, that is: Loss = w1 × Loss_all + w2 × Loss_sm + w3 × Loss_l2, where the weighting parameters w1, w2 and w3 in front of the loss functions can also be obtained by model training.
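The weighted overall loss follows directly from the weighting of the three branch losses. The default weights below are placeholders, since the text notes the weighting parameters may themselves be tuned or learned:

```python
def total_loss(loss_all, loss_sm, loss_l2, w1=1.0, w2=1.0, w3=1.0):
    """Overall training loss: weighted sum of the cross-entropy,
    Softmax and L2 branch losses (weights are placeholders)."""
    return w1 * loss_all + w2 * loss_sm + w3 * loss_l2
```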
Before the multiple frames of original images are respectively transformed to obtain the target image corresponding to each frame of original image, two pre-screening modes may be applied: one uses a certificate existence detection algorithm to detect whether the certificate to be detected exists in the original image, and the other uses a certificate direction detection algorithm to detect whether the angle of the certificate to be detected meets the requirement. The first pre-screening mode may be: images without a certificate are screened out first. Specifically, the reference frame included in the original image corresponding to the current frame is obtained, and the authenticity identification can continue only if the certificate to be detected is within the reference frame. The server can detect whether the certificate to be detected exists in the reference frame using a deployed certificate existence detection algorithm; if so, the original image corresponding to the current frame is transformed to obtain the target image; if not, the original image can be discarded, and the next original image is detected with the certificate existence detection algorithm. Screening out original images without the certificate to be detected in advance increases the speed of certificate authenticity identification.
In the other mode of pre-screening the original images, angle information of each frame of original image is obtained first, and then a certificate direction detection algorithm is used to judge whether the angle information of each frame of original image conforms to a preset reference angle value. If so, the original image corresponding to the current frame is transformed to obtain the target image; if not, the original image is rotated so that its angle conforms to the preset reference angle value. For example, in identity card direction detection, the preset reference angle value may be set such that the portrait side and the national emblem side of each frame of image face downward, denoted as 0 degrees; if the card is rotated by a certain angle, it can be rotated back to 0 degrees, so that the identity card original images are aligned, ensuring the validity of video stream feature extraction.
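The rotation-back step can be sketched as follows, under the assumption (not stated in the text) that the direction detector reports counter-clockwise angles in multiples of 90 degrees; the function name is illustrative:

```python
import numpy as np

def align_to_reference(image, detected_angle):
    """Rotate an image back to the 0-degree reference orientation by
    undoing the detected counter-clockwise rotation in quarter-turns."""
    k = (-detected_angle // 90) % 4   # CCW quarter-turns needed to undo
    return np.rot90(image, k=k)
```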
Fig. 5 is a schematic diagram of an image recognition process of a neural network model according to an embodiment of the present application, and as shown in fig. 5, a process of identifying identity cards is specifically taken as an example, and a specific implementation manner may be:
step 501: the server receives and analyzes the video stream from the user equipment to obtain a plurality of frames of original images; the video stream comprises a certificate image to be detected shot by a user.
Step 502: the server detects whether the identity card exists in each frame of original image by using a certificate existence algorithm. For the embodiment of detecting the identity card by using the certificate existence algorithm, reference may be made to the certificate existence detection method in the embodiment corresponding to fig. 1.
Step 503: and the server detects whether the angle of the image to be detected in each frame of original image accords with a preset angle or not by using a certificate direction detection algorithm. The server detects the angle of the image to be detected in the original image by referring to the certificate direction detection method provided by the embodiment corresponding to fig. 1.
Step 504: and the server utilizes the neural network model to extract the characteristics of the single-frame identity card. The target image is obtained after each frame of identity card image is transformed, the neural network model is utilized to extract the features of the target image, so that a plurality of features of a single frame of identity card can be obtained, and the specific neural network model extraction features can refer to the method for extracting the features by the neural network model provided by the embodiment corresponding to fig. 1.
Step 505: on the basis of the single-frame identity card feature extraction, the server inputs the fused features into the multi-frame recognition model for classification to obtain an authenticity recognition result. For the method of obtaining the authenticity recognition result through multi-frame recognition model classification, reference may be made to the feature fusion method and the classification method provided by the embodiment corresponding to fig. 1.
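Steps 501 to 505 can be glued together as in the following hypothetical sketch, with every stage passed in as a callable; all names and signatures here are illustrative stubs, not the embodiment's interfaces:

```python
def identify_document(frames, has_document, angle_ok,
                      extract_features, fuse, classify_sequence):
    """Hypothetical pipeline for steps 501-505: drop frames that fail
    the existence or direction check, extract and fuse per-frame
    features, then classify the fused-feature sequence."""
    fused = [fuse(extract_features(f))
             for f in frames
             if has_document(f) and angle_ok(f)]
    return classify_sequence(fused)
```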
Before certificate authenticity recognition, the ratios of the Anchor reference frames need to be designed a priori. The design of the Anchor reference-frame ratios has a large influence on identity card detection precision and performance, and the smaller the number of reference frames, the higher the detection speed. First, a sample set to be trained is obtained, and the certificate reference frame corresponding to each sample is annotated; second, the aspect ratio of the certificate is calculated based on the four points of the reference frame; third, a distribution diagram of the aspect ratios is drawn, and the sample amount of each ratio is observed; finally, the aspect ratios that account for a large proportion of the samples are taken as the standard reference-frame ratios, the main reference-frame ratios being 1:1.2, 1:1.4 and 1:1.6.
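The "count ratios, keep the dominant ones" step can be sketched as simple frequency counting. This is a minimal illustration; the rounding precision and helper name are assumptions:

```python
from collections import Counter

def dominant_ratios(aspect_ratios, top_k=3, precision=1):
    """Round each labeled width:height aspect ratio, count occurrences,
    and keep the top_k most frequent values as anchor ratios."""
    rounded = [round(r, precision) for r in aspect_ratios]
    return [r for r, _ in Counter(rounded).most_common(top_k)]
```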
Before the server uses the neural network to extract features, data annotation can be performed on each frame of original image, and a noise component can be added to each frame of original image to realize data enhancement of the original images. The noise component may include at least one of Gaussian noise and salt-and-pepper noise, so that recognition remains robust to original images of poor definition captured under bad network conditions, improving the identification accuracy of certificate authenticity.
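Salt-and-pepper augmentation is straightforward to sketch; the fraction of flipped pixels and the fixed seed below are illustrative choices:

```python
import numpy as np

def add_salt_pepper(image, amount=0.05, seed=0):
    """Flip a random fraction of pixels to 0 (pepper) or 255 (salt)
    as a simple data-enhancement step for training images."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape) < amount
    noisy[mask] = rng.choice(np.array([0, 255], dtype=image.dtype),
                             size=int(mask.sum()))
    return noisy
```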
Fig. 6 is a schematic structural diagram of a certificate authenticity identification device provided in an embodiment of the present application, where the device may be a module, a program segment, or code on an electronic device. It should be understood that the device corresponds to the above-mentioned method embodiment of fig. 1 and can perform the various steps involved in that embodiment; for its specific functions, reference may be made to the description above, and detailed description is appropriately omitted here to avoid redundancy. The embodiment of the present application provides a certificate authenticity identification device, and the device includes:
the acquisition module 601 is configured to acquire multiple frames of original images in a video stream, and respectively transform the multiple frames of original images to acquire a target image corresponding to each frame of original image;
an extraction module 602, configured to perform feature extraction based on each frame of original image and a corresponding target image respectively by using a neural network model, to obtain a plurality of features corresponding to each frame of target image;
a fusion module 603, configured to perform feature fusion on multiple features corresponding to each frame of target image to obtain a fusion feature corresponding to each frame of target image;
the identification module 604 is configured to input the fusion features corresponding to each frame of target image into the trained multi-frame identification model, obtain target features corresponding to each frame of target image, classify the target features corresponding to all the target images, and obtain a classification result, where the classification result is an identification result of whether the to-be-detected certificate is true or false.
On the basis of the above embodiment, the neural network model includes a first neural network, and the target image includes a reflex image.
On the basis of the foregoing embodiment, the obtaining module 601 is specifically configured to:
adjusting the screen brightness of equipment for shooting the certificate to be preset screen brightness to obtain a light reflection image corresponding to each frame of target image;
correspondingly, the extracting module 602 is specifically configured to:
inputting each frame of original image and the corresponding reflection image into the first neural network;
and performing feature extraction on each frame of original image and the corresponding reflection image by using the first neural network to obtain a first feature corresponding to each frame of original image.
On the basis of the embodiment, the neural network model comprises a second neural network with an attention mechanism, the original image comprises a reference frame for detecting the certificate to be detected, and the target image comprises a width extension image and a height extension image.
On the basis of the foregoing embodiment, the obtaining module 601 is specifically configured to:
keeping the height of the reference frame in each frame of original image unchanged, and extending the reference frame along the width direction by a preset first length to obtain a border image, thereby obtaining a width-extended image corresponding to each frame of original image;
keeping the width of the reference frame in each frame of original image unchanged, and extending the reference frame along the height direction by a preset second length to obtain a border image, thereby obtaining a height-extended image corresponding to each frame of original image;
correspondingly, the extracting module 602 is specifically configured to:
inputting the height extension image and the width extension image corresponding to each frame of original image into the second neural network;
and performing feature extraction on the height extension image and the width extension image corresponding to each frame by using the second neural network to obtain second features corresponding to each frame of target image.
On the basis of the above embodiment, the neural network model includes a third neural network with an attention mechanism, and the target image includes a spectral image.
On the basis of the foregoing embodiment, the obtaining module 601 is specifically configured to:
performing frequency domain transformation on each frame of original image by utilizing Fourier transformation to obtain a frequency domain image corresponding to each frame of original image;
correspondingly, the extracting module 602 is specifically configured to:
inputting the frequency domain image corresponding to each frame of target image into the third neural network;
and performing feature extraction on the frequency domain image corresponding to each frame of target image by using a third neural network to obtain a third feature corresponding to each frame of target image.
On the basis of the above embodiment, the neural network model includes a plurality of neural networks, and the neural network model determines a final loss function value according to the loss function value and the weighting parameter corresponding to each neural network in the training process.
On the basis of the above embodiment, the apparatus further includes a certificate detection module configured to:
acquiring a reference frame included in an original image corresponding to a current frame;
detecting whether the certificate to be detected exists in the reference frame by using a certificate existence detection algorithm;
if yes, converting the original image corresponding to the current frame to obtain a corresponding target image;
if not, detecting whether the reference frame in the original image corresponding to the next frame has the certificate to be detected by using the certificate existence detection algorithm.
On the basis of the above embodiment, the apparatus further includes a direction detection module configured to:
acquiring angle information of each frame of original image;
judging whether the angle information of each frame of original image accords with a preset reference angle value by using a certificate direction detection algorithm;
if so, transforming the original image corresponding to the current frame to obtain a corresponding target image;
and if not, rotating the original image to enable the angle of the original image to accord with the reference angle value.
On the basis of the above embodiment, the apparatus further includes a training module configured to:
acquiring a sample set to be trained;
determining the reference frame ratio that the certificate image to be detected included in each sample of the sample set to be trained conforms to;
determining, according to the reference frame ratio that each certificate image to be detected conforms to, the proportion of samples corresponding to each reference frame ratio in the sample set to be trained;
and determining the proportion of the standard reference frames with the preset number according to the proportion of the sample corresponding to the proportion of each reference frame in the sample set to be trained.
On the basis of the above embodiment, the apparatus further includes a data enhancement module configured to:
and performing data annotation on each frame of original image, and adding a noise component into each frame of original image to realize data enhancement of the original image, wherein the noise component comprises at least one of Gaussian noise and salt and pepper noise.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 7, the electronic device includes: a processor (processor) 701, a memory (memory) 702, and a bus 703; wherein,
the processor 701 and the memory 702 complete interaction with each other through the bus 703;
the processor 701 is configured to call the program instructions in the memory 702 to execute the methods provided by the above-described method embodiments.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The Processor 701 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components, which may implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The Memory 702 may include, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The present embodiment discloses a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions, which when executed by a computer, enable the computer to execute the certificate authenticity identification method provided by the above-mentioned method embodiments.
The embodiment provides a computer-readable storage medium, which stores computer instructions, and the computer instructions enable the computer to execute the certificate authenticity identification method provided by the above method embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or mutual connection may be an indirect coupling or mutual connection of devices or units through some mutual interfaces, and may be in an electric, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (12)

1. A method for identifying the authenticity of a certificate is characterized by comprising the following steps:
acquiring multiple frames of original images in a video stream, and respectively transforming the multiple frames of original images to obtain a target image corresponding to each frame of original image;
respectively extracting features based on each frame of original image and the corresponding target image by using a neural network model to obtain a plurality of features corresponding to each frame of target image;
performing feature fusion on a plurality of features corresponding to each frame of target image to obtain fusion features corresponding to each frame of target image;
inputting the fusion characteristics corresponding to each frame of target image into a trained multi-frame recognition model, obtaining the target characteristics corresponding to each frame of target image, classifying the target characteristics corresponding to all the target images respectively, and obtaining a classification result, wherein the classification result is a recognition result of the authenticity of the certificate to be detected.
2. The method according to claim 1, wherein the neural network model comprises a first neural network and the target image comprises a reflection image, and transforming each frame of original image to obtain the corresponding target image comprises:
adjusting the screen brightness of the device photographing the certificate to a preset screen brightness to obtain the reflection image corresponding to each frame of original image;
correspondingly, performing feature extraction with the neural network model on each frame of original image and its corresponding target image to obtain a plurality of features corresponding to each frame of target image comprises:
inputting each frame of original image and its corresponding reflection image into the first neural network;
performing feature extraction on each frame of original image and its corresponding reflection image with the first neural network to obtain a first feature corresponding to each frame of target image.
3. The method according to claim 1, wherein the neural network model comprises a second neural network with an attention mechanism, the original images comprise a reference frame for detecting the certificate to be detected, and the target images comprise a width-extended image and a height-extended image; transforming each frame of original image to obtain the corresponding target image comprises:
keeping the height of the reference frame in each frame of original image unchanged, and extending the reference frame by a preset first length along the width direction to obtain the width-extended image corresponding to each frame of original image;
keeping the width of the reference frame in each frame of original image unchanged, and extending the reference frame by a preset second length along the height direction to obtain the height-extended image corresponding to each frame of original image;
correspondingly, performing feature extraction with the neural network model on each frame of original image and its corresponding target image to obtain a plurality of features corresponding to each frame of target image comprises:
inputting the height-extended image and the width-extended image corresponding to each frame of original image into the second neural network;
performing feature extraction on the height-extended image and the width-extended image corresponding to each frame of original image with the second neural network to obtain a second feature corresponding to each frame of target image.
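A minimal sketch of the width/height extension in claim 3, assuming the reference frame is an (x, y, w, h) pixel box and the preset length is applied symmetrically on both sides of the extended axis (the claim does not specify the direction of extension, so this is one plausible reading):

```python
import numpy as np

def extend_crop(image: np.ndarray, box: tuple, extra: int, axis: str) -> np.ndarray:
    """Crop the reference frame (x, y, w, h) from the image, extended by a
    preset extra length along one axis ('width' or 'height'), clipped to
    the image boundary; the other dimension is kept unchanged."""
    x, y, w, h = box
    img_h, img_w = image.shape[:2]
    if axis == "width":
        x0, x1 = max(0, x - extra), min(img_w, x + w + extra)
        y0, y1 = y, y + h
    else:  # axis == "height"
        x0, x1 = x, x + w
        y0, y1 = max(0, y - extra), min(img_h, y + h + extra)
    return image[y0:y1, x0:x1]

img = np.arange(100).reshape(10, 10)
box = (3, 3, 4, 4)                                   # x, y, width, height
wide = extend_crop(img, box, extra=2, axis="width")  # height stays 4
tall = extend_crop(img, box, extra=2, axis="height") # width stays 4
```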
4. The method according to claim 1, wherein the neural network model comprises a third neural network with an attention mechanism and the target image comprises a frequency-domain image, and transforming each frame of original image to obtain the corresponding target image comprises:
performing a frequency-domain transformation on each frame of original image by means of the Fourier transform to obtain the frequency-domain image corresponding to each frame of original image;
correspondingly, performing feature extraction with the neural network model on each frame of original image and its corresponding target image to obtain a plurality of features corresponding to each frame of target image comprises:
inputting the frequency-domain image corresponding to each frame of original image into the third neural network;
performing feature extraction on the frequency-domain image corresponding to each frame of original image with the third neural network to obtain a third feature corresponding to each frame of target image.
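The frequency-domain transformation of claim 4 is commonly realised as a centred, log-scaled magnitude spectrum; a sketch using NumPy's FFT follows. The log scaling and the normalisation to [0, 1] are assumptions, not specified by the claim.

```python
import numpy as np

def spectrum_image(gray: np.ndarray) -> np.ndarray:
    """Frequency-domain image of one original frame: 2-D FFT, shifted so
    the zero frequency is centred, then log-scaled magnitude spectrum."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    mag = np.log1p(np.abs(f))              # log1p compresses dynamic range
    return (mag / mag.max()).astype(np.float32)  # normalise to [0, 1]

frame = np.random.default_rng(0).random((32, 32))
spec = spectrum_image(frame)
```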
5. The method according to claim 1, wherein the neural network model comprises a plurality of neural networks, and during training the neural network model determines its final loss function value from the loss function value and the weighting parameter corresponding to each neural network.
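The loss combination in claim 5 is presumably a weighted sum of the per-branch losses; a one-function sketch, where the weight values are hypothetical hyper-parameters not taken from the patent:

```python
def combined_loss(branch_losses, weights):
    """Final training loss as the weighted sum of each branch network's
    loss value, one weighting parameter per neural network."""
    assert len(branch_losses) == len(weights)
    return sum(w * l for w, l in zip(weights, branch_losses))

# Example: three branches (e.g. reflection, extension, frequency-domain).
total = combined_loss([0.8, 0.5, 0.2], [0.5, 0.3, 0.2])
```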
6. The method according to claim 1, wherein before transforming each frame of original image to obtain the corresponding target image, the method further comprises:
acquiring the reference frame included in the original image corresponding to the current frame;
detecting, with a certificate existence detection algorithm, whether the certificate to be detected is present in the reference frame;
if so, transforming the original image corresponding to the current frame to obtain the corresponding target image;
if not, detecting, with the certificate existence detection algorithm, whether the certificate to be detected is present in the reference frame of the original image corresponding to the next frame.
7. The method according to claim 1, wherein before transforming each frame of original image to obtain the corresponding target image, the method further comprises:
acquiring the angle information of each frame of original image;
judging, with a certificate direction detection algorithm, whether the angle information of each frame of original image conforms to a preset reference angle value;
if so, transforming the original image corresponding to the current frame to obtain the corresponding target image;
if not, rotating the original image so that its angle conforms to the reference angle value.
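The rotation step of claim 7 can be sketched as below, assuming detected angles are multiples of 90 degrees and using NumPy's counter-clockwise `rot90` convention; both assumptions go beyond what the claim states.

```python
import numpy as np

def align_to_reference(image: np.ndarray, angle: int, reference: int = 0) -> np.ndarray:
    """Rotate the original image in 90-degree steps until its angle
    matches the preset reference angle value."""
    steps = ((reference - angle) // 90) % 4  # number of 90-degree CCW turns
    return np.rot90(image, k=steps)

img = np.array([[1, 2], [3, 4]])
upright = align_to_reference(img, angle=90)  # one clockwise quarter turn
```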
8. The method according to any one of claims 1-7, characterized by further comprising:
acquiring a sample set to be trained;
determining the aspect ratio of the reference frame conforming to the certificate image to be detected in each sample of the sample set to be trained;
determining, for each reference-frame aspect ratio, the proportion of the corresponding samples in the sample set to be trained;
determining a preset number of standard reference-frame aspect ratios according to the proportion of the samples corresponding to each reference-frame aspect ratio in the sample set to be trained.
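One reading of claim 8 (interpreting the reference-frame "proportion" as its width-to-height aspect ratio, as in anchor selection for detection models) is to pick the most frequent ratios in the training samples as the standard reference frames. A hedged sketch under that interpretation, with hypothetical box sizes:

```python
from collections import Counter

def standard_ratios(boxes, k=3):
    """Pick the k most frequent reference-frame aspect ratios (rounded to
    one decimal place) across the training samples as the preset number
    of standard reference frames."""
    ratios = [round(w / h, 1) for w, h in boxes]
    return [r for r, _ in Counter(ratios).most_common(k)]

# Hypothetical (width, height) reference frames from training samples.
boxes = [(160, 100), (158, 100), (320, 200), (100, 100), (240, 150)]
top = standard_ratios(boxes, k=2)
```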
9. The method according to claim 1, wherein before transforming each frame of original image to obtain the corresponding target image, the method further comprises:
performing data annotation on each frame of original image, and adding a noise component to each frame of original image to achieve data enhancement of the original image, wherein the noise component comprises at least one of Gaussian noise and salt-and-pepper noise.
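The noise-based data enhancement of claim 9 can be sketched as follows for 8-bit images; the noise strength parameters (`sigma`, `amount`) are assumed hyper-parameters, not values from the patent.

```python
import numpy as np

def add_noise(image: np.ndarray, kind: str, rng=None,
              amount: float = 0.05, sigma: float = 10.0) -> np.ndarray:
    """Add Gaussian noise or salt-and-pepper noise to an 8-bit image."""
    rng = rng or np.random.default_rng()
    out = image.astype(np.float64).copy()
    if kind == "gaussian":
        out += rng.normal(0.0, sigma, image.shape)   # zero-mean Gaussian
    elif kind == "salt_pepper":
        mask = rng.random(image.shape)
        out[mask < amount / 2] = 0                   # pepper pixels
        out[mask > 1 - amount / 2] = 255             # salt pixels
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((16, 16), 128, dtype=np.uint8)
noisy_g = add_noise(img, "gaussian", rng=np.random.default_rng(0))
noisy_sp = add_noise(img, "salt_pepper", rng=np.random.default_rng(0))
```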
10. A certificate authenticity identification device, characterized by comprising:
an acquisition module, configured to acquire multiple frames of original images from a video stream and transform each frame of original image to obtain a target image corresponding to that frame;
an extraction module, configured to perform feature extraction with a neural network model on each frame of original image and its corresponding target image to obtain a plurality of features corresponding to each frame of target image;
a fusion module, configured to perform feature fusion on the plurality of features corresponding to each frame of target image to obtain a fused feature corresponding to each frame of target image;
an identification module, configured to input the fused feature corresponding to each frame of target image into a trained multi-frame recognition model to obtain a target feature corresponding to each frame of target image, and to classify the target features corresponding to all the target images to obtain a classification result, wherein the classification result is the identification result of the authenticity of the certificate to be detected.
11. An electronic device, characterized by comprising: a processor, a memory, and a bus, wherein
the processor and the memory communicate with each other through the bus; and
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any one of claims 1-9.
12. A computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-9.
CN202210397692.2A 2022-04-15 2022-04-15 Certificate authenticity identification method and device, electronic equipment and storage medium Pending CN114743016A (en)


Publications (1)

Publication Number Publication Date
CN114743016A true CN114743016A (en) 2022-07-12


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115375998A (en) * 2022-10-24 2022-11-22 成都新希望金融信息有限公司 Certificate identification method and device, electronic equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229499A (en) * 2017-10-30 2018-06-29 北京市商汤科技开发有限公司 Certificate recognition methods and device, electronic equipment and storage medium
CN111080593A (en) * 2019-12-07 2020-04-28 上海联影智能医疗科技有限公司 Image processing device, method and storage medium
CN111324874A (en) * 2020-01-21 2020-06-23 支付宝实验室(新加坡)有限公司 Certificate authenticity identification method and device
CN111738979A (en) * 2020-04-29 2020-10-02 北京易道博识科技有限公司 Automatic certificate image quality inspection method and system
WO2021208728A1 (en) * 2020-11-20 2021-10-21 平安科技(深圳)有限公司 Method and apparatus for speech endpoint detection based on neural network, device, and medium
CN113591603A (en) * 2021-07-09 2021-11-02 北京旷视科技有限公司 Certificate verification method and device, electronic equipment and storage medium
CN113837026A (en) * 2021-09-03 2021-12-24 支付宝(杭州)信息技术有限公司 Method and device for detecting authenticity of certificate
CN114220449A (en) * 2021-12-24 2022-03-22 瓴盛科技有限公司 Voice signal noise reduction processing method and device and computer readable medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination