CN110427972B - Certificate video feature extraction method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110427972B
CN110427972B
Authority
CN
China
Prior art keywords
image
certificate
frame
matching score
video
Prior art date
Legal status
Active
Application number
CN201910613629.6A
Other languages
Chinese (zh)
Other versions
CN110427972A (en)
Inventor
韩天奇
钱浩然
彭宇翔
Current Assignee
Shanghai Zhongan Information Technology Service Co ltd
Original Assignee
Zhongan Information Technology Service Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhongan Information Technology Service Co Ltd
Priority to CN201910613629.6A
Publication of CN110427972A
Application granted
Publication of CN110427972B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/48 Matching video sequences

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a certificate video feature extraction method and device, computer equipment and a storage medium, and belongs to the technical field of video processing. The method comprises the following steps: acquiring an image sequence from a video containing a certificate, the image sequence comprising at least one frame of certificate image; extracting a plurality of characteristic subgraphs of each frame of certificate image according to a plurality of anti-counterfeiting characteristics of the certificate; performing corresponding matching calculation between the characteristic subgraphs of each frame of certificate image and a plurality of preset image templates to obtain a matching score vector of each frame of certificate image with respect to the image templates; and forming a matching score matrix from the matching score vectors of the frames and mapping the matching score matrix into a video feature vector. The invention extracts certificate video features based on template matching to identify the authenticity of the certificate; the approach has good universality, accommodates multiple features including dynamic features, and is easy to extend.

Description

Certificate video feature extraction method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of video processing, in particular to a certificate video feature extraction method and device, computer equipment and a storage medium.
Background
Video classification is widely used in many fields, such as anti-fraud and the detection of violent and terrorist content. One typical application is the authentication of certificates in the financial field.
General video classification methods, for example the patent application with application number CN201711420935.5 entitled "video classification model training method, apparatus, storage medium, and electronic device" and the patent application with application number CN201711172591.0 entitled "video classification method, video classification apparatus, and electronic device", all require a deep neural network to extract features, and therefore require a large amount of training data, or at least a pre-trained model that can complete a similar task.
However, for certificate authenticity identification, large standard data sets are usually unavailable, because certificates involve user privacy and data acquisition and labeling are difficult, and the anti-counterfeiting features of different certificates also vary widely. As a result, few videos are available for training an algorithm, a suitable pre-trained model for feature extraction is hard to find, and it is therefore not appropriate to extract features directly with a neural network. In addition, because the quality of the video collected by the user usually does not reach a very high standard, and the accuracy of any single certificate identification feature is often low, authenticity identification needs to exploit multiple features simultaneously. If the methods used to extract the various anti-counterfeiting features differ greatly, the system becomes too complex and is inconvenient to maintain and extend.
Disclosure of Invention
In order to solve at least one of the problems mentioned in the background art, the invention provides a certificate video feature extraction method, a certificate video feature extraction device, a computer device, and a storage medium. Certificate video features are extracted based on template matching to identify the authenticity of the certificate; the approach has good universality, accommodates multiple features including dynamic features, and is easy to extend.
The embodiment of the invention provides the following specific technical scheme:
in a first aspect, the present invention provides a method for extracting a video feature of a certificate, where the method includes:
acquiring an image sequence from a video containing a document, the image sequence comprising at least one frame of document image;
extracting a plurality of characteristic subgraphs of each frame of the certificate image according to a plurality of anti-counterfeiting characteristics of the certificate;
respectively carrying out corresponding matching calculation on a plurality of characteristic subgraphs of each frame of the certificate image and a plurality of preset image templates to obtain a matching score vector of each frame of the certificate image corresponding to the plurality of image templates;
and forming a matching score matrix according to the matching score vector corresponding to each frame of the certificate image, and mapping the matching score matrix into a video feature vector.
In a preferred embodiment, the acquiring of the sequence of images from the video containing the document comprises:
extracting a plurality of frames of certificate images from the video containing the certificates;
correcting the extracted multiple frames of certificate images;
and selecting at least one frame of the certificate image which is successfully corrected to form the image sequence.
In a preferred embodiment, before the step of extracting a plurality of feature subgraphs of each frame of the certificate image according to a plurality of anti-counterfeiting features of the certificate, the method further comprises the following steps:
and preprocessing each frame of the certificate image, wherein the preprocessing at least comprises one of white balance, noise reduction and sharpening.
In a preferred embodiment, the plurality of anti-counterfeiting features of the certificate comprise at least two of the color-changing ink, chip block, dynamic printing letters, and dynamic printing portrait of the certificate.
In a preferred embodiment, the performing corresponding matching calculation on the plurality of feature sub-images of each frame of the certificate image and a plurality of preset image templates to obtain matching score vectors of each frame of the certificate image corresponding to the plurality of image templates includes:
for each characteristic subgraph of each frame of the certificate image, the following operations are carried out:
if the characteristic subgraph is not the dynamic printing portrait, performing correlation matching between the characteristic subgraph and the image template corresponding to the characteristic subgraph, and calculating a matching score corresponding to the characteristic subgraph;
if the characteristic sub-image is a dynamic printing portrait, respectively extracting human face characteristics from the dynamic printing portrait and a corresponding portrait template, and calculating a matching score corresponding to the dynamic printing portrait according to the extracted human face characteristics;
and generating a matching score vector of each frame of the certificate image corresponding to the plurality of image templates according to the matching score corresponding to each characteristic subgraph of each frame of the certificate image.
In a preferred embodiment, said mapping said match score matrix to a video feature vector comprises:
and calculating each column of the matching score matrix according to a preset mapping function, and mapping the calculation result of each column of the matching score matrix to the video feature vector.
In a preferred embodiment, the computing comprises one of a linear operation and a non-linear operation;
the linear operation includes a sum and difference operation between one or more of a mean calculation, a median, a maximum, and a minimum.
In a preferred embodiment, the method further comprises:
inputting the video feature vectors into a preset regression model to obtain a video classification result indicating the authenticity of the certificate.
In a preferred embodiment, the preset regression model is a linear regression model, a logistic regression model or a neural network model;
the feature weight parameters of the preset regression model are obtained by training in a machine learning method in advance or are preset.
In a second aspect, a device for extracting video features of a certificate is provided, the device comprising:
the acquisition module is used for acquiring an image sequence from a video containing the certificate, wherein the image sequence comprises at least one frame of certificate image;
the extraction module is used for extracting a plurality of characteristic subgraphs of each frame of the certificate image according to a plurality of anti-counterfeiting characteristics of the certificate;
the matching module is used for correspondingly matching and calculating a plurality of characteristic subgraphs of each frame of the certificate image with a plurality of preset image templates respectively to obtain a matching score vector of each frame of the certificate image corresponding to the plurality of image templates;
and the mapping module is used for forming a matching score matrix according to the matching score vector corresponding to each frame of the certificate image and mapping the matching score matrix into a video characteristic vector.
In a third aspect, a computer device is provided, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a method as in any one of the first aspects.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
The embodiment of the invention provides a certificate video feature extraction method, a certificate video feature extraction device, computer equipment and a storage medium. An image sequence is acquired from a video containing a certificate, the image sequence comprising at least one frame of certificate image; a plurality of characteristic subgraphs of each frame of certificate image are extracted according to a plurality of anti-counterfeiting characteristics of the certificate; corresponding matching calculation is performed between the characteristic subgraphs of each frame of certificate image and a plurality of preset image templates to obtain a matching score vector of each frame of certificate image with respect to the image templates; and a matching score matrix is formed from the matching score vectors of the frames and mapped into a video feature vector. In this way, regional subgraphs are extracted from the certificate video frames according to the different anti-counterfeiting features of the certificate, matched against image templates of a genuine certificate, and the matching values are used as features, so that feature extraction can be completed with only a small amount of genuine certificate data for making the templates and without a deep neural network. In addition, because the characteristic subgraphs are matched against templates of different states, the extracted certificate video features support authenticity identification with good universality, accommodate multiple features including dynamic features, are easy to extend, and can adapt to the anti-counterfeiting requirements of certificates in various scenarios.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram illustrating an application environment of a certificate video feature extraction method provided by an embodiment of the invention;
FIG. 2 is a flow chart of a method for extracting the video features of the certificate according to an embodiment of the invention;
FIG. 3 illustrates a schematic diagram of a feature subgraph of a single-frame document image provided by an embodiment of the invention;
FIG. 4 is a schematic flow chart of image matching for a single-frame document according to an embodiment of the present invention;
FIG. 5 shows a schematic diagram of a certificate authenticity identification process of an embodiment of the invention;
FIG. 6 shows a schematic diagram of region blocks extracted in a document image white balance process of an embodiment of the invention;
fig. 7 shows a block diagram of a certificate video feature extraction device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It is to be understood that, unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is, what is meant is "including, but not limited to".
Furthermore, in the description of the present invention, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Fig. 1 is a schematic diagram of an application environment of a certificate video feature extraction method according to an embodiment of the present invention. As shown in fig. 1, a first terminal 102 and a second terminal 106 each communicate with a server 104 through a network. The first terminal 102 is configured to upload a video of a certificate to the server 104; the server 104 is configured to receive the video uploaded by the first terminal 102 and to extract certificate video features based on template matching so as to identify the authenticity of the certificate; and the second terminal 106 is configured to receive the certificate authenticity identification result from the server 104. The first terminal 102 may be an electronic device with a built-in or external video capture module, such as, but not limited to, a personal computer, notebook computer, smartphone, or tablet computer; the second terminal 106 may likewise be, but is not limited to, a personal computer, notebook computer, smartphone, or tablet computer; and the server 104 may be implemented by an independent server or by a server cluster formed of a plurality of servers.
It should be noted that the certificate video feature extraction method provided by the invention can be applied to extracting features of an identity card in a financial loan scene, a financial insurance scene or other scenes to realize authenticity identification of the identity card, and can also be applied to extracting features of certificates other than the identity card in a specific scene to be used for authenticity identification, such as credit cards and the like.
The certificate video feature extraction method provided by the embodiment of the invention is described below using the 2003 version of the Hong Kong (China) identity card as an example.
In one embodiment, as shown in fig. 2, a method for extracting video features of a certificate is provided, which is described by taking the method as an example applied to the server in fig. 1, and includes the following steps:
step 201, an image sequence is obtained from a video containing a certificate, and the image sequence comprises at least one frame of certificate image.
The video containing the certificate can be uploaded to the server by a terminal (a first terminal in fig. 1), and the terminal acquires the video containing the certificate through video acquisition of the certificate.
Specifically, the server samples the certificate video to extract at least one frame of certificate image to form an image sequence.
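As an illustration only, this sampling step can be sketched in Python as below; the use of OpenCV and the rate of 3 frames per second are assumptions of this example, not requirements of the method.

    import cv2

    def sample_frames(video_path, frames_per_second=3):
        """Return a list of frames sampled roughly uniformly from the certificate video."""
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the container reports no FPS
        step = max(int(round(fps / frames_per_second)), 1)
        frames, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % step == 0:
                frames.append(frame)
            index += 1
        cap.release()
        return frames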
Step 202, extracting a plurality of characteristic subgraphs of each frame of certificate image according to a plurality of anti-counterfeiting characteristics of the certificate.
The security feature of a document is defined herein as a feature that is capable of providing a security function to the document.
The plurality of anti-counterfeiting features of the certificate comprise at least two of the color-changing ink, chip block, dynamic printing letters, and dynamic printing portrait of the certificate. In this embodiment, the combination of all four features, namely the color-changing ink, the chip block, the dynamic printing letters, and the dynamic printing portrait, is preferred.
Here, the color-changing ink means that the ink on the certificate changes color when viewed at different angles; the chip block is the anti-counterfeiting chip in the certificate, which is a static feature and usually shows no obvious change; the dynamic printing letters are anti-counterfeiting characters printed on the certificate that appear as one letter at certain viewing angles and as another letter at other angles; and the dynamic printing portrait appears as a clear portrait at some angles and as an unclear portrait at others.
Specifically, for each frame of certificate image in the image sequence, the server locates the respective regions of the plurality of anti-counterfeiting features of the certificate in the certificate image and crops them out, thereby obtaining the plurality of feature subgraphs of each frame of certificate image.
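A minimal sketch of this cropping step is given below, assuming the frame has already been rectified and that fixed region coordinates are known for the certificate layout; the coordinates shown are hypothetical placeholders.

    import numpy as np

    # Hypothetical (x, y, width, height) boxes for each anti-counterfeiting feature
    # on a rectified certificate image; real values depend on the certificate layout.
    REGIONS = {
        "color_changing_ink": (520, 40, 120, 60),
        "chip_block":         (60, 200, 140, 110),
        "dynamic_letters":    (420, 260, 90, 70),
        "dynamic_portrait":   (470, 120, 160, 200),
    }

    def extract_subgraphs(rectified_image: np.ndarray) -> dict:
        """Crop one feature subgraph per anti-counterfeiting feature from a rectified frame."""
        subs = {}
        for name, (x, y, w, h) in REGIONS.items():
            subs[name] = rectified_image[y:y + h, x:x + w]
        return subs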
In addition, the process of extracting a plurality of characteristic subgraphs of the certificate image can be realized by the following operations:
the server interacts with a preset terminal (a second terminal in fig. 1)), the image sequence is sent to the preset terminal, the preset terminal extracts a plurality of characteristic subgraphs of each frame of certificate image based on a manual extraction mode, and the extraction result of the characteristic subgraphs is returned to the server.
For example, the plurality of feature subgraphs extracted from a single-frame certificate image according to the plurality of anti-counterfeiting features can be as shown in fig. 3, which is a schematic diagram of the feature subgraphs of a single-frame certificate image according to an embodiment of the present invention. The regions a, b, c, and d in the solid-line boxes in fig. 3 are the color-changing ink subgraph, the chip block subgraph, the dynamic printing letter subgraph, and the dynamic printing portrait, respectively.
In the embodiment of the invention, since the plurality of anti-counterfeiting features of the certificate comprise at least two of the color-changing ink, chip block, dynamic printing letters, and dynamic printing portrait of the certificate, and these are all clearly visible features, the corresponding feature subgraphs can be extracted from the certificate image without requiring the certificate video to reach a very high quality standard, which significantly reduces the requirements on the data acquisition conditions.
And 203, correspondingly matching and calculating the plurality of characteristic subgraphs of each frame of certificate image with a plurality of preset image templates respectively to obtain matching score vectors of each frame of certificate image corresponding to the plurality of image templates.
Specifically, the correspondence between the plurality of characteristic subgraphs and the plurality of preset image templates is determined. On the basis of this correspondence, the characteristic subgraphs of each frame of certificate image are matched against their corresponding image templates to obtain the matching scores corresponding to the characteristic subgraphs of each frame, where a matching score expresses the degree of match between a characteristic subgraph and its corresponding image template; and the matching score vector of each frame of certificate image with respect to the plurality of image templates is obtained from the matching scores of the characteristic subgraphs of that frame.
In this embodiment, the correspondence between feature subgraphs and image templates may be determined, and the matching then performed, based on a correlation matching method. If the extracted feature subgraphs include a color-changing ink subgraph, a chip block subgraph, a dynamic printing letter subgraph, and a dynamic printing portrait, then, according to this correspondence, the color-changing ink subgraph is matched against a first color ink template and a second color ink template to compute the corresponding matching scores, the chip block subgraph is matched against the chip block template to compute a matching score, the dynamic printing letter subgraph is matched against a first letter template and a second letter template to compute the corresponding matching scores, and the dynamic printing portrait is matched against the portrait template to compute a matching score, for a total of M = 6 matching scores.
The first color ink template, the second color ink template, the chip block template, the first letter template, and the second letter template are generated in advance from genuine certificates, and the generated templates apply to the information common to certificates of the same type; for the 2003 version Hong Kong (China) identity card, for example, they can comprise a yellowish color-changing ink template, a reddish color-changing ink template, a chip block template, a dynamic printing letter template showing the letter H, and a dynamic printing letter template showing the letter K. The portrait template, by contrast, is generated from the face image in the certificate to be identified, that is, from a face image extracted in real time and specific to each certificate; because the faces on different certificates differ, the generated portrait templates differ as well. A portrait template generated from the face image in the certificate is shown in fig. 3, where the region e in the dashed box is the portrait template.
Fig. 4 is a schematic diagram of the matching process for a single-frame certificate image provided by an embodiment of the present invention, where the certificate image is an image of a 2003 version Hong Kong (China) identity card. A plurality of feature subgraphs, such as the chip block subgraph, the dynamic printing portrait, and the dynamic printing letter subgraph, are extracted from the single-frame certificate image; the chip block subgraph is matched against the chip block template, the dynamic printing portrait is matched against the portrait template, and the dynamic printing letter subgraph is matched against both the dynamic printing letter template in which the letter H appears and the one in which the letter K appears. The matching scores corresponding to the feature subgraphs of the single-frame certificate image are thus obtained, and from them the matching score vector of the single-frame certificate image with respect to the plurality of image templates.
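The per-frame matching for the non-portrait features can be sketched as follows with normalized cross-correlation; the template names and the H/K letter pairing follow the example above, and the portrait score is assumed to be computed separately from face features (see the dlib sketch later in this description).

    import cv2
    import numpy as np

    def match_score(sub_image, template):
        """Best normalized cross-correlation score of the template over the feature subgraph."""
        gray_sub = cv2.cvtColor(sub_image, cv2.COLOR_BGR2GRAY)
        gray_tpl = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray_sub, gray_tpl, cv2.TM_CCOEFF_NORMED)
        return float(result.max())

    def frame_score_vector(subs, templates, portrait_score):
        """Assemble the M = 6 dimensional matching score vector of one frame;
        portrait_score comes from the separate face-feature comparison."""
        return np.array([
            match_score(subs["color_changing_ink"], templates["ink_color_1"]),
            match_score(subs["color_changing_ink"], templates["ink_color_2"]),
            match_score(subs["chip_block"],         templates["chip_block"]),
            match_score(subs["dynamic_letters"],    templates["letter_H"]),
            match_score(subs["dynamic_letters"],    templates["letter_K"]),
            portrait_score,
        ])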
And 204, forming a matching score matrix according to the matching score vector corresponding to each frame of certificate image, and mapping the matching score matrix into a video feature vector.
Specifically, for the P frames of certificate image, the matching score vectors, each of dimension M, are stacked to obtain the matching score matrix

$F = (f_{ij}) \in \mathbb{R}^{P \times M}$,

where the element $f_{ij}$ denotes the matching score of the j-th template dimension in the i-th frame. The matching score matrix F is then mapped to the video feature vector $x = \varphi(F)$ according to a mapping function

$\varphi: \mathbb{R}^{P \times M} \to \mathbb{R}^{V}$,

where V is the dimension of the video feature vector x.
In this embodiment, for the P frames of certificate image, if the matching score vector of each frame is formed from the matching scores against the first color ink template, the second color ink template, the chip block template, the first letter template, the second letter template, and the portrait template, the matching score matrix $F = (f_1, f_2, f_3, f_4, f_5, f_6)$ has 6 columns, where $f_1$ contains the P matching scores between the color-changing ink subgraphs of the P frames and the first color ink template, $f_2$ the P matching scores between the color-changing ink subgraphs and the second color ink template, $f_3$ the P matching scores between the chip block subgraphs and the chip block template, $f_4$ the P matching scores between the dynamic printing letter subgraphs and the first letter template, $f_5$ the P matching scores between the dynamic printing letter subgraphs and the second letter template, and $f_6$ the P matching scores between the dynamic printing portraits and the portrait template.
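The stacking and mapping can be sketched as below; using the column maximum as the mapping phi is only one of the mapping modes discussed later in this description.

    import numpy as np

    def build_score_matrix(per_frame_vectors):
        """Stack the P per-frame score vectors (each of dimension M) into F of shape P x M."""
        return np.vstack(per_frame_vectors)

    def phi(score_matrix):
        """Example mapping: the maximum matching score of each template dimension over the video."""
        return score_matrix.max(axis=0)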
Further, the method provided by the embodiment of the present invention may further include the steps of:
and inputting the video feature vectors into a preset regression model to obtain a video classification result indicating the authenticity of the certificate.
Specifically, the video feature vector is input into a preset regression model $y = g(x, w)$, where y is the video identification score, w the model parameters, and g(·) the regression model. Each video feature in the video feature vector is weighted by the feature weight parameters w to obtain the video identification score, and a video classification result indicating the authenticity of the certificate is generated from this score. The video identification score indicates the probability that the certificate is genuine: if the score exceeds a preset threshold, the certificate is judged to be a genuine certificate; otherwise it is judged to be a counterfeit certificate.
The preset regression model can be a linear regression model, a logistic regression model or a neural network model, and the logistic regression model is used as a preferred scheme in the invention.
The feature weight parameters of the preset regression model can be obtained by training in advance by using a machine learning method, and can also be directly set according to experience, such as directly performing weighted average on each feature.
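As an illustration of the scoring step $y = g(x, w)$, a logistic-regression style sketch is given below; the weights, bias, and 0.5 threshold are hypothetical values, not parameters fixed by the method.

    import numpy as np

    def classify(video_features, weights, bias=0.0, threshold=0.5):
        """Return (identification score, genuine-or-not) for one video feature vector."""
        score = 1.0 / (1.0 + np.exp(-(np.dot(weights, video_features) + bias)))
        return score, score > threshold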
As a preferred scheme, in the embodiment of the present invention, a machine learning method is used for training to obtain the feature weight parameters of the preset regression model, and the specific training process is as follows:
obtaining a plurality of labeled sample videos, wherein the plurality of sample videos comprise videos of genuine certificates and videos of counterfeit certificates;
for each sample video of the plurality of sample videos, performing the following operations:
acquiring an image sequence from a sample video, extracting a plurality of characteristic subgraphs of each frame of certificate image according to a plurality of anti-counterfeiting characteristics of the certificate, and performing corresponding matching calculation on the plurality of characteristic subgraphs of each frame of certificate image and a plurality of preset image templates respectively to obtain a matching score vector of each frame of certificate image corresponding to the plurality of image templates;
forming a matching score matrix according to the matching score vector corresponding to each frame of certificate image, and mapping the matching score matrix into a video feature vector;
and inputting the video feature vectors of the sample videos into a regression model for training to obtain feature weight parameters of the regression model.
Illustratively, N = 50 certificate videos are acquired, including genuine certificates and counterfeit certificates (such as a print-out or photocopy of a certificate); the label corresponding to a genuine certificate is 1 and the label corresponding to a counterfeit certificate is 0. Video feature vectors are extracted from the N certificate videos and used to train the feature weight parameters of the regression model.
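A sketch of this training step is shown below, assuming scikit-learn's logistic regression is used to fit the feature weight parameters from the N labeled video feature vectors; the choice of library is an assumption for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_regression(feature_vectors, labels):
        """feature_vectors: array of shape (N, V); labels: array of N values in {0, 1}."""
        X = np.asarray(feature_vectors, dtype=float)
        y = np.asarray(labels)
        model = LogisticRegression()
        model.fit(X, y)
        # learned feature weight parameters w and bias of the regression model
        return model.coef_.ravel(), float(model.intercept_[0])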
In this embodiment, since the regression model does not require the large number of training samples that deep learning does, a regression model for video classification can be obtained from a small number of training samples, and accurate identification of certificate authenticity can then be realized with the trained regression model.
Exemplarily, fig. 5 shows a schematic diagram of a certificate authenticity identification process provided by an embodiment of the present invention. For a video V containing a certificate, P frames of certificate image are first obtained from the video V. Then, according to the anti-counterfeiting features of the certificate, a plurality of feature subgraphs of each frame of certificate image are extracted and matched against the plurality of image templates, yielding a matching score vector for each frame and forming a matching score matrix F whose number of rows is P. A video feature vector X is then generated from the matching score matrix F and input into the regression model to obtain a video classification result Y indicating the authenticity of the certificate.
The certificate video feature extraction method provided by the embodiment of the invention obtains an image sequence from a video containing a certificate, the image sequence comprising at least one frame of certificate image; extracts a plurality of characteristic subgraphs of each frame of certificate image according to a plurality of anti-counterfeiting characteristics of the certificate; performs corresponding matching calculation between the characteristic subgraphs of each frame of certificate image and a plurality of preset image templates to obtain a matching score vector of each frame of certificate image with respect to the image templates; and forms a matching score matrix from the matching score vectors of the frames and maps it into a video feature vector. In this way, regional subgraphs are extracted from the certificate video frames according to the different anti-counterfeiting features of the certificate, matched against image templates of a genuine certificate, and the matching values are used as features, so that feature extraction can be completed with only a small amount of genuine certificate data for making the templates and without a deep neural network. In addition, because the characteristic subgraphs are matched against templates of different states, the extracted certificate video features support authenticity identification with good universality, accommodate multiple features including dynamic features, are easy to extend, and can adapt to the anti-counterfeiting requirements of certificates in various scenarios.
In one embodiment, the above-mentioned process of acquiring the image sequence from the video containing the certificate may include:
extracting multi-frame certificate images from the video containing the certificates, correcting the extracted multi-frame certificate images, and selecting at least one frame of certificate image which is successfully corrected to form an image sequence.
In particular, a uniform sampling method can be used to extract multiple frames of certificate image from the video containing the certificate; for example, 3 frames can be extracted from the video every second by uniform sampling. It will be understood that other methods in the prior art, such as key frame extraction, may also be used to extract the multiple frames of certificate image, which is not specifically limited in the embodiment of the present invention.
In addition, in consideration of the angle problem of the certificate image, for each extracted frame of certificate image, Scale-invariant feature transform (SIFT) can be used for realizing certificate detection and correction so as to improve the accuracy of certificate authenticity identification.
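The SIFT-based detection and correction can be sketched as follows, assuming a reference image of the certificate layout is available and that frames with too few keypoint matches are treated as failed corrections; the match count and ratio-test thresholds are assumptions of this example.

    import cv2
    import numpy as np

    def rectify_certificate(frame, reference, out_size=(640, 400)):
        """Warp the frame onto the reference layout; return None if correction fails."""
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), None)
        kp2, des2 = sift.detectAndCompute(cv2.cvtColor(reference, cv2.COLOR_BGR2GRAY), None)
        if des1 is None or des2 is None:
            return None
        matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # Lowe's ratio test
        if len(good) < 10:
            return None
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return None if H is None else cv2.warpPerspective(frame, H, out_size)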
In one embodiment, before the step of extracting a plurality of feature sub-images of each frame of document image according to a plurality of anti-counterfeiting features of the document, the method may further include:
and preprocessing each frame of certificate image, wherein the preprocessing at least comprises one of white balance, noise reduction and sharpening.
The preprocessing method may adopt white balance, and the process may include:
extracting region blocks from the four corners of the certificate image and calculating the average R, G, B color values $(\bar{R}, \bar{G}, \bar{B})$ of the extracted region blocks; and
normalizing the color $(r, g, b)$ of each pixel in the certificate image using the averages $(\bar{R}, \bar{G}, \bar{B})$, obtaining the normalized color $(r/\bar{R}, g/\bar{G}, b/\bar{B})$.
As shown in fig. 6, the regions marked by the solid-line boxes correspond to the four region blocks extracted from the four corners of the certificate image.
By carrying out white balance processing on the certificate image, the image colors under illumination of different color temperatures can be automatically adjusted, and the certificate image is optimized, so that a plurality of characteristic subgraphs can be conveniently extracted from the certificate image subsequently.
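A sketch of this white-balance step is given below; the corner block size and the final rescaling to the 0-255 range are assumptions of this example.

    import numpy as np

    def white_balance(image, block=20):
        """image: H x W x 3 array; divide each channel by the corner-block averages."""
        img = image.astype(np.float64)
        h, w = img.shape[:2]
        corners = np.concatenate([
            img[:block, :block].reshape(-1, 3),
            img[:block, w - block:].reshape(-1, 3),
            img[h - block:, :block].reshape(-1, 3),
            img[h - block:, w - block:].reshape(-1, 3),
        ])
        mean_rgb = corners.mean(axis=0)              # (R_bar, G_bar, B_bar)
        balanced = img / np.maximum(mean_rgb, 1e-6)  # normalized colors (r/R_bar, g/G_bar, b/B_bar)
        return np.clip(balanced * 255.0, 0, 255).astype(np.uint8)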
Image noise reduction can denoise the certificate image based on median filtering to reduce noise interference in the certificate image. In a specific implementation, a filtering window of a certain size is selected and slid over the image, and the gray value at the center of the window is replaced by the median of the values within the window, so that pixel values come closer to their true values and isolated noise points are eliminated.
The image sharpening can highlight the characteristic sub-images in the certificate image by adjusting the contrast of the certificate image so as to facilitate the extraction of the subsequent characteristic sub-images.
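The noise reduction and sharpening can be sketched together as below; the median kernel size and the unsharp-masking weights are assumed values.

    import cv2

    def denoise_and_sharpen(image, ksize=3):
        """Median-filter the image, then sharpen it by unsharp masking."""
        denoised = cv2.medianBlur(image, ksize)                  # removes isolated noise points
        blurred = cv2.GaussianBlur(denoised, (0, 0), 3)
        return cv2.addWeighted(denoised, 1.5, blurred, -0.5, 0)  # boost local contrast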
In an embodiment, the above-mentioned performing corresponding matching calculation on the multiple feature sub-images of each frame of the document image and the preset multiple image templates to obtain a matching score vector corresponding to each frame of the document image may include:
for each characteristic subgraph of each frame of certificate image: if the characteristic subgraph is not the dynamic printing portrait, performing correlation matching between the characteristic subgraph and the image template corresponding to the characteristic subgraph, and calculating the matching score corresponding to the characteristic subgraph; if the characteristic subgraph is the dynamic printing portrait, extracting face features from the dynamic printing portrait and from the corresponding portrait template respectively, and calculating the matching score corresponding to the dynamic printing portrait from the extracted face features; and generating the matching score vector of each frame of certificate image with respect to the plurality of image templates from the matching scores corresponding to the characteristic subgraphs of that frame.
Specifically, for each characteristic subgraph of each frame of certificate image, if the characteristic subgraph is not the dynamic printing portrait, matching is performed with a spatially sliding two-dimensional template: the image template corresponding to the characteristic subgraph is slid over the characteristic subgraph in a translational search, the pixel position with the minimum error is found, and the matching score corresponding to the characteristic subgraph is calculated. If the characteristic subgraph is the dynamic printing portrait, dlib is used to extract face features from the dynamic printing portrait and from the corresponding portrait template respectively, and the matching score corresponding to the dynamic printing portrait is calculated from the extracted face features.
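The portrait branch can be sketched with dlib as below, assuming the standard pretrained dlib landmark and face-recognition models are available on disk; mapping the descriptor distance to a score in [0, 1] is an assumption of this example.

    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
    face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

    def face_descriptor(rgb_image):
        """Return the 128-d dlib descriptor of the first detected face, or None."""
        faces = detector(rgb_image, 1)
        if not faces:
            return None
        shape = shape_predictor(rgb_image, faces[0])
        return np.array(face_encoder.compute_face_descriptor(rgb_image, shape))

    def portrait_match_score(portrait_subgraph, portrait_template):
        """Similarity-style score between the dynamic printing portrait and the portrait template."""
        d1, d2 = face_descriptor(portrait_subgraph), face_descriptor(portrait_template)
        if d1 is None or d2 is None:
            return 0.0
        return float(max(0.0, 1.0 - np.linalg.norm(d1 - d2)))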
In one embodiment, mapping the matching score matrix to the video feature vector as described above may include:
and calculating each column of the matching score matrix according to a preset mapping function, and mapping the calculation result of each column of the matching score matrix to the video feature vector.
Wherein the calculation of each column of the matching score matrix comprises one of a linear operation and a non-linear operation, and the linear operation comprises a sum-difference operation between one or more of a mean calculation, a median, a maximum and a minimum.
Wherein, the calculation result of each column of the matching score matrix is mapped to the video feature vector, and a plurality of mapping modes can be provided, including:
the first method is as follows: and correspondingly mapping the calculation result of each column in the matching score matrix into each eigenvalue in the video eigenvector, wherein the dimension of the video eigenvector is equal to the number of columns in the matching score matrix. For example, the mean value calculation is directly performed on each column in the matching score matrix, and the mean value of each column is used as each feature value of the video feature vector; for another example, each column in the matching score matrix is subjected to nonlinear operation, and after each column is sorted in the order of the matching scores from large to small, a value arranged at the nth bit is selected from each column to be used as each eigenvalue of the video eigenvector, wherein N can be equal to 3, so that noise can be prevented from affecting the calculation of the eigenvalue.
The second method: different calculation results of the columns of the matching score matrix are mapped to the feature values of the video feature vector, so that the dimension of the video feature vector is greater than the number of columns of the matching score matrix. For example, the video feature vector is $x = (x_1, x_2, \ldots, x_V)$ with $V = 7$ dimensions, where the first 6 dimensions satisfy $x_k = \max(f_k)$ for $k \le 6$, i.e. the maximum of each column of the matching score matrix, representing the maximum matching score of the corresponding image template in the video, and the 7th dimension is $x_7 = \min(f_6)$. The latter feature is selected because the dynamic printing portrait is assumed to appear unclear at certain angles, so this value reflects how the dynamic printing portrait changes across viewing angles.
In addition, the mapping method may also be that a median is taken for each two columns in the matching score matrix, and the sum of the medians of the two columns is used as a feature value of the video feature vector, at this time, the dimension of the video feature vector may be smaller than the number of columns of the matching score matrix.
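The second mapping method above (V = 7: the six column maxima plus the minimum of the portrait column) can be sketched as follows.

    import numpy as np

    def map_to_feature_vector(F):
        """F: P x 6 matching score matrix; return the 7-dimensional video feature vector."""
        column_max = F.max(axis=0)     # x_1 ... x_6 = max of each column
        portrait_min = F[:, 5].min()   # x_7 = min(f_6), the dynamic printing portrait column
        return np.concatenate([column_max, [portrait_min]])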
In the embodiment of the invention, the matching score matrix is mapped into the video feature vector, so that the matching scores against the different image templates over the video are used as features. Feature extraction can therefore be completed with only a small amount of genuine certificate data for making the templates and without a deep neural network, which gives good universality, accommodates multiple features including dynamic features, and provides good extensibility.
In one embodiment, as shown in fig. 7, there is provided a document video feature extraction apparatus, the apparatus comprising:
an acquisition module 71 for acquiring a sequence of images from a video containing a document, the sequence of images comprising at least one frame of a document image;
the extraction module 73 is used for extracting a plurality of characteristic subgraphs of each frame of certificate image according to a plurality of anti-counterfeiting characteristics of the certificate;
the matching module 74 is configured to perform corresponding matching calculation on the multiple feature sub-images of each frame of the certificate image and multiple preset image templates, and obtain matching score vectors of each frame of the certificate image corresponding to the multiple image templates;
and the mapping module 75 is configured to form a matching score matrix according to the matching score vector corresponding to each frame of certificate image, and map the matching score matrix into a video feature vector.
In a preferred embodiment, the obtaining module 71 is specifically configured to:
extracting multi-frame certificate images from the video containing the certificates;
correcting each extracted frame of certificate image;
and selecting at least one frame of certificate image which is successfully corrected to form an image sequence.
In a preferred embodiment, the apparatus further comprises:
and a preprocessing module 72, configured to preprocess each frame of the document image, where the preprocessing includes at least one of white balancing, noise reduction, and sharpening.
In a preferred embodiment, the plurality of anti-counterfeiting features of the certificate comprise at least two of the color-changing ink, chip block, dynamic printing letters, and dynamic printing portrait of the certificate.
In a preferred embodiment, the matching module 74 is specifically configured to:
for each feature sub-image of each frame of the credential image,
if the characteristic subgraph is not the dynamic printing portrait, performing correlation matching between the characteristic subgraph and the image template corresponding to the characteristic subgraph, and calculating a matching score corresponding to the characteristic subgraph;
if the characteristic sub-image is a dynamic printing portrait, respectively extracting human face characteristics from the dynamic printing portrait and a corresponding portrait template, and calculating a matching score corresponding to the dynamic printing portrait according to the extracted human face characteristics;
and generating a matching score vector of each frame of certificate image corresponding to the plurality of image templates according to the matching score corresponding to each characteristic subgraph of each frame of certificate image.
In a preferred embodiment, the mapping module 75 is specifically configured to:
and calculating each column of the matching score matrix according to a preset mapping function, and mapping the calculation result of each column of the matching score matrix to the video feature vector.
In a preferred embodiment, the calculation includes one of a linear operation and a non-linear operation;
the linear operations include a sum and difference operation between one or more of a mean calculation, a median, a maximum, and a minimum.
In a preferred embodiment, the apparatus further comprises:
and the identification module 76 is used for inputting the video feature vectors into a preset regression model to obtain a video classification result indicating the authenticity of the certificate.
In a preferred embodiment, the preset regression model is a linear regression model, a logistic regression model or a neural network model;
the feature weight parameters of the preset regression model are obtained by training in a machine learning method in advance or are preset.
The certificate video feature extraction device provided by the embodiment of the invention belongs to the same inventive concept as the certificate video feature extraction method provided by the embodiments of the invention, can execute the certificate video feature extraction method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects for executing that method. For technical details not described in detail in this embodiment, reference may be made to the certificate video feature extraction method provided by the embodiments of the invention, which are not repeated here.
In one embodiment, there is provided a computer device comprising:
one or more processors;
a memory;
a program stored in the memory, which when executed by the one or more processors, causes the processors to perform the steps of the credential video feature extraction method of the above-described embodiments.
In one embodiment, a computer readable storage medium is provided, which stores a program that, when executed by a processor, causes the processor to perform the steps of the credential video feature extraction method of the above-described embodiments.
As will be appreciated by one of skill in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A method for extracting video features of a certificate is characterized by comprising the following steps:
acquiring an image sequence from a video containing a document, the image sequence comprising at least one frame of document image;
extracting a plurality of characteristic subgraphs of each frame of the certificate image according to a plurality of anti-counterfeiting characteristics of the certificate;
respectively carrying out corresponding matching calculation on a plurality of characteristic subgraphs of each frame of the certificate image and a plurality of preset image templates to obtain a matching score vector of each frame of the certificate image corresponding to the plurality of image templates; the method specifically comprises the following steps: for each characteristic subgraph of each frame of the certificate image, the following operations are carried out: if the characteristic subgraph is not the dynamic printing portrait, performing relevant matching on the characteristic subgraph and an image template corresponding to the characteristic subgraph, and calculating a matching score corresponding to the characteristic subgraph; if the characteristic sub-image is a dynamic printing portrait, respectively extracting human face characteristics from the dynamic printing portrait and a corresponding portrait template, and calculating a matching score corresponding to the dynamic printing portrait according to the extracted human face characteristics; generating a matching score vector of each frame of the certificate image corresponding to the plurality of image templates according to the matching score corresponding to each characteristic subgraph of each frame of the certificate image;
and forming a matching score matrix according to the matching score vector corresponding to each frame of the certificate image, and mapping the matching score matrix into a video feature vector.
2. The method of claim 1, wherein the capturing of the sequence of images from the video containing the document comprises:
extracting a plurality of frames of certificate images from the video containing the certificates;
correcting the extracted certificate image of each frame;
and selecting at least one frame of the certificate image which is successfully corrected to form the image sequence.
3. The method of claim 1, wherein, before extracting the plurality of feature sub-images from each frame of the certificate image according to the plurality of anti-counterfeiting features of the certificate, the method further comprises:
preprocessing each frame of the certificate image, wherein the preprocessing comprises at least one of white balancing, noise reduction, and sharpening.
4. The method of claim 1, wherein the plurality of anti-counterfeiting features of the certificate comprise at least two of color-changing ink, a chip region, dynamically printed characters, and a dynamically printed portrait of the certificate.
5. The method of any one of claims 1 to 4, wherein mapping the matching score matrix to a video feature vector comprises:
performing a calculation on each column of the matching score matrix according to a preset mapping function, and mapping the calculation result of each column of the matching score matrix to the video feature vector.
6. The method of claim 5, wherein the calculation comprises one of a linear operation and a non-linear operation; and
the linear operation comprises sum and difference operations among one or more of a mean, a median, a maximum, and a minimum.
7. The method of claim 1, further comprising:
inputting the video feature vector into a preset regression model to obtain a video classification result indicating the authenticity of the certificate.
8. A certificate video feature extraction apparatus, characterized by comprising:
an acquisition module configured to acquire an image sequence from a video containing a certificate, the image sequence comprising at least one frame of certificate image;
an extraction module configured to extract a plurality of feature sub-images from each frame of the certificate image according to a plurality of anti-counterfeiting features of the certificate;
a matching module configured to perform matching calculation between the plurality of feature sub-images of each frame of the certificate image and a plurality of preset image templates, respectively, to obtain a matching score vector of each frame of the certificate image with respect to the plurality of image templates, specifically by: performing the following operations for each feature sub-image of each frame of the certificate image: if the feature sub-image is not a dynamically printed portrait, performing correlation matching between the feature sub-image and the image template corresponding to the feature sub-image, and calculating a matching score corresponding to the feature sub-image; if the feature sub-image is a dynamically printed portrait, extracting facial features from the dynamically printed portrait and the corresponding portrait template, respectively, and calculating a matching score corresponding to the dynamically printed portrait according to the extracted facial features; and generating the matching score vector of each frame of the certificate image with respect to the plurality of image templates according to the matching scores corresponding to the feature sub-images of each frame of the certificate image; and
a mapping module configured to form a matching score matrix from the matching score vectors corresponding to the frames of the certificate image and to map the matching score matrix to a video feature vector.
9. A computer device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 7.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 7.
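
The following is a minimal, non-limiting Python sketch of the per-frame matching described in claims 1 to 4, assuming OpenCV and NumPy are available. The frame-sampling step, the feature-region coordinates, the choice of normalized cross-correlation as the matching measure, and the stand-in face-embedding helper are illustrative assumptions and are not prescribed by the claims.

import cv2
import numpy as np

# Assumed layout of anti-counterfeiting features on a rectified certificate:
# feature name -> ((x, y, w, h) crop rectangle, is_printed_portrait).
FEATURE_REGIONS = {
    "color_changing_ink": ((40, 30, 120, 60), False),
    "chip_region":        ((300, 200, 90, 90), False),
    "printed_characters": ((40, 300, 200, 40), False),
    "printed_portrait":   ((360, 40, 140, 180), True),
}

def sample_frames(video_path, step=5):
    """Read every `step`-th frame of the certificate video."""
    cap = cv2.VideoCapture(video_path)
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def correlation_score(sub_img, template):
    """Normalized cross-correlation between a feature sub-image and its template."""
    sub = cv2.cvtColor(sub_img, cv2.COLOR_BGR2GRAY)
    tpl = cv2.cvtColor(template, cv2.COLOR_BGR2GRAY)
    tpl = cv2.resize(tpl, (sub.shape[1], sub.shape[0]))
    return float(cv2.matchTemplate(sub, tpl, cv2.TM_CCOEFF_NORMED).max())

def extract_face_embedding(img):
    """Stand-in for a real face feature extractor (e.g. a CNN embedding);
    here simply a downsampled grayscale vector for illustration."""
    g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.resize(g, (16, 16)).astype(np.float32).ravel()

def portrait_score(sub_img, portrait_template):
    """Cosine similarity between face features of the printed portrait and its template."""
    a = extract_face_embedding(sub_img)
    b = extract_face_embedding(portrait_template)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def frame_score_vector(rectified_frame, templates):
    """Matching score vector of one rectified certificate frame.
    `templates` maps each feature name to its preset template image."""
    scores = []
    for name, (rect, is_portrait) in FEATURE_REGIONS.items():
        x, y, w, h = rect
        sub = rectified_frame[y:y + h, x:x + w]
        if is_portrait:
            scores.append(portrait_score(sub, templates[name]))
        else:
            scores.append(correlation_score(sub, templates[name]))
    return np.array(scores)

def score_matrix(frames, templates):
    """Stack per-frame score vectors into the matching score matrix."""
    return np.stack([frame_score_vector(f, templates) for f in frames])

In this sketch the resulting score matrix has one row per sampled frame and one column per anti-counterfeiting feature; it is the matrix that the mapping step of claim 1 converts into a video feature vector.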
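
Continuing from the score matrix above, a second sketch illustrates the column-wise mapping of claims 5 and 6 and the regression-based classification of claim 7. The particular column statistics (mean, median, maximum, minimum) and the use of scikit-learn logistic regression as the preset regression model are assumptions for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression

def video_feature_vector(scores):
    """Column-wise mapping of the (frames x features) matching score matrix:
    concatenate the mean, median, maximum and minimum of every feature column."""
    stats = [np.mean, np.median, np.max, np.min]
    return np.concatenate([f(scores, axis=0) for f in stats])

def train_classifier(score_matrices, labels):
    """Fit the 'preset' regression model on labelled videos (1 = genuine, 0 = fake)."""
    X = np.stack([video_feature_vector(m) for m in score_matrices])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf

def classify(clf, scores):
    """Probability that the certificate video shows a genuine certificate."""
    x = video_feature_vector(scores).reshape(1, -1)
    return float(clf.predict_proba(x)[0, 1])

Because the statistics are computed per column, the length of the video feature vector depends only on the number of anti-counterfeiting features, not on how many frames were sampled from the video.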
CN201910613629.6A 2019-07-09 2019-07-09 Certificate video feature extraction method and device, computer equipment and storage medium Active CN110427972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910613629.6A CN110427972B (en) 2019-07-09 2019-07-09 Certificate video feature extraction method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110427972A CN110427972A (en) 2019-11-08
CN110427972B true CN110427972B (en) 2022-02-22

Family

ID=68409120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910613629.6A Active CN110427972B (en) 2019-07-09 2019-07-09 Certificate video feature extraction method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110427972B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111310634B (en) * 2020-02-10 2024-03-15 支付宝实验室(新加坡)有限公司 Certificate type recognition template generation method, certificate recognition method and device
CN114419712A (en) * 2020-05-14 2022-04-29 支付宝(杭州)信息技术有限公司 Feature extraction method for protecting personal data privacy, model training method and hardware
CN112200136A (en) * 2020-10-29 2021-01-08 腾讯科技(深圳)有限公司 Certificate authenticity identification method and device, computer readable medium and electronic equipment
CN112686844B (en) * 2020-12-10 2022-08-30 深圳广电银通金融电子科技有限公司 Threshold setting method, storage medium and system based on video quality inspection scene
CN113111770B (en) * 2021-04-12 2022-09-13 杭州赛鲁班网络科技有限公司 Video processing method, device, terminal and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1510630A (en) * 2002-12-25 2004-07-07 鲍东山 Composite signal processing anti-fogery mark testing systems
CN103035061A (en) * 2012-09-29 2013-04-10 广州广电运通金融电子股份有限公司 Anti-counterfeit characteristic generation method of valuable file and identification method and device thereof
CN103426016A (en) * 2013-08-14 2013-12-04 湖北微模式科技发展有限公司 Method and device for authenticating second-generation identity card
CN105046807A (en) * 2015-07-09 2015-11-11 中山大学 Smart mobile phone-based counterfeit banknote identification method and system
US10096123B2 (en) * 2016-04-19 2018-10-09 Cisco Technology, Inc. Method and device for establishing correspondence between objects in a multi-image source environment
CN108320373A (en) * 2017-01-17 2018-07-24 深圳怡化电脑股份有限公司 A kind of method and device of the detection of guiding against false of paper currency mark
CN109409349A (en) * 2018-02-02 2019-03-01 深圳壹账通智能科技有限公司 Credit certificate discrimination method, device, terminal and computer readable storage medium
CN109389153A (en) * 2018-08-31 2019-02-26 众安信息技术服务有限公司 A kind of holographic false proof code check method and device
CN109815491A (en) * 2019-01-08 2019-05-28 平安科技(深圳)有限公司 Answer methods of marking, device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zheng Dong et al., "Detection of Rogue Certificates from Trusted Certificate Authorities Using Deep Neural Networks," ACM Transactions on Privacy and Security, 2016-09-30, pp. 1-31 *
Rao Lingyun, "Research and Application of Certificate Image Recognition Technology" (证件图像识别技术研究与应用), China Master's Theses Full-text Database, Information Science and Technology, 2015-03-15, pp. I138-1936 *

Also Published As

Publication number Publication date
CN110427972A (en) 2019-11-08

Similar Documents

Publication Publication Date Title
CN110427972B (en) Certificate video feature extraction method and device, computer equipment and storage medium
Elmahmudi et al. Deep face recognition using imperfect facial data
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
Choi et al. Color local texture features for color face recognition
Fourati et al. Anti-spoofing in face recognition-based biometric authentication using image quality assessment
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
US11354917B2 (en) Detection of fraudulently generated and photocopied credential documents
Bristow et al. Why do linear SVMs trained on HOG features perform so well?
WO2016131083A1 (en) Identity verification. method and system for online users
CN110210503B (en) Seal identification method, device and equipment
US11367310B2 (en) Method and apparatus for identity verification, electronic device, computer program, and storage medium
Armas Vega et al. Copy-move forgery detection technique based on discrete cosine transform blocks features
EP4109332A1 (en) Certificate authenticity identification method and apparatus, computer-readable medium, and electronic device
EP4085369A1 (en) Forgery detection of face image
US10134149B2 (en) Image processing
CN111898520A (en) Certificate authenticity identification method and device, computer readable medium and electronic equipment
CN109376717A (en) Personal identification method, device, electronic equipment and the storage medium of face comparison
Jwaid et al. Study and analysis of copy-move & splicing image forgery detection techniques
Kumar et al. Syn2real: Forgery classification via unsupervised domain adaptation
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM
Hannan et al. Analysis of Detection and Recognition of Human Face Using Support Vector Machine
Rusia et al. A Color-Texture-Based Deep Neural Network Technique to Detect Face Spoofing Attacks
Turhal et al. A new face presentation attack detection method based on face-weighted multi-color multi-level texture features
CN110415424B (en) Anti-counterfeiting identification method and device, computer equipment and storage medium
EP4266264A1 (en) Unconstrained and elastic id document identification in an rgb image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240306
Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702
Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.
Country or region after: China
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.
Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240415
Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702
Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.
Country or region after: China
Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)
Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.
Country or region before: China