CN110415424B - Anti-counterfeiting identification method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN110415424B
CN110415424B (application CN201910521772.2A)
Authority
CN
China
Prior art keywords
image
video
preset
feature
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910521772.2A
Other languages
Chinese (zh)
Other versions
CN110415424A (en)
Inventor
韩天奇
钱浩然
彭宇翔
Current Assignee (the listed assignees may be inaccurate)
Shanghai Zhongan Information Technology Service Co ltd
Original Assignee
Zhongan Information Technology Service Co Ltd
Priority date (an assumption, not a legal conclusion)
Filing date
Publication date
Application filed by Zhongan Information Technology Service Co Ltd filed Critical Zhongan Information Technology Service Co Ltd
Priority to CN201910521772.2A
Publication of CN110415424A
Application granted
Publication of CN110415424B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07D: HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00: Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20: Testing patterns thereon
    • G07D7/2016: Testing patterns thereon using feature extraction, e.g. segmentation, edge detection or Hough-transformation
    • G: PHYSICS
    • G07: CHECKING-DEVICES
    • G07D: HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00: Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20: Testing patterns thereon
    • G07D7/202: Testing patterns thereon using pattern matching

Abstract

The invention discloses an anti-counterfeiting identification method and device, computer equipment and a storage medium, belonging to the technical field of anti-fraud. The method comprises the following steps: receiving a video of the anti-counterfeiting product uploaded by a terminal; extracting, from the video, multiple frames of images that each contain the anti-counterfeiting product; extracting feature values of preset features from the multi-frame images, and forming two feature value groups from those feature values; inputting the two feature value groups into two preset regression models to obtain a video identification score, which represents the probability that the anti-counterfeiting product is genuine, and a score confidence, which represents the reliability of the video identification score; and judging whether the video identification score and the score confidence satisfy a first preset threshold condition, and if so, determining that the anti-counterfeiting product is genuine. Compared with the prior art, the method identifies the authenticity of anti-counterfeiting products conveniently, accurately and reliably.

Description

Anti-counterfeiting identification method and device, computer equipment and storage medium
Technical Field
The invention relates to the technical field of anti-fraud, in particular to an anti-counterfeiting identification method, an anti-counterfeiting identification device, computer equipment and a storage medium.
Background
Anti-counterfeiting products are products carrying preset anti-counterfeiting features, such as bank notes, certificates, credit cards, identity cards, bank securities or anti-counterfeiting labels, and are widely needed in finance, daily life and other fields.
In the prior art, the authenticity of a certificate is usually judged with special equipment according to certain preset features of the anti-counterfeiting product. For example, the document "A system for fluorescent detection of the anti-counterfeit mark of RMB" uses a closed frame, a displacement table, a spectroradiometer and other devices to detect the fluorescent features of RMB; the document "Paper identification device and paper identification method" uses an infrared transmission image and an infrared reflection image to detect yen; and the patent application CN201510054056.X, "Method and device for detecting RMB color-changing ink based on the Lab color space", identifies damaged RMB using the LAB color space. However, these methods either require relatively complex equipment or, because the selected features are quite specific, can only handle one particular scene. In addition, as anti-counterfeiting technology is applied more widely, fraud techniques correspondingly diversify and become more refined, so anti-counterfeiting technology must be continuously and adaptively improved.
Disclosure of Invention
To solve at least one of the problems mentioned in the background above, the present invention provides an anti-counterfeit authentication method, apparatus, computer device and storage medium.
The embodiment of the invention provides the following specific technical scheme:
in a first aspect, the present invention provides a method of authenticating an anti-counterfeit, the method comprising:
receiving the video of the anti-counterfeiting product uploaded by the terminal;
extracting, from the video, multiple frames of images that each contain the anti-counterfeiting product;
extracting feature values of preset features from the multi-frame image, and forming two feature value groups based on the feature values of the preset features;
correspondingly inputting the two characteristic value groups into two preset regression models respectively to obtain a video identification score and a score confidence coefficient, wherein the video identification score is used for representing the probability that the anti-counterfeiting product is a genuine product, and the score confidence coefficient is used for representing the reliability of the video identification score;
and judging whether the video identification score and the score confidence coefficient meet a first preset threshold condition, and if so, determining that the anti-counterfeiting product is a genuine product.
In a preferred embodiment, the extracting, from the video, of multiple frames of images that each contain the anti-counterfeiting product comprises:
sampling the video to obtain an image sequence with a fixed frame number;
and detecting the anti-counterfeiting product in each frame of image of the image sequence, and extracting the image of which the anti-counterfeiting product is detected to obtain the multi-frame image.
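The sampling-then-detection step above can be sketched as follows. None of this code appears in the patent; `detector` is a placeholder for whichever detection method (deep learning, edge detection, etc.) is mentioned later in the description:

```python
import numpy as np

def sample_frames(video_frames, target_count):
    """Uniformly sample a fixed number of frames from the video."""
    idx = np.linspace(0, len(video_frames) - 1, target_count).astype(int)
    return [video_frames[i] for i in idx]

def keep_detected(frames, detector):
    """Keep only frames in which the anti-counterfeiting product is detected."""
    return [f for f in frames if detector(f)]
```

`detector` is any callable returning truthy when the article is found in a frame, so either a CNN-based or an edge-based detector can be dropped in unchanged.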
In a preferred embodiment, when the anti-counterfeiting product is a certificate, the preset features comprise first features and second features, the first features comprising at least one of color-changing ink, dynamic printing and a feature block, and the second features comprising at least one of image sharpness and image highlight.
In a preferred embodiment, the process of extracting the characteristic value of the color-changing ink includes:
respectively extracting first region subgraphs where color-changing ink is located from each frame image in the multi-frame images, and segmenting the color-changing ink and a background from each first region subgraph;
calculating the normalized color of the color-changing ink in each first region subgraph according to the color mean value of the color-changing ink in each first region subgraph and the color mean value of the background so as to obtain a normalized color matrix of the color-changing ink in the multi-frame image;
calculating an included angle matrix meeting preset conditions according to the normalized color matrix of the color-changing ink;
and acquiring a characteristic value of the color-changing ink according to the included angle matrix.
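A minimal numerical sketch of these four steps (not from the patent). The exact normalization formula and the "preset condition" on the angle matrix are not specified at this point in the text, so the ratio normalization and the use of the maximum pairwise angle below are assumptions:

```python
import numpy as np

def normalized_ink_colors(ink_means, bg_means):
    """Per-frame mean ink colour normalized by the background mean colour
    (ratio normalization is an assumption; the patent leaves the formula open)."""
    return np.asarray(ink_means, float) / np.asarray(bg_means, float)

def angle_matrix(colors):
    """Included angle (radians) between every pair of normalized colour vectors."""
    unit = colors / np.linalg.norm(colors, axis=1, keepdims=True)
    return np.arccos(np.clip(unit @ unit.T, -1.0, 1.0))

def ink_feature(ink_means, bg_means):
    """Use the largest pairwise angle as the feature value: genuine
    colour-changing ink should show a large colour swing across frames."""
    return float(angle_matrix(normalized_ink_colors(ink_means, bg_means)).max())
```

Normalizing by the background colour makes the feature robust to ambient lighting, which matters because the acquisition step deliberately imposes no lighting constraints.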
In a preferred embodiment, the process of extracting the feature values of the dynamic printing includes:
initializing the occurrence times of the first preset character image and the second preset character image to be zero values;
respectively extracting a second region subgraph where the dynamic printing is located from each frame image in the multiple frames of images;
matching each extracted second regional sub-image with the first preset character image and the second preset character image respectively, and calculating a first similarity and a second similarity corresponding to each second regional sub-image;
counting the occurrence times of the first preset character image and the second preset character image in the multi-frame image according to the first similarity and the second similarity corresponding to each second region subgraph;
and taking the counted occurrence times of the first preset character image and the second preset character image in the multi-frame image together as the characteristic value of the dynamic printing.
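The counting logic above can be sketched with normalized cross-correlation as the similarity measure; the measure and the 0.6 confidence threshold used here are illustrative assumptions, not fixed by the patent:

```python
import numpy as np

def similarity(patch, template):
    """Normalized cross-correlation between a region sub-image and a template."""
    a = np.asarray(patch, float) - np.mean(patch)
    b = np.asarray(template, float) - np.mean(template)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def dynamic_print_counts(patches, template_h, template_k, thresh=0.6):
    """Count, over all frames, how often each preset character image appears."""
    n_h = n_k = 0                      # occurrence counts, initialized to zero
    for p in patches:
        s_h, s_k = similarity(p, template_h), similarity(p, template_k)
        if max(s_h, s_k) >= thresh:    # only count confident matches
            if s_h >= s_k:
                n_h += 1
            else:
                n_k += 1
    return n_h, n_k
```

Both counts are returned together, matching the claim that the pair of occurrence counts jointly forms the dynamic-printing feature value.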
In a preferred embodiment, the extracting process of the feature value of the feature block includes:
respectively extracting a third region subgraph where the feature block is located from each frame image in the multi-frame images;
matching each extracted third region sub-image with a preset feature block image, and calculating the feature block similarity corresponding to each third region sub-image to obtain a feature block similarity vector corresponding to the multi-frame image;
and acquiring the characteristic value of the characteristic block according to the characteristic block similarity vector.
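A sketch of this step: compute a per-frame similarity vector against the reference chip-block image and reduce it to one value. Taking the maximum and using mean absolute pixel difference are assumptions; the patent only says the value is derived from the similarity vector:

```python
import numpy as np

def block_similarity(patch, reference):
    """Similarity in [0, 1] from mean absolute pixel difference,
    assuming intensities are scaled to [0, 1]."""
    diff = np.abs(np.asarray(patch, float) - np.asarray(reference, float))
    return 1.0 - float(diff.mean())

def block_feature(patches, reference):
    """Similarity vector over all frames, plus its maximum as the feature value."""
    sims = np.array([block_similarity(p, reference) for p in patches])
    return sims, float(sims.max())
```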
In a preferred embodiment, the extracting process of the feature value of the image definition includes:
carrying out graying processing on the multi-frame images respectively;
for each frame image in the multi-frame image after graying, respectively calculating gradient images of the image in the x direction and the y direction by using a Sobel operator, calculating the square sum of the gradients of each pixel in the image in the x direction and the y direction, and averaging to obtain the definition of the image;
obtaining a definition vector of the multi-frame image according to the definition of the image of each frame;
and acquiring a characteristic value of the image definition according to the definition vector.
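The Sobel-based sharpness measure maps directly to code. In practice `cv2.Sobel` would be used; a tiny "valid"-mode convolution keeps this sketch dependency-free:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Tiny 'valid'-mode 2-D correlation, enough for the 3x3 Sobel kernels."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def sharpness(gray):
    """Mean over pixels of gx**2 + gy**2, per the step described above."""
    gx = conv2d_valid(gray, SOBEL_X)
    gy = conv2d_valid(gray, SOBEL_Y)
    return float((gx ** 2 + gy ** 2).mean())
```

A blurred frame has weak gradients everywhere, so its mean squared gradient magnitude, and hence this sharpness value, is low.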
In a preferred embodiment, the process of extracting the feature value of the image highlight includes:
carrying out graying processing on the multi-frame images respectively;
aiming at each frame image in the multi-frame images after graying, calculating an intensity median of the image, and determining a pixel with a pixel intensity exceeding a high light intensity threshold as a high light pixel of the image, wherein the high light intensity threshold is the product of the intensity median of the image and a preset coefficient, and the preset coefficient is greater than 1;
correspondingly calculating highlight proportion of highlight pixels of each frame of image in each frame of image respectively to obtain highlight proportion vectors of the multi-frame image;
and acquiring a characteristic value of the image highlight according to the highlight proportion vector.
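The per-frame highlight proportion is a few lines; the coefficient value 1.5 below is illustrative, the patent only requires it to exceed 1:

```python
import numpy as np

def highlight_ratio(gray, coeff=1.5):
    """Fraction of highlight pixels: intensity above coeff * median intensity."""
    threshold = float(np.median(gray)) * coeff
    return float((gray > threshold).mean())
```

Using the median rather than the mean as the reference keeps the threshold stable even when a few saturated pixels are present.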
In a preferred embodiment, the two feature value groups include a first feature value group and a second feature value group, and the forming of the two feature value groups based on the feature values of the preset features includes:
forming the first feature value group based on at least one of the feature value of the color-changing ink, the feature value of the dynamic printing and the feature value of the feature block, combined with the feature value of the image highlight, wherein the first feature value group is used for calculating the video identification score;
forming the second feature value group based on at least one of the feature value of the image sharpness and the feature value of the image highlight, combined with the feature value of the feature block, wherein the second feature value group is used for calculating the score confidence.
In a preferred embodiment, the feature weight parameters of the two regression models are preset or are obtained by training in a machine learning method.
In a preferred embodiment, the regression function of each of the two regression models is linear regression, logistic regression, tree model or neural network.
In a preferred embodiment, the training comprises:
obtaining a plurality of marked sample videos, wherein the sample videos comprise videos of real anti-counterfeiting products and videos of imitated anti-counterfeiting products;
for each sample video in the plurality of sample videos, extracting feature values of sample features from the sample video, and forming two sample feature value groups based on the feature values of the sample features;
and correspondingly inputting the two sample characteristic value groups into the two regression models respectively for training to obtain respective characteristic weight parameters of the two regression models.
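Assuming a logistic-regression model (one of the regression functions named above), the training of one model's feature weight parameters could look like this plain gradient-descent sketch; the learning rate and epoch count are arbitrary illustrative choices:

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, epochs=500):
    """Fit logistic-regression weights w and bias b on labelled sample
    feature groups X (genuine = 1, imitation = 0) by gradient descent."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted scores
        grad = p - y                            # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b
```

Each of the two regression models would be fitted independently on its own sample feature value group, yielding the two feature weight parameter sets.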
In a preferred embodiment, the method further comprises:
if the video identification score and the score confidence coefficient meet a second preset threshold condition, determining the anti-counterfeiting product as a counterfeit product;
if the video identification score and the score confidence coefficient meet a third preset threshold condition, sending the video to a preset terminal so that the video is transferred to a manual review stage;
and if the video identification score and the score confidence coefficient meet a fourth preset threshold condition, sending the video to a preset terminal according to a preset probability so that the video is transferred to a manual review stage, otherwise, sending prompt information to the terminal so as to prompt the terminal to acquire the video of the anti-counterfeiting product again.
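The four threshold conditions can be read as a routing function over the (score, confidence) pair. The sketch below is hypothetical: only the 0.7/0.5 pair appears in the description, and every other cut-off here is invented for illustration:

```python
def route(score, confidence):
    """Hypothetical routing over the four preset threshold conditions."""
    if score > 0.7 and confidence > 0.5:
        return "genuine"                    # first condition
    if score < 0.3 and confidence > 0.5:
        return "counterfeit"                # second condition (assumed cut-off)
    if confidence > 0.5:
        return "manual-review"              # mid-range score, confident model
    return "recapture-or-sample-review"     # low confidence: prompt re-capture
```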
In a preferred embodiment, the method further comprises:
and obtaining an identification result which is returned by the preset terminal and is subjected to manual review, and optimizing the characteristic weight parameters of the two regression models based on the identification result.
In a second aspect, there is provided an anti-counterfeiting identification device, the device comprising:
the receiving module is used for receiving the video of the anti-counterfeiting product uploaded by the terminal;
the first extraction module is used for extracting, from the video, multiple frames of images that each contain the anti-counterfeiting product;
the second extraction module is used for extracting feature values of preset features from the multi-frame images;
the grouping module is used for forming two characteristic value groups based on the characteristic values of the preset characteristics;
the prediction module is used for correspondingly inputting the two characteristic value groups into two preset regression models respectively to obtain a video identification score and a score confidence coefficient, wherein the video identification score is used for representing the probability that the anti-counterfeiting product is a genuine product, and the score confidence coefficient is used for representing the reliability of the video identification score;
and the identification module is used for judging whether the video identification score and the score confidence coefficient meet a first preset threshold condition or not, and if so, determining that the anti-counterfeiting product is a genuine product.
In a third aspect, a computer device is provided, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of the first aspect.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
The embodiments of the invention provide an anti-counterfeiting identification method and device, computer equipment and a storage medium. Multiple frames of images that each contain the anti-counterfeiting product are extracted from a video of the product received from the terminal; feature values of preset features are extracted from the multi-frame images and formed into two feature value groups; the two feature value groups are input into two preset regression models to obtain a video identification score, representing the probability that the anti-counterfeiting product is genuine, and a score confidence, representing the reliability of the video identification score; and the anti-counterfeiting product is determined to be genuine when the video identification score and the score confidence satisfy a first preset threshold condition. With the technical scheme provided by the invention, the authenticity of anti-counterfeiting products such as certificates and bank notes can be identified through simple interaction between the terminal and the server, without complex identification equipment, and the identification is accurate and reliable; in addition, the scheme is highly extensible and can meet anti-counterfeiting requirements in various scenarios.
Drawings
In order to illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an application environment of an anti-counterfeit authentication method provided by an embodiment of the invention;
FIG. 2 is a schematic flow chart of an anti-counterfeit authentication method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of preset features of a credential video provided by an embodiment of the invention;
FIG. 4a is a schematic view of a frame of a credential image provided by an embodiment of the present invention;
FIG. 4b is a diagram illustrating the detection and correction results of a document according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a color-changing ink region and a background region provided by an embodiment of the invention;
fig. 6 is a block diagram of an anti-counterfeit authentication device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description of the present invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In addition, in the description of the present invention, "a plurality" means two or more unless otherwise specified.
Fig. 1 is a schematic diagram of the application environment of the anti-counterfeiting identification method provided by an embodiment of the invention. As shown in fig. 1, a first terminal 102 and a second terminal 106 each communicate with a server 104 through a network. The first terminal 102 is configured to upload a video of the anti-counterfeiting product to the server 104; the server 104 is configured to receive the video uploaded by the first terminal 102 and perform feature extraction on it to identify the authenticity of the anti-counterfeiting product; and the second terminal 106 is configured to manually review the anti-counterfeiting product in the video when the server 104 cannot determine its authenticity, and to return the manual review result to the server 104. The first terminal 102 may be an electronic device with a built-in or external video capture module, such as, but not limited to, a personal computer, notebook computer, smart phone or tablet computer; the second terminal 106 may likewise be, but is not limited to, a personal computer, notebook computer, smart phone or tablet computer; and the server 104 may be implemented as an independent server or as a server cluster formed by a plurality of servers.
It should be noted that the anti-counterfeiting identification method provided by the invention can be applied to the authenticity identification of identity cards, other anti-counterfeiting certificates, bank notes, credit cards, bank securities and anti-counterfeiting labels. For example, when an insurance account is opened, the insurance company needs the user to provide identity card information and to verify that the user's identity card is real; likewise, for a network payment, the consumer needs to provide credit card information and the credit card must be verified as real. The embodiments of the invention do not limit the specific application scenarios.
The identification method provided by the embodiments of the invention is explained below by taking a 2003 version Hong Kong identity card as the anti-counterfeiting product.
In one embodiment, as shown in fig. 2, an anti-counterfeiting identification method is provided, the method comprising:
step 201, receiving the video of the anti-counterfeiting product uploaded by the terminal.
Specifically, the server receives the video of the anti-counterfeiting product uploaded by the terminal.
Illustratively, when a user opens an account (such as an insurance account) on a client of a terminal (i.e., a first terminal in fig. 1), the client informs the user that a video of an identity card needs to be acquired, and provides a video acquisition operation description of the identity card to the user in the form of a text or video example, so that the user performs video acquisition on the identity card according to the video acquisition operation description, and after the video acquisition is completed, the client sends the video of the identity card to a server.
In this embodiment, so that the video of the anti-counterfeiting product changes dynamically and the anti-counterfeiting features can be accurately extracted from it, the video acquisition operation description may include requirement information indicating the rotation direction and/or rotation angle of the anti-counterfeiting product. For example, while taking a video of an identity card with a mobile phone, the user may be asked to rotate the card upwards and then downwards (or leftwards and then rightwards, etc.); during this process the user is not required to align the card precisely, and no strict background or ambient-light conditions are imposed.
Step 202, extracting multiple frames of images which all contain anti-counterfeiting products from the video.
Specifically, the server may extract the detected image containing the anti-counterfeit article by detecting the anti-counterfeit article in the video. The detection method includes, but is not limited to, a deep learning based method, an edge detection based method, and the like.
The anti-counterfeiting identification method can be used for identifying the authenticity of anti-counterfeiting products in specific application scenes, such as identity cards, and can also be applied to the authenticity identification of anti-counterfeiting products in different application scenes, such as identity cards, credit cards and the like.
If the anti-counterfeiting identification method is to handle the authenticity identification of anti-counterfeiting products in different application scenarios, then after extracting the multi-frame images containing the anti-counterfeiting product, the server may also recognize the category of the product to be identified, i.e. whether it is an identity card, a credit card, a bank note or another type of anti-counterfeiting product, and then extract the preset features corresponding to the recognized category from the multi-frame images. The server can recognize the category with a preset recognition method based either on deep learning or on traditional features: the deep-learning approach takes the original image directly as input and trains a convolutional neural network (CNN) to classify it, while the traditional approach extracts image features (such as SIFT) from a region and then classifies them with a classifier (such as an SVM).
Step 203, extracting feature values of preset features from the multi-frame image, and forming two feature value groups based on the feature values of the preset features.
In this embodiment, the preset feature is a feature that is determined in advance according to the anti-counterfeit feature of the anti-counterfeit product and the characteristics of the anti-counterfeit product itself, and that can be extracted by a video acquisition method. Different classes of security articles may correspond to the same or different predetermined characteristics, for example, the predetermined characteristics corresponding to an identification card (e.g., a 2003 version of hong kong identification card) may include, but are not limited to, color shifting ink, motion print, feature blocks, image sharpness, and image highlights. The predetermined features corresponding to the banknote (e.g., the 2015 RMB) may include, but are not limited to, portrait watermark, horizontal and vertical double number, image sharpness, and image highlights. Wherein, the corresponding relation between the anti-counterfeiting product and the preset characteristics is stored in the server in advance.
Here, color-changing ink means ink on the anti-counterfeiting product whose color changes with the viewing angle. Dynamic printing means that a security character printed on the product appears as one letter at certain angles and as another letter at other angles; for example, the security character on a 2003 version Hong Kong identity card appears as H at some angles and as K at others. The feature block refers to the chip block in the anti-counterfeiting product. Image sharpness refers to whether the edges and other details in each frame containing the anti-counterfeiting product are clear, and image highlight refers to whether strong highlights in the image hinder identification of the product. Taking the 2003 version Hong Kong identity card as an example, fig. 3 is a schematic diagram of the preset features of a certificate video provided by an embodiment of the invention; the preset features pointed to by arrows a, b, c, d and e are the color-changing ink, dynamic printing, feature block, image sharpness and image highlight, respectively.
Specifically, the server may determine the preset feature corresponding to the anti-counterfeit product according to the corresponding relationship between the anti-counterfeit product and the preset feature, and extract a feature value of the preset feature from the multi-frame image including the anti-counterfeit product. For example, if the anti-counterfeit product is an identification card, the characteristic value of the preset characteristic extracted by the server is as follows: the characteristic value of the color-changing ink, the characteristic value of the dynamic printing, the characteristic value of the characteristic block, the characteristic value of the image definition and the characteristic value of the image highlight.
The server divides the extracted feature values of the preset features into two groups according to a preset grouping scheme to obtain the two feature value groups, each of which may contain the feature values of one or more preset features. For example, one group may contain the feature values of the color-changing ink and of the dynamic printing, used to calculate the video identification score, while the other contains the feature values of image sharpness and image highlight, used to calculate the score confidence of that score; the specific grouping scheme is not limited by the embodiments of the invention.
And 204, correspondingly inputting the two characteristic value groups into two preset regression models respectively to obtain a video identification score and a score confidence coefficient, wherein the video identification score is used for representing the probability that the anti-counterfeiting product is a genuine product, and the score confidence coefficient is used for representing the reliability of the video identification score.
Specifically, the server performs weighted calculation on the feature values in the two feature value groups according to the feature weight parameters and the regression functions of the two regression models to obtain a video identification score and a score confidence. That is, the two feature value groups x_1 and x_2 are input into two regression models f(·) and g(·), whose outputs are the video identification score y = f(x_1, w_1) and the score confidence z = g(x_2, w_2), where w_1 and w_2 are the feature weight parameters of the two models.
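As a minimal, non-authoritative sketch, suppose both regression models are logistic regressions (one of the regression forms the text permits); the function name and the sigmoid squashing are illustrative assumptions, not the patent's exact implementation:

```python
import numpy as np

def predict_scores(x1, x2, w1, w2):
    """Hypothetical sketch: y = f(x1, w1) is the video identification
    score, z = g(x2, w2) the score confidence. Here f and g are logistic
    regressions; a bias term can be folded into the features."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    y = float(sigmoid(np.dot(w1, x1)))  # probability the product is genuine
    z = float(sigmoid(np.dot(w2, x2)))  # reliability of that score
    return y, z
```

With zero weights both outputs are 0.5; in practice w_1 and w_2 would be preset or trained as described later in the text.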
The feature weight parameters of the two regression models may be preset, that is, for all the features in each feature value group, weights are assigned in advance according to the importance of each feature; alternatively, the feature weight parameters may be obtained by training in advance using a machine learning method. The regression function of each of the two regression models may be a linear regression, a logistic regression, a tree model, or a neural network.
Step 205, judging whether the video identification score and the score confidence coefficient meet a first preset threshold condition, and if so, determining that the anti-counterfeiting product is a genuine product.
Wherein different threshold conditions are preset for identifying the authenticity of the anti-counterfeiting product. The first preset threshold condition can be set to be that when the video identification score is greater than the first threshold and the score confidence degree is greater than the second threshold, the anti-counterfeiting product is a genuine product. The first threshold and the second threshold may be set according to actual needs, for example, the first threshold is set to 0.7, and the second threshold is set to 0.5.
The anti-counterfeiting identification method provided by the embodiment of the invention comprises: extracting multiple frames of images each containing the anti-counterfeiting product from the video uploaded by the terminal; extracting feature values of preset features from the multiple frames of images and forming two feature value groups based on those feature values; correspondingly inputting the two feature value groups into two preset regression models to obtain a video identification score and a score confidence, where the video identification score represents the probability that the anti-counterfeiting product is genuine and the score confidence represents the reliability of the video identification score; and determining the anti-counterfeiting product to be genuine when the video identification score and the score confidence meet a first preset threshold condition. According to the technical scheme provided by the invention, authenticity identification of anti-counterfeiting products such as certificates and banknotes can be realized through simple interaction between the terminal and the server, without complex identification equipment, and the identification is accurate and reliable; in addition, the technical scheme has strong expandability and can meet anti-counterfeiting requirements in various scenarios.
In one embodiment, the step of extracting multiple frames of images each containing a security article from the video may include:
sampling a video to obtain an image sequence with a fixed frame number, detecting an anti-counterfeiting product in each frame of image of the image sequence, and extracting the image with the anti-counterfeiting product detected to obtain a plurality of frame images.
The video sampling may use uniform sampling to obtain an image sequence with a fixed number of frames. For example, M = 60 frames are extracted from the video by uniform sampling to form the image sequence. It is to be understood that the video sampling may also adopt other methods in the prior art, such as key frame extraction, which is not specifically limited by the embodiment of the present invention.
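A one-line sketch of uniform sampling (the function name is assumed; real code would additionally read the selected frames with a video library such as OpenCV):

```python
import numpy as np

def uniform_sample_indices(total_frames, m=60):
    """Indices of m frames spread evenly across a video with
    total_frames frames (first and last frame included)."""
    return np.linspace(0, total_frames - 1, num=m).astype(int).tolist()
```
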
The detection method can realize the detection of the anti-counterfeiting product in each frame of image by using Scale-invariant feature transform (SIFT). It is understood that the detection of the anti-counterfeit article may also adopt other methods in the prior art, such as a method based on deep learning, a method based on edge detection, and the like, and the embodiment of the present invention is not limited in this respect.
Optionally, in order to facilitate subsequent more accurate and rapid extraction of the feature value of the preset feature from the image containing the anti-counterfeit product, after the step of extracting the image in which the anti-counterfeit product is detected to obtain multiple frames of images, the method may further include:
and carrying out correction processing on the anti-counterfeiting product in each extracted frame image to obtain a plurality of frame images of the corrected anti-counterfeiting product. Among other things, the forward process may use a scale-invariant feature transform. In addition, other methods in the prior art, such as a deep learning-based method, can be adopted to correct the anti-counterfeiting product. As shown in fig. 4a and 4b, fig. 4a is a schematic diagram of a frame of certificate image provided in an embodiment of the present invention, and fig. 4b is a schematic diagram of a result of certificate detection and correction provided in an embodiment of the present invention, and the result shown in fig. 4b can be obtained by detecting and correcting the frame of certificate image shown in fig. 4 a.
In one embodiment, where the anti-counterfeiting product is a certificate, the preset features comprise a first feature and a second feature, the first feature comprising at least one of the color-changing ink, the dynamic printing, and the feature block, and the second feature comprising at least one of the image definition and the image highlight. In order to identify the anti-counterfeiting product more accurately, for example for identity card authenticity identification, the feature values of five features, namely the color-changing ink, the dynamic printing, the feature block, the image definition, and the image highlight, can be extracted from the multiple frames of images containing the identity card.
In one embodiment, the above-mentioned process for extracting the characteristic value of the color-changing ink may include:
respectively extracting first region subgraphs where color-changing ink is located from each frame image in the multi-frame images, and segmenting the color-changing ink and the background from each first region subgraph; calculating the normalized color of the color-changing ink in each first region subgraph according to the color mean value of the color-changing ink in each first region subgraph and the color mean value of the background so as to obtain a normalized color matrix of the color-changing ink in the multi-frame image; calculating an included angle matrix meeting preset conditions according to the normalized color matrix of the color-changing ink; and obtaining the characteristic value of the color-changing ink according to the included angle matrix.
Specifically, for each frame of image k, extracting a regional subgraph where the color-changing ink is located, and dividing a color-changing ink part and a background part. The color-changing ink and the background can be segmented from each first region subgraph by using a threshold-based segmentation method, and other methods in the prior art, such as a region-based segmentation method, an edge-based segmentation method, and the like, can also be used. As shown in fig. 5, fig. 5 is a schematic diagram of a color-changing ink region and a background region according to an embodiment of the present invention, in the subgraph of the region where the color-changing ink is located shown in fig. 5, an arrow F indicates the color-changing ink region, and an arrow B indicates the background region.
In this embodiment, for the first region subgraph in each frame image k, the color mean c_k^F of the color-changing ink part and the color mean c_k^B of the background part are first calculated, and the normalized color n_k = c_k^F − c_k^B is computed, so as to obtain the normalized color matrix N = (n_1, n_2, ..., n_k, ...) of the color-changing ink parts of all the images. An included angle matrix A meeting the preset condition is then calculated from the normalized color matrix, the element in the i-th row and j-th column of which is the included angle between the normalized colors of frames i and j, i.e. A_ij = arccos(n_i·n_j / (‖n_i‖ ‖n_j‖)). The maximum included angle a_max = max(A) in the included angle matrix A is taken as the characteristic value of the color-changing ink.
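A sketch of the color-changing ink feature, under two stated assumptions (the patent's original formula images are not reproduced in this text): the normalized color of frame k is taken as the ink-region color mean minus the background color mean, and the included angle is the ordinary angle between those vectors:

```python
import numpy as np

def color_ink_feature(ink_means, bg_means):
    """ink_means, bg_means: (K, 3) arrays of per-frame RGB means for the
    ink region and the background region. Returns a_max, the maximum
    pairwise angle (radians) between per-frame normalized ink colors."""
    n = ink_means - bg_means                    # assumed normalization
    norms = np.linalg.norm(n, axis=1, keepdims=True)
    u = n / np.clip(norms, 1e-12, None)         # unit direction per frame
    cos = np.clip(u @ u.T, -1.0, 1.0)           # pairwise cosines
    A = np.arccos(cos)                          # included-angle matrix
    return float(A.max())                       # a_max = max(A)
```

Two frames with orthogonal normalized ink colors give a_max = π/2; genuine color-changing ink filmed across viewing angles should yield a large a_max, while static print yields angles near zero.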
In one embodiment, the process of extracting the feature value of the dynamic printing may include:
initializing the occurrence times of the first preset character image and the second preset character image to be zero values; respectively extracting second regional subgraphs where dynamic printing is located from each frame image in the multi-frame images; matching each extracted second region sub-image with a first preset character image and a second preset character image respectively, and calculating a first similarity and a second similarity corresponding to each second region sub-image; counting the occurrence times of the first preset character image and the second preset character image in the multi-frame image according to the first similarity and the second similarity corresponding to each second region subgraph; and the counted occurrence times of the first preset character image and the second preset character image in the multi-frame image are jointly used as the characteristic value of the dynamic printing.
The first preset character image and the second preset character image are images which are respectively displayed by one anti-counterfeiting character on the anti-counterfeiting product at different angles.
Illustratively, the extraction of the feature value of the dynamic printing is explained with the 2003-version Hong Kong identity card, the left security letter of which appears as H at some angles and as K at others. The occurrence counts of the letters are initialized as C_K = 0 and C_H = 0. For each frame image k, the subimage of the dynamic printing part is extracted and matched against the pre-made subimages of the letters H and K, and the similarities (V_K, V_H) are calculated. When V_K ≥ 0.8·V_H and V_K ≥ 0.7, the letter K is considered to be captured, i.e. C_K is increased by 1; similarly, when V_H ≥ 0.8·V_K and V_H ≥ 0.7, C_H is increased by 1. The pair (C_K, C_H) is used as the characteristic value of the dynamic printing.
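The counting rule above (with the illustrative thresholds 0.8 and 0.7) can be sketched as follows; the function name and the per-frame similarity inputs are assumptions, since the text leaves the template-matching method open:

```python
def dynamic_print_feature(similarities, ratio=0.8, floor=0.7):
    """similarities: per-frame (V_K, V_H) match scores of the dynamic
    printing subimage against the K and H template images. A letter is
    counted when its score is at least `ratio` times the other letter's
    score and at least `floor` in absolute terms."""
    C_K = C_H = 0
    for V_K, V_H in similarities:
        if V_K >= ratio * V_H and V_K >= floor:
            C_K += 1
        if V_H >= ratio * V_K and V_H >= floor:
            C_H += 1
    return C_K, C_H
```
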
In one embodiment, the above-mentioned process of extracting feature values of the feature block may include:
respectively extracting a third region subgraph where the feature block is located from each frame image in the multiple frame images; matching the extracted third region sub-images with preset feature block images, and calculating feature block similarity corresponding to the third region sub-images to obtain feature block similarity vectors corresponding to multiple frames of images; and acquiring the characteristic value of the characteristic block according to the characteristic block similarity vector.
Specifically, for each frame image k, the subimage where the feature block part is located is extracted and matched with the pre-made feature block subimage, and the similarity s_k is calculated, giving the feature block similarity vector s = (s_1, s_2, ..., s_k, ...)^T of all the images. The maximum value s_max = max(s) is taken as the characteristic value of the feature block.
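The text does not fix a similarity measure for matching the feature block subimage against the pre-made template; normalized cross-correlation is one plausible choice. A minimal sketch with a hypothetical function name:

```python
import numpy as np

def feature_block_feature(frames, template):
    """frames: 2-D grayscale subimages of the feature-block region, each
    the same shape as `template`. Per-frame similarity s_k is normalized
    cross-correlation; the feature value is s_max = max(s)."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom else 0.0
    return max(ncc(f.astype(float), template.astype(float)) for f in frames)
```

In production code, cv2.matchTemplate with the TM_CCOEFF_NORMED mode computes the same quantity efficiently.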
In one embodiment, the above process of extracting the feature value of the image sharpness may include:
carrying out graying processing on the multi-frame images respectively; for each frame of image in the grayed multi-frame image, respectively calculating gradient images of the image along the x direction and the y direction by using a Sobel operator, calculating the square sum of the gradients of each pixel in the image in the x direction and the y direction, and averaging to obtain the definition of the image; obtaining definition vectors of multiple frames of images according to the definition of each frame of image; and acquiring the feature value of the image definition according to the definition vector.
Specifically, each frame image k is first converted into a grayscale image; the gradient images along the x direction and the y direction are calculated with the Sobel operator, and the sum of the squared gradients of each pixel in the x and y directions is averaged to obtain the definition d_k, giving the definition vector d = (d_1, d_2, ..., d_k, ...)^T of all the images. The median d_med = median(d) is taken as the characteristic value of the image definition.
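A pure-NumPy sketch of this definition feature (in practice one would typically call cv2.Sobel; the 3×3 kernels and interior-only correlation here are implementation choices, and the function names are assumed):

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sharpness(gray):
    """Mean squared Sobel gradient magnitude over the image interior."""
    def conv(img, k):
        h, w = img.shape
        out = np.zeros((h - 2, w - 2))
        for i in range(3):
            for j in range(3):
                out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
        return out
    gx, gy = conv(gray, SOBEL_X), conv(gray, SOBEL_Y)
    return float((gx ** 2 + gy ** 2).mean())

def sharpness_feature(gray_frames):
    """d_med: median per-frame definition over all sampled frames."""
    return float(np.median([sharpness(f) for f in gray_frames]))
```
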
In one embodiment, the above process of extracting feature values of image highlights may include:
carrying out graying processing on the multi-frame images respectively; aiming at each frame of image in the grayed multi-frame image, calculating an intensity median of the image, and determining a pixel with the pixel intensity exceeding a high light intensity threshold as a high light pixel of the image, wherein the high light intensity threshold is the product of the intensity median of the image and a preset coefficient, and the preset coefficient is greater than 1; respectively and correspondingly calculating highlight proportion of highlight pixels of each frame of image in each frame of image to obtain highlight proportion vectors of multiple frames of images; and acquiring the characteristic value of the image highlight according to the highlight proportion vector.
Specifically, each frame image k is first converted into a grayscale image, and its intensity median o_med is calculated; a pixel whose intensity o satisfies o > η·o_med is defined as a highlight pixel, where η = 1.4 in this embodiment. The proportion h_k of highlight pixels in the frame image is calculated, giving the highlight proportion vector h = (h_1, h_2, ..., h_k, ...)^T of all the images. The median h_med = median(h) is taken as the characteristic value of the image highlight.
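The highlight feature above, with η = 1.4 as in this embodiment, can be sketched as (function name assumed):

```python
import numpy as np

def highlight_feature(gray_frames, eta=1.4):
    """Per frame: pixels brighter than eta times the frame's median
    intensity count as highlight pixels; h_k is their proportion in the
    frame, and the feature value is h_med = median(h)."""
    h = []
    for g in gray_frames:
        o_med = np.median(g)
        h.append(float((g > eta * o_med).mean()))
    return float(np.median(h))
```
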
In one embodiment, the two feature value sets include a first feature value set and a second feature value set, and the step of forming the two feature value sets based on the feature values of the preset features may include:
forming a first characteristic value group based on at least one of the characteristic value of the color-changing ink, the characteristic value of the dynamic printing and the characteristic value of the characteristic block and combining the characteristic value of highlight of the image, wherein the first characteristic value group is used for calculating a video identification score; and forming a second characteristic value group based on at least one of the characteristic value of the image definition and the characteristic value of the image highlight and combining the characteristic values of the characteristic blocks, wherein the second characteristic value group is used for calculating the score confidence.
In this embodiment, in order to more accurately perform authenticity identification of the anti-counterfeiting product, such as identity card authenticity identification, the feature values of four features, namely the color-changing ink, the dynamic printing, the feature block matching, and the highlight detection, are selected, giving a 5-dimensional feature vector for calculating the identification score, i.e. the first feature value group x_1 = (a_max, C_K, C_H, s_max, h_med)^T; and the feature values of three features, namely the feature block matching, the image definition, and the highlight detection, are selected, giving a 3-dimensional feature vector for calculating the score confidence, i.e. the second feature value group x_2 = (s_max, d_med, h_med)^T.
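Assembling the two groups is then a matter of stacking the scalar features (a trivial sketch; the argument names mirror the notation above and the function name is assumed):

```python
import numpy as np

def build_feature_groups(a_max, C_K, C_H, s_max, d_med, h_med):
    """x1 (5-D) feeds the identification-score model; x2 (3-D) feeds
    the score-confidence model."""
    x1 = np.array([a_max, C_K, C_H, s_max, h_med], dtype=float)
    x2 = np.array([s_max, d_med, h_med], dtype=float)
    return x1, x2
```
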
In one embodiment, the training process of the two regression models may include:
obtaining a plurality of marked sample videos, wherein the sample videos comprise videos of real anti-counterfeiting products and videos of imitated anti-counterfeiting products, extracting characteristic values of sample characteristics from the sample videos aiming at each sample video of the sample videos, and forming two sample characteristic value groups based on the characteristic values of the sample characteristics; and correspondingly inputting the two sample characteristic value groups into the two regression models respectively for training to obtain respective characteristic weight parameters of the two regression models.
Wherein the two regression models include an identification score model and a score confidence model.
Illustratively, N = 30 certificate videos are captured, including videos of authentic and counterfeit certificates (e.g., prints or copies of certificates); a high-quality video is one captured with clear images and few highlights, and a low-quality video is one in which the images are blurry or a large range of highlights frequently appears. The score confidence of a low-quality video is assigned 0, and such videos are not used to train the identification score model; the score confidence of a high-quality video is assigned 1, the identification score of an authentic certificate is assigned 1, and the identification score of a counterfeit certificate is assigned 0. Features are extracted from the N videos, and the identification score model and the score confidence model are trained to obtain the initial parameters w.
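A minimal training sketch under the assumption that both models are logistic regressions (only one of the regression forms the text allows); the gradient-descent trainer and its hyperparameters are illustrative, not the patent's exact procedure:

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=2000):
    """Gradient-descent logistic regression; returns the weight vector w.
    The identification-score model would be fitted on high-quality videos
    only (label 1 = authentic, 0 = counterfeit); the confidence model on
    all videos (label 1 = high quality, 0 = low quality)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # current predictions
        w -= lr * X.T @ (p - y) / len(y)        # mean log-loss gradient
    return w
```
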
In one embodiment, the method further comprises:
If the video identification score and the score confidence meet a second preset threshold condition, the anti-counterfeiting product is determined to be a counterfeit; if they meet a third preset threshold condition, the video is sent to a preset terminal so that the video is transferred to a manual review stage; if they meet a fourth preset threshold condition, the video is sent to the preset terminal with a preset probability so that the video is transferred to the manual review stage; otherwise, prompt information is sent to the terminal to prompt the terminal to re-collect a video of the anti-counterfeiting product.
The second preset threshold condition can be set to be that when the video identification score is smaller than the third threshold and the score confidence coefficient is larger than the second threshold, the anti-counterfeiting product is a counterfeit product. The third threshold value can be set according to actual needs, for example, the third threshold value is set to 0.4.
The third preset threshold condition can be set to be that when the identification score of the video is between the third threshold and the first threshold and the confidence coefficient of the score is greater than the second threshold, the authenticity of the anti-counterfeiting product cannot be accurately judged at the moment, and the video can be sent to a preset terminal so that the video is transferred to a manual checking stage.
The fourth preset threshold condition can be set such that the score confidence is smaller than the second threshold; in this case the video is sent to the preset terminal with probability P = 0.1 so that the video is transferred to the manual review stage, and otherwise prompt information is sent to the terminal to re-collect the video.
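Putting the four threshold conditions together (thresholds 0.7, 0.5, 0.4 and P = 0.1 from the examples above; the boundary handling and function name are assumptions):

```python
import random

def route_video(y, z, t1=0.7, t2=0.5, t3=0.4, p=0.1, rng=random.random):
    """Route a video by identification score y and score confidence z."""
    if z > t2:                      # score is trustworthy
        if y > t1:
            return "genuine"        # first preset threshold condition
        if y < t3:
            return "counterfeit"    # second preset threshold condition
        return "manual_review"      # third: t3 <= y <= t1
    # fourth: confidence too low -> sample into manual review, else recapture
    return "manual_review" if rng() < p else "recapture"
```
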
In one embodiment, the method further comprises:
and obtaining an identification result which is returned by a preset terminal and is subjected to manual review, and optimizing the respective characteristic weight parameters of the two regression models based on the identification result.
In this embodiment, when a video enters manual review, the weight update Δw of the model may be calculated from the manually labeled tag, and the new model weight w_new = w − λ·Δw is obtained, where the learning rate λ is preferably 0.001. When manual review is not required, the server directly waits for a new video to be uploaded.
In one embodiment, as shown in FIG. 6, there is provided an authentication device comprising:
the receiving module 61 is used for receiving the video of the anti-counterfeiting product uploaded by the terminal;
an extraction module 62, configured to extract multiple frames of images each including an anti-counterfeit product from the video;
an extracting module 63, configured to extract a feature value of a preset feature from a multi-frame image;
a grouping module 64, configured to form two feature value groups based on feature values of preset features;
the prediction module 65 is configured to correspondingly input the two feature value sets into two preset regression models respectively to obtain a video identification score and a score confidence, where the video identification score is used to represent a probability that the anti-counterfeit product is a genuine product, and the score confidence is used to represent a reliability of the video identification score;
and the identification module 66 is used for judging whether the video identification score and the score confidence coefficient meet a first preset threshold condition, and if so, determining that the anti-counterfeiting product is a genuine product.
In a preferred embodiment, the extraction module 62 is specifically configured to:
sampling a video to obtain an image sequence with a fixed frame number;
and detecting the anti-counterfeiting product in each frame of image of the image sequence, and extracting the image of the detected anti-counterfeiting product to obtain a plurality of frames of images.
In a preferred embodiment, where the anti-counterfeiting product is a certificate, the preset features comprise a first feature and a second feature, the first feature comprising at least one of the color-changing ink, the dynamic printing, and the feature block, and the second feature comprising at least one of the image definition and the image highlight.
In a preferred embodiment, the extraction module 63 is specifically configured to:
respectively extracting first region subgraphs where color-changing ink is located from each frame image in the multi-frame images, and segmenting the color-changing ink and the background from each first region subgraph;
calculating the normalized color of the color-changing ink in each first region subgraph according to the color mean value of the color-changing ink in each first region subgraph and the color mean value of the background so as to obtain a normalized color matrix of the color-changing ink in the multi-frame image;
calculating an included angle matrix meeting preset conditions according to the normalized color matrix of the color-changing ink;
and obtaining the characteristic value of the color-changing ink according to the included angle matrix.
In a preferred embodiment, the extraction module 63 is specifically configured to:
initializing the occurrence times of the first preset character image and the second preset character image to be zero values;
respectively extracting second regional subgraphs where dynamic printing is located from each frame image in the multi-frame images;
matching each extracted second region sub-image with a first preset character image and a second preset character image respectively, and calculating a first similarity and a second similarity corresponding to each second region sub-image;
counting the occurrence times of the first preset character image and the second preset character image in the multi-frame image according to the first similarity and the second similarity corresponding to each second region subgraph;
and the counted occurrence times of the first preset character image and the second preset character image in the multi-frame image are jointly used as the characteristic value of the dynamic printing.
In a preferred embodiment, the extraction module 63 is specifically configured to:
respectively extracting a third region subgraph where the feature block is located from each frame image in the multiple frame images;
matching the extracted third region sub-images with preset feature block images, and calculating feature block similarity corresponding to the third region sub-images to obtain feature block similarity vectors corresponding to multiple frames of images;
and acquiring the characteristic value of the characteristic block according to the characteristic block similarity vector.
In a preferred embodiment, the extraction module 63 is specifically configured to:
carrying out graying processing on the multi-frame images respectively;
for each frame of image in the grayed multi-frame image, respectively calculating gradient images of the image along the x direction and the y direction by using a Sobel operator, calculating the square sum of the gradients of each pixel in the image in the x direction and the y direction, and averaging to obtain the definition of the image;
obtaining definition vectors of multiple frames of images according to the definition of each frame of image;
and acquiring the feature value of the image definition according to the definition vector.
In a preferred embodiment, the extraction module 63 is specifically configured to:
carrying out graying processing on the multi-frame images respectively;
aiming at each frame of image in the grayed multi-frame image, calculating an intensity median of the image, and determining a pixel with the pixel intensity exceeding a high light intensity threshold as a high light pixel of the image, wherein the high light intensity threshold is the product of the intensity median of the image and a preset coefficient, and the preset coefficient is greater than 1;
respectively and correspondingly calculating highlight proportion of highlight pixels of each frame of image in each frame of image to obtain highlight proportion vectors of multiple frames of images;
and acquiring the characteristic value of the image highlight according to the highlight proportion vector.
In a preferred embodiment, the two feature value sets include a first feature value set and a second feature value set, and grouping module 64 is specifically configured to:
forming a first characteristic value group based on at least one of the characteristic value of the color-changing ink, the characteristic value of the dynamic printing and the characteristic value of the characteristic block and combining the characteristic value of highlight of the image, wherein the first characteristic value group is used for calculating a video identification score;
and forming a second characteristic value group based on at least one of the characteristic value of the image definition and the characteristic value of the image highlight and combining the characteristic values of the characteristic blocks, wherein the second characteristic value group is used for calculating the score confidence.
In a preferred embodiment, the feature weight parameters of the two regression models are preset or are obtained by training in a machine learning method.
In a preferred embodiment, the regression function of each of the two regression models is linear regression, logistic regression, tree model or neural network.
In a preferred embodiment, the apparatus further comprises a training module 67, the training module 67 being specifically configured to:
obtaining a plurality of marked sample videos, wherein the plurality of sample videos comprise videos of real anti-counterfeiting products and videos of imitated anti-counterfeiting products;
for each sample video in the plurality of sample videos, extracting feature values of sample features from the sample video, and forming two sample feature value groups based on the feature values of the sample features;
and correspondingly inputting the two sample characteristic value groups into the two regression models respectively for training to obtain respective characteristic weight parameters of the two regression models.
In a preferred embodiment, the authentication module 66 is further configured to:
if the video identification score and the score confidence coefficient meet a second preset threshold condition, determining the anti-counterfeiting product as a counterfeit product;
the apparatus further comprises a sending module 68, wherein the sending module 68 is specifically configured to:
if the video identification score and the score confidence coefficient meet a third preset threshold condition, sending the video to a preset terminal so that the video is transferred to a manual review stage;
and if the video identification score and the score confidence coefficient meet a fourth preset threshold condition, sending the video to a preset terminal according to a preset probability so as to enable the video to be transferred to a manual checking stage, otherwise, sending prompt information to the terminal so as to prompt the terminal to acquire the video of the anti-counterfeiting product again.
In a preferred embodiment, the apparatus further comprises an optimization module 69, the optimization module 69 is specifically configured to:
and obtaining an identification result which is returned by a preset terminal and is subjected to manual review, and optimizing the respective characteristic weight parameters of the two regression models based on the identification result.
The anti-counterfeiting identification device provided by the embodiment belongs to the same inventive concept as the anti-counterfeiting identification method provided by the embodiment of the invention, can execute the anti-counterfeiting identification method provided by the embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the anti-counterfeiting identification method. For technical details that are not described in detail in this embodiment, reference may be made to the anti-counterfeit identification method provided in this embodiment of the present invention, and details are not described here.
In addition, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory;
a program stored in the memory, which when executed by the one or more processors, causes the processors to perform the steps of the anti-counterfeit authentication method of the embodiments described above.
Another embodiment of the present invention further provides a computer-readable storage medium, which stores a program, and when the program is executed by a processor, the processor executes the steps of the anti-counterfeit authentication method according to the above embodiment.
As will be appreciated by one of skill in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (17)

1. An anti-counterfeiting identification method, the method comprising:
receiving a video of an anti-counterfeiting product uploaded by a terminal;
extracting a plurality of frames from the video, each frame containing the anti-counterfeiting product;
extracting feature values of preset features from the plurality of frames, and forming two feature value groups based on the feature values of the preset features;
inputting the two feature value groups correspondingly into two pre-trained regression models to obtain a video identification score and a score confidence coefficient, wherein the video identification score represents the probability that the anti-counterfeiting product is genuine, and the score confidence coefficient represents the reliability of the video identification score;
and determining whether the video identification score and the score confidence coefficient satisfy a first preset threshold condition, and if so, determining that the anti-counterfeiting product is genuine.
2. The method of claim 1, wherein extracting, from the video, a plurality of frames each containing the anti-counterfeiting product comprises:
sampling the video to obtain an image sequence with a fixed number of frames;
and detecting the anti-counterfeiting product in each frame of the image sequence, and retaining the frames in which the anti-counterfeiting product is detected to obtain the plurality of frames.
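The sampling and detection steps of claim 2 can be sketched as follows. This is a minimal NumPy illustration: the even-spacing strategy and the `detector` callback are placeholders for whatever sampler and object detector an implementation actually uses, not anything fixed by the claim.

```python
import numpy as np

def sample_frames(num_video_frames: int, target_count: int) -> list:
    """Pick a fixed number of evenly spaced frame indices from a video."""
    if num_video_frames <= target_count:
        return list(range(num_video_frames))
    # Evenly spaced positions over the video, truncated to integer indices.
    return [int(i) for i in np.linspace(0, num_video_frames - 1, target_count)]

def keep_detected(frames, detector):
    """Keep only the frames in which the anti-counterfeiting product is
    detected, yielding the claimed plurality of frames."""
    return [f for f in frames if detector(f)]
```

For a 100-frame video sampled down to 5 frames, `sample_frames(100, 5)` yields indices spread across the whole clip, so the retained frames cover the full range of viewing angles.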
3. The method of claim 1, wherein the anti-counterfeiting product is a document, the preset features comprise a first feature and a second feature, the first feature comprises at least one of color-changing ink, dynamic printing, and a feature block, and the second feature comprises at least one of image sharpness and image highlight.
4. The method of claim 3, wherein the process of extracting the feature value of the color-changing ink comprises:
extracting, from each of the plurality of frames, a first region sub-image in which the color-changing ink is located, and segmenting the color-changing ink from the background in each first region sub-image;
calculating the normalized color of the color-changing ink in each first region sub-image from the color mean of the color-changing ink and the color mean of the background in that sub-image, so as to obtain a normalized color matrix of the color-changing ink over the plurality of frames;
calculating, from the normalized color matrix of the color-changing ink, an angle matrix satisfying a preset condition;
and obtaining the feature value of the color-changing ink from the angle matrix.
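One plausible realization of the normalized-color and angle-matrix computation in claim 4, assuming the ink and background have already been segmented and reduced to per-frame mean RGB vectors; taking the largest pairwise angle as the scalar feature is an illustrative choice, not one stated in the claim.

```python
import numpy as np

def normalized_ink_colors(ink_means, bg_means):
    """Divide each frame's mean ink colour by the background mean,
    compensating for illumination differences between frames."""
    return np.array(ink_means, float) / np.array(bg_means, float)

def angle_matrix(colors):
    """Pairwise angles (radians) between per-frame colour vectors;
    genuine colour-changing ink produces large angles as the viewing
    angle changes across the video."""
    unit = colors / np.linalg.norm(colors, axis=1, keepdims=True)
    cos = np.clip(unit @ unit.T, -1.0, 1.0)  # guard arccos domain
    return np.arccos(cos)

def ink_feature(ink_means, bg_means):
    """Illustrative scalar feature: the largest observed colour angle."""
    return float(angle_matrix(normalized_ink_colors(ink_means, bg_means)).max())
```

A printed (counterfeit) copy shows nearly the same colour in every frame, so all pairwise angles stay near zero, while genuine ink produces at least one large angle.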
5. The method of claim 3, wherein the process of extracting the feature value of the dynamic printing comprises:
initializing occurrence counts of a first preset character image and a second preset character image to zero;
extracting, from each of the plurality of frames, a second region sub-image in which the dynamic printing is located;
matching each extracted second region sub-image against the first preset character image and the second preset character image, and calculating a first similarity and a second similarity for each second region sub-image;
counting the occurrences of the first preset character image and the second preset character image across the plurality of frames according to the first similarity and the second similarity of each second region sub-image;
and taking the two occurrence counts together as the feature value of the dynamic printing.
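The matching-and-counting procedure of claim 5 might look like the following sketch; zero-mean normalized cross-correlation as the similarity measure and the 0.8 threshold are assumptions for illustration.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-size
    grayscale patches, in [-1, 1]; used here as the similarity."""
    a = np.array(a, float).ravel() - np.mean(a)
    b = np.array(b, float).ravel() - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def count_character_appearances(patches, char_a, char_b, thresh=0.8):
    """Count in how many second-region sub-images each preset character
    image appears; the pair of counts forms the dynamic-printing
    feature value."""
    count_a = count_b = 0  # the claimed zero initialization
    for p in patches:
        if ncc(p, char_a) >= thresh:
            count_a += 1
        if ncc(p, char_b) >= thresh:
            count_b += 1
    return count_a, count_b
```

Genuine dynamic printing alternates between the two characters as the document tilts, so both counts should be non-zero; a flat copy tends to match only one (or neither) character.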
6. The method of claim 3, wherein the process of extracting the feature value of the feature block comprises:
extracting, from each of the plurality of frames, a third region sub-image in which the feature block is located;
matching each extracted third region sub-image against a preset feature block image, and calculating a feature block similarity for each third region sub-image to obtain a feature block similarity vector for the plurality of frames;
and obtaining the feature value of the feature block from the feature block similarity vector.
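A sketch of the similarity-vector computation in claim 6; cosine similarity as the matching score and the mean as the final scalar are illustrative assumptions, since the claim fixes neither.

```python
import numpy as np

def block_similarity_vector(subimages, template):
    """Per-frame cosine similarity between each third-region sub-image
    and the preset feature block image."""
    t = np.array(template, float).ravel()
    t /= np.linalg.norm(t)
    sims = []
    for img in subimages:
        v = np.array(img, float).ravel()
        n = np.linalg.norm(v)
        sims.append(float(v @ t / n) if n else 0.0)
    return np.array(sims)

def block_feature(subimages, template):
    """Illustrative scalar feature: mean similarity across frames."""
    return float(block_similarity_vector(subimages, template).mean())
```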
7. The method of claim 3, wherein the process of extracting the feature value of the image sharpness comprises:
converting each of the plurality of frames to grayscale;
for each grayscale frame, calculating gradient images in the x direction and the y direction using the Sobel operator, computing for each pixel the sum of the squares of its x- and y-direction gradients, and averaging over all pixels to obtain the sharpness of the frame;
forming a sharpness vector for the plurality of frames from the per-frame sharpness values;
and obtaining the feature value of the image sharpness from the sharpness vector.
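The sharpness definition in claim 7 (mean over pixels of the sum of squared Sobel gradients) is concrete enough to transcribe directly; the sketch below uses a hand-rolled 3x3 filter in place of an image-processing library so that it stays self-contained.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def _filter2d(img, kernel):
    """'Valid' 2-D correlation with a 3x3 kernel, NumPy only."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:i + h - 2, j:j + w - 2]
    return out

def sharpness(gray):
    """Claimed metric: mean of gx**2 + gy**2 over pixels, where gx, gy
    are the Sobel gradients of the grayscale frame."""
    gray = np.asarray(gray, float)
    gx = _filter2d(gray, SOBEL_X)
    gy = _filter2d(gray, SOBEL_Y)
    return float((gx ** 2 + gy ** 2).mean())
```

A featureless (blurred) frame scores near zero, while a frame with strong edges scores high, which is why a low sharpness vector lowers the score confidence.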
8. The method of claim 3, wherein the process of extracting the feature value of the image highlight comprises:
converting each of the plurality of frames to grayscale;
for each grayscale frame, calculating the median intensity of the frame, and marking as highlight pixels those pixels whose intensity exceeds a highlight intensity threshold, wherein the highlight intensity threshold is the product of the median intensity of the frame and a preset coefficient greater than 1;
calculating, for each frame, the proportion of highlight pixels in the frame to obtain a highlight proportion vector for the plurality of frames;
and obtaining the feature value of the image highlight from the highlight proportion vector.
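The highlight definition in claim 8 transcribes directly; the value 1.5 used for the preset coefficient below is a placeholder, the claim only requires it to exceed 1, and taking the mean proportion as the scalar feature is likewise an assumption.

```python
import numpy as np

def highlight_ratio(gray, coef=1.5):
    """Fraction of pixels whose intensity exceeds coef * median,
    per the claimed highlight-pixel definition (coef > 1)."""
    g = np.asarray(gray, float)
    threshold = coef * np.median(g)
    return float((g > threshold).mean())

def highlight_feature(frames, coef=1.5):
    """Illustrative scalar feature: mean highlight proportion over the
    highlight proportion vector of all frames."""
    return float(np.mean([highlight_ratio(f, coef) for f in frames]))
```

Strong glare (e.g. a screen replay or laminated copy filmed under a lamp) inflates the highlight proportion, which is why this feature feeds the confidence model.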
9. The method of any one of claims 3 to 8, wherein the two feature value groups comprise a first feature value group and a second feature value group, and forming the two feature value groups based on the feature values of the preset features comprises:
forming the first feature value group from at least one of the feature value of the color-changing ink, the feature value of the dynamic printing, and the feature value of the feature block, combined with the feature value of the image highlight, wherein the first feature value group is used to calculate the video identification score;
and forming the second feature value group from at least one of the feature value of the image sharpness and the feature value of the image highlight, combined with the feature value of the feature block, wherein the second feature value group is used to calculate the score confidence coefficient.
10. The method according to claim 1, wherein the feature weight parameters of the two regression models are obtained by training in advance using a machine learning method.
11. The method of claim 1 or 10, wherein the regression function of each of the two regression models is a linear regression, a logistic regression, a tree model, or a neural network.
12. The method of claim 10, wherein the training comprises:
obtaining a plurality of labeled sample videos, the sample videos comprising videos of genuine anti-counterfeiting products and videos of counterfeit anti-counterfeiting products;
for each of the plurality of sample videos, extracting feature values of sample features from the sample video, and forming two sample feature value groups based on the feature values of the sample features;
and inputting the two sample feature value groups correspondingly into the two regression models for training, so as to obtain the respective feature weight parameters of the two regression models.
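A minimal stand-in for the training step of claim 12, using plain-NumPy logistic regression, which is one of the regression functions claim 11 permits; an implementation would call `train_logistic` twice, once on each sample feature value group, to obtain the two models' feature weight parameters. The learning rate and step count are arbitrary.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Gradient-descent logistic regression: learn feature weight
    parameters (w, b) from labelled sample feature value groups,
    where y is 1 for genuine samples and 0 for counterfeits."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
        grad = p - y                            # log-loss gradient w.r.t. logit
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Score in (0, 1): the identification score or, for the second
    model, the score confidence."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(X, float) @ w + b)))
```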
13. The method of claim 1, further comprising:
determining that the anti-counterfeiting product is a counterfeit if the video identification score and the score confidence coefficient satisfy a second preset threshold condition;
sending the video to a preset terminal for transfer to a manual review stage if the video identification score and the score confidence coefficient satisfy a third preset threshold condition;
and if the video identification score and the score confidence coefficient satisfy a fourth preset threshold condition, sending the video to the preset terminal with a preset probability for transfer to the manual review stage, and otherwise sending prompt information to the terminal to prompt it to re-capture the video of the anti-counterfeiting product.
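The four threshold conditions of claims 1 and 13 can be arranged as a single routing function; every numeric threshold below is a placeholder, since the claims leave the preset conditions unspecified.

```python
import random

def route(score, confidence, sample_prob=0.1, rng=random.random):
    """Route a video by (identification score, score confidence).
    Thresholds 0.8 / 0.9 / 0.2 and the sampling probability are
    illustrative values only."""
    if confidence >= 0.8:             # the score is deemed reliable
        if score >= 0.9:
            return "genuine"          # first preset threshold condition
        if score <= 0.2:
            return "counterfeit"      # second preset threshold condition
        return "manual_review"        # third: reliable but ambiguous score
    # Fourth condition: unreliable score. With a preset probability the
    # video still goes to manual review; otherwise re-capture is prompted.
    return "manual_review" if rng() < sample_prob else "recapture"
```

Sampling a fraction of low-confidence videos into manual review supplies the labelled results that claim 14 uses to keep optimizing the two regression models.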
14. The method of claim 13, further comprising:
obtaining the manually reviewed identification result returned by the preset terminal, and optimizing the feature weight parameters of the two regression models based on the identification result.
15. An anti-counterfeiting identification device, the device comprising:
a receiving module configured to receive a video of an anti-counterfeiting product uploaded by a terminal;
a frame extraction module configured to extract, from the video, a plurality of frames each containing the anti-counterfeiting product;
a feature extraction module configured to extract feature values of preset features from the plurality of frames;
a grouping module configured to form two feature value groups based on the feature values of the preset features;
a prediction module configured to input the two feature value groups correspondingly into two pre-trained regression models to obtain a video identification score and a score confidence coefficient, wherein the video identification score represents the probability that the anti-counterfeiting product is genuine, and the score confidence coefficient represents the reliability of the video identification score;
and an identification module configured to determine whether the video identification score and the score confidence coefficient satisfy a first preset threshold condition and, if so, to determine that the anti-counterfeiting product is genuine.
16. A computer device, comprising:
one or more processors;
storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 14.
17. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1 to 14.
CN201910521772.2A 2019-06-17 2019-06-17 Anti-counterfeiting identification method and device, computer equipment and storage medium Active CN110415424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910521772.2A CN110415424B (en) 2019-06-17 2019-06-17 Anti-counterfeiting identification method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN110415424A CN110415424A (en) 2019-11-05
CN110415424B true CN110415424B (en) 2022-02-11

Family

ID=68359154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910521772.2A Active CN110415424B (en) 2019-06-17 2019-06-17 Anti-counterfeiting identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110415424B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381553A (en) * 2020-11-20 2021-02-19 王永攀 Product anti-counterfeiting method

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750772A (en) * 2012-06-05 2012-10-24 广东智华计算机科技有限公司 Paper money tracking system and method based on machine vision
US9137020B1 (en) * 2008-04-23 2015-09-15 Copilot Ventures Fund Iii Llc Authentication method and system
CN106504406A (en) * 2016-11-01 2017-03-15 深圳怡化电脑股份有限公司 A kind of method and device of identification bank note
WO2017094761A1 (en) * 2015-11-30 2017-06-08 凸版印刷株式会社 Identification method and identification medium
CN107798308A (en) * 2017-11-09 2018-03-13 石数字技术成都有限公司 A kind of face identification method based on short-sighted frequency coaching method
CN108292457A (en) * 2015-11-26 2018-07-17 凸版印刷株式会社 Identification device, recognition methods, recognizer and the computer-readable medium comprising recognizer
CN108399677A (en) * 2017-02-08 2018-08-14 深圳怡化电脑股份有限公司 A kind of bank note version recognition methods and device
CN108510640A (en) * 2018-03-02 2018-09-07 深圳怡化电脑股份有限公司 Banknote detection method, device, cash inspecting machine based on dynamic safety line and storage medium
CN108665603A (en) * 2018-04-11 2018-10-16 深圳怡化电脑股份有限公司 Identify the method, apparatus and electronic equipment of bank note currency type
CN109726710A (en) * 2018-12-27 2019-05-07 平安科技(深圳)有限公司 Invoice information acquisition method, electronic device and readable storage medium storing program for executing
CN109859373A (en) * 2018-12-15 2019-06-07 深圳壹账通智能科技有限公司 Bank note face amount calculation method, device and relevant device based on image recognition

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7113615B2 (en) * 1993-11-18 2006-09-26 Digimarc Corporation Watermark embedder and reader
CN102682512A (en) * 2011-03-10 2012-09-19 北京新岸线数字图像技术有限公司 Counterfeit detecting device for paper money
CN103208148B (en) * 2013-02-06 2014-12-10 深圳宝嘉电子设备有限公司 Currency verification system and method thereof
CN105184286A (en) * 2015-10-20 2015-12-23 深圳市华尊科技股份有限公司 Vehicle detection method and detection device
CN105447826B (en) * 2015-11-06 2018-09-07 东方通信股份有限公司 A kind of processing method of banknote image acquisition
CN105404861B (en) * 2015-11-13 2018-11-02 中国科学院重庆绿色智能技术研究院 Training, detection method and the system of face key feature points detection model
JP6858525B2 (en) * 2016-10-07 2021-04-14 グローリー株式会社 Money classification device and money classification method
CN108460775A (en) * 2017-02-17 2018-08-28 深圳怡化电脑股份有限公司 A kind of forge or true or paper money recognition methods and device
CN108197532B (en) * 2017-12-18 2019-08-16 深圳励飞科技有限公司 The method, apparatus and computer installation of recognition of face
CN109800747A (en) * 2018-12-14 2019-05-24 平安科技(深圳)有限公司 Medical invoice recognition methods, user equipment, storage medium and device
CN109785499B (en) * 2018-12-26 2021-08-31 佛山科学技术学院 Multifunctional paper money inspection system and method
CN109859245B (en) * 2019-01-22 2020-12-11 深圳大学 Multi-target tracking method and device for video target and storage medium
CN109871804A (en) * 2019-02-19 2019-06-11 上海宝尊电子商务有限公司 A kind of method and system of shop stream of people discriminance analysis


Also Published As

Publication number Publication date
CN110415424A (en) 2019-11-05

Similar Documents

Publication Publication Date Title
US10885531B2 (en) Artificial intelligence counterfeit detection
US10635946B2 (en) Eyeglass positioning method, apparatus and storage medium
US11087125B2 (en) Document authenticity determination
KR102406432B1 (en) Identity authentication methods and devices, electronic devices and storage media
CA3154393A1 (en) System and methods for authentication of documents
WO2016131083A1 (en) Identity verification. method and system for online users
Abburu et al. Currency recognition system using image processing
CN110427972B (en) Certificate video feature extraction method and device, computer equipment and storage medium
EP4109332A1 (en) Certificate authenticity identification method and apparatus, computer-readable medium, and electronic device
US10043071B1 (en) Automated document classification
WO2021179157A1 (en) Method and device for verifying product authenticity
Uddin et al. Image-based approach for the detection of counterfeit banknotes of Bangladesh
CN108230536A (en) One kind is to light variable security index identification method and device
Berenguel et al. e-Counterfeit: a mobile-server platform for document counterfeit detection
Berenguel et al. Evaluation of texture descriptors for validation of counterfeit documents
CN110415424B (en) Anti-counterfeiting identification method and device, computer equipment and storage medium
Rajan et al. An extensive study on currency recognition system using image processing
WO2021102770A1 (en) Method and device for verifying authenticity of product
CN113077355B (en) Insurance claim settlement method and device, electronic equipment and storage medium
Desai et al. Implementation of multiple kernel support vector machine for automatic recognition and classification of counterfeit notes
US20240112484A1 (en) Copy prevention of digital sample images
Saxena Identification of Fake Currency: A Case Study of Indian Scenario.
Vishnu et al. Principal component analysis on Indian currency recognition
Rupa et al. Integrity checking of physical currency with pattern matching: Coping with few data and the training sample order
Al-Hila et al. Fuzzy Logic Weighted Averaging Algorithm for Malaysian Banknotes Reader Featuring Counterfeit Detection. https://doi.org/10.33093/jetap.2023.5.2.3

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240306

Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702

Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.

Country or region before: China

TR01 Transfer of patent right

Effective date of registration: 20240415

Address after: Room 1179, W Zone, 11th Floor, Building 1, No. 158 Shuanglian Road, Qingpu District, Shanghai, 201702

Patentee after: Shanghai Zhongan Information Technology Service Co.,Ltd.

Country or region after: China

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Patentee before: ZHONGAN INFORMATION TECHNOLOGY SERVICE Co.,Ltd.

Country or region before: China