WO2021102770A1 - Method and Device for Verifying the Authenticity of a Product - Google Patents

Method and Device for Verifying the Authenticity of a Product

Info

Publication number
WO2021102770A1
WO2021102770A1 · PCT/CN2019/121446
Authority
WO
WIPO (PCT)
Prior art keywords
product
micro
product identification
image
feature
Prior art date
Application number
PCT/CN2019/121446
Other languages
English (en)
French (fr)
Inventor
高煜
谢晖
杨莞琳
Original Assignee
罗伯特·博世有限公司 (Robert Bosch GmbH)
高煜
谢晖
杨莞琳
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 罗伯特·博世有限公司, 高煜, 谢晖, 杨莞琳
Priority to PCT/CN2019/121446 priority Critical patent/WO2021102770A1/zh
Priority to CN201980102577.4A priority patent/CN114746864A/zh
Priority to DE112019007487.3T priority patent/DE112019007487T5/de
Publication of WO2021102770A1 publication Critical patent/WO2021102770A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/95Pattern authentication; Markers therefor; Forgery detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/018Certifying business or products
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/80Recognising image objects characterised by unique random patterns

Definitions

  • the invention relates to a method and equipment for verifying the authenticity of a product.
  • the existing anti-counterfeiting technologies for products include digital anti-counterfeiting technology and texture anti-counterfeiting technology.
  • Digital anti-counterfeiting technology uses barcodes or two-dimensional codes to give products a unique identification (ID) for anti-counterfeiting verification and traceability functions. This digital anti-counterfeiting technology is easily copied and offers poor security.
  • Texture anti-counterfeiting technology uses randomly generated natural textures as anti-counterfeiting features. Such textures are physically non-replicable and non-reproducible; however, existing texture anti-counterfeiting technology lacks the ability to identify anti-counterfeiting features automatically: automatic identification requires the anti-counterfeiting features to be visually recognizable, or relies on adding fiber materials during the production process to form the anti-counterfeiting features, which raises the cost of anti-counterfeit products and complicates production.
  • the embodiments of the present invention provide a method and device for verifying the authenticity of a product, which can improve the accuracy of verifying the authenticity of the product.
  • the embodiment of the present invention provides a method for verifying the authenticity of a product.
  • the product identification of the product has randomly distributed microdots.
  • the method includes: extracting the micro-dot features on the product identification from an image of the product identification of the product to be verified; extracting image features of at least a part of the product identification from the image using a machine learning algorithm; and verifying the authenticity of the product identification based on the extracted micro-dot features and image features.
  • the embodiment of the present invention provides a device for verifying the authenticity of a product, the product identification of the product having randomly distributed micro-dots, the device including: a micro-dot feature extraction module for extracting the micro-dot features on the product identification from an image of the product identification of the product to be verified; an image feature extraction module for extracting image features of at least a part of the product identification from the image using a machine learning algorithm; and a verification module for verifying the authenticity of the product identification of the verified product based on the extracted micro-dot features and image features.
  • An embodiment of the present invention provides a device for verifying the authenticity of a product, with randomly distributed micro-dots on the product identification of the product, the device including: a memory for storing instructions; and a processor coupled to the memory, wherein when the instructions are executed by the processor, the processor executes the method according to the foregoing embodiment.
  • An embodiment of the present invention also provides a computer-readable storage medium on which is stored executable instructions, which when executed by a computer cause the computer to execute the method of the above-mentioned embodiment.
  • Fig. 1 shows a flowchart of a method for verifying the authenticity of a product according to a first embodiment of the present invention
  • Figure 2 is a schematic diagram of embedding micro-dot features into a product two-dimensional code in an embodiment
  • Figures 3(a) and 3(b) show the image of the probability density function when the uniform distribution function is used as the random distribution function of the microdots, and the microdot distribution map obtained by sampling from the random distribution;
  • Fig. 4 shows a flowchart of a method for verifying the authenticity of a product according to a second embodiment of the present invention
  • Figs. 5A to 5C show three example convolutional neural network structures used for extracting image features of the product identification in an embodiment of the present invention.
  • Fig. 6 shows a structural block diagram of an apparatus for verifying the authenticity of a product according to an embodiment of the present invention.
  • Fig. 1 shows a flowchart of a method 100 for verifying the authenticity of a product according to a first embodiment of the present invention.
  • the method 100 shown in FIG. 1 can be implemented by any computing device having computing capabilities.
  • the computing device can be, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a server, or a smart phone.
  • the verification method 100 includes: extracting the micro-dot features on the product identification from the image of the product identification of the product to be verified (step 101); extracting image features of at least a part of the product identification from the image using a machine learning algorithm (step 102); and verifying the authenticity of the product identification of the verified product based on the extracted micro-dot features and image features (step 103).
  • the verification step 103 includes: using a classifier trained by a machine learning algorithm to verify the authenticity of the product identification of the product to be verified.
  • the machine learning algorithm used to extract image features is a convolutional neural network.
  • the machine learning algorithm used to train the classifier is a machine learning algorithm that can classify feature vectors, such as a support vector machine (SVM) or a boost tree (Boost Tree).
  • A convolutional neural network is a kind of feedforward neural network that includes convolution computations and has a deep structure. It has representation-learning ability and can classify input information according to its hierarchical structure. Convolutional neural networks imitate the biological visual perception mechanism and can carry out both supervised and unsupervised learning.
  • the convolution kernel parameter sharing in the hidden layers and the sparsity of inter-layer connections enable the convolutional neural network to learn grid-like features (such as pixels and audio) with a smaller amount of computation.
  • the verification method may further include: training a convolutional neural network and a classifier by using multiple authentic identification images as positive samples and using multiple fake identification images as negative samples.
  • the step 102 of extracting image features includes: extracting image features from the image using a trained convolutional neural network to output a feature vector describing the image features.
  • the classifier includes a first classifier, and the extracted image features include at least printed features related to the printing of at least a part of the product identification; the first classifier distinguishes the authenticity of the product identification of the verified product based on the printed features.
  • the first classifier can be trained using the positive samples and negative samples of the product identification.
  • the verification step 103 may include: based on the extracted printed features, using the trained first classifier to output the probability that the product identification of the verified product is true, and/or the probability that it is false.
  • the printing feature of the genuine product identification is a feature associated with at least one of the paper, ink, and printing equipment used in the printing process of the genuine product identification.
  • Product identification printing can be the printing of digital files onto physical paper or other carrier items. When the same digital file is printed, the complicated combination of different printer settings, different types of printing machines, different inks, toners, or colorants, and different paper characteristics may make the details of the same digital image differ after printing. These details reflect the printing characteristics. For example, depending on the paper, ink, or printing equipment used, there will be subtle differences in the printed lines, such as fine jagged edges with different shapes or arrangements.
  • the two-dimensional code in the product identification contains multiple black blocks and white blocks, and all the black and white boundaries may be different under different printing conditions.
  • the printed color or gray scale will differ due to the influence of paper, ink, or printing equipment. This printing difference is distributed throughout the printing area of the two-dimensional code, and by exploiting this difference, the printed features of the product identification can be extracted.
  • the printed features of the product identification of the counterfeit product produced by the reproduction technology are different from the product identification of the genuine product. With enough positive samples and negative samples, the convolutional neural network and the first classifier can be trained.
  • during training, the convolutional neural network can learn the printed features in the positive samples and the printed features in the negative samples that differ from those of the positive samples; the trained convolutional neural network will then have the ability to extract the printed features from the verified product identification.
  • the trained first classifier can then compare the printed features contained in the image features extracted by the convolutional neural network with the printed features contained in the image features of genuine products, in order to output the probability that the product identification of the verified product is authentic.
  • the classifier may further include a second classifier, and the second classifier determines the authenticity of the product identification based on the authenticity probability and the micro-point characteristics of the verified product identification output by the first classifier.
  • the positive samples and negative samples of the product identification can be used to train the second classifier.
  • the verification step 103 may also include: comparing the extracted micro-dot features with the micro-dot features of the product identification stored in advance during or after the production of the product; based on the comparison result and the authenticity probability of the product identification output by the first classifier, Compose a description vector about the product identification; and based on the description vector, use the second classifier to determine the authenticity of the product identification of the product being verified.
  • the description vector includes data related to at least one of the following items: the matching rate between the extracted micro-dot features and the pre-saved micro-dot features, the statistical parameters of the pixel distances between the matched micro-dots in the image and the pre-saved micro-dots, and a penalty for unmatched micro-dots.
  • the step 101 of extracting micro-dot features may include: using image processing technology to extract at least one of the shape feature, position feature, gray-scale feature, and color feature of the micro-dot from the image.
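As an illustration of the extraction in step 101, the sketch below (an assumption for illustration, not code from the patent) labels connected components in a binarized identification image and reports each micro-dot's centroid as its position feature and its pixel count as a simple shape feature:

```python
# Illustrative sketch (not from the patent): extract micro-dot position and
# size features from a binarized identification image via connected-component
# labeling with 4-connectivity flood fill.

def extract_microdot_features(binary):
    """binary: 2D list of 0/1 where 1 marks a dark micro-dot pixel.
    Returns one (centroid_x, centroid_y, area) tuple per connected dot."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    dots = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and not seen[y][x]:
                stack, pixels = [(x, y)], []
                seen[y][x] = True
                while stack:
                    px, py = stack.pop()
                    pixels.append((px, py))
                    for nx, ny in ((px + 1, py), (px - 1, py), (px, py + 1), (px, py - 1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((nx, ny))
                mx = sum(p[0] for p in pixels) / len(pixels)  # position feature (x)
                my = sum(p[1] for p in pixels) / len(pixels)  # position feature (y)
                dots.append((mx, my, len(pixels)))            # area as a simple shape feature
    return dots

# Two separate dots: a 2x2 block and a single pixel.
img = [
    [0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1],
]
features = extract_microdot_features(img)
```

A real pipeline would first threshold the photographed identification into such a binary mask; gray-scale and color features could then be accumulated over the same labeled regions.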
  • the product identification may include at least one of a barcode and a two-dimensional graphic code.
  • Fig. 2 is a schematic diagram of embedding the micro-dot features into a product two-dimensional code in an embodiment; the micro-dot features 202 are not shown in detail in the figure because of their small size.
  • an algorithm is used to generate a specific high-dimensional random distribution map 201 of micro-points as the distribution characteristics of at least one of the location distribution, gray-scale distribution, color distribution and micro-morphology of all micro-point features.
  • Products in the same category or the same batch can follow a certain distribution characteristic, and each product has other different micro-point characteristics to show distinction. For example, different batches of products can use different random distribution maps, and different products of the same batch can use different microdots.
  • the algorithm samples the random distribution map of the micro-dots and generates a uniquely identifiable micro-dot feature 202 for each product (or product identification or label); the generated micro-dot features are then embedded into the product's digital two-dimensional identification 203 (such as a quick-response matrix code, i.e., a two-dimensional code) according to a predetermined avoidance rule, and the two-dimensional code embedded with micro-dot features is printed on the surface of the product, the surface of the product packaging, or the surface of the product label, forming a digital product identification (ID) with micro-dots.
  • the avoidance rule can restrict at least one of the specific location distribution, gray distribution and color distribution of the micro-dots.
  • the location-distribution avoidance rule can ensure that only black or dark micro-dots are generated in the white modules of the QR code, and the gray-distribution or color-distribution avoidance rule can ensure that the grayscale or color of the micro-dots meets certain grayscale and saturation limits and will not interfere with the white modules of the QR code.
  • These evasion rules work together to ensure that the reading of the two-dimensional code itself will not be affected by the embedded micro-dot features, and make the two-dimensional code still meet the corresponding national standards and/or international standards after the micro-dot features are added.
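A minimal sketch of how a location-distribution avoidance rule might be enforced, assuming the QR code is represented as a 0/1 module matrix (the function and representation are hypothetical, not taken from the patent):

```python
# Hypothetical enforcement of a location-distribution avoidance rule:
# candidate dot positions are sampled uniformly, but only positions inside
# white modules (0) of the QR module matrix are kept, so dark micro-dots
# never land on black modules and the code stays machine-readable.
import random

def place_microdots(qr_modules, n_dots, seed=0):
    rng = random.Random(seed)
    h, w = len(qr_modules), len(qr_modules[0])
    dots = []
    while len(dots) < n_dots:
        r, c = rng.randrange(h), rng.randrange(w)
        if qr_modules[r][c] == 0:      # avoidance rule: white modules only
            dots.append((r, c))
    return dots

qr = [
    [1, 0, 1],
    [0, 0, 1],
    [1, 0, 0],
]
dots = place_microdots(qr, 4)
all_white = all(qr[r][c] == 0 for r, c in dots)
```

The complementary rule for white dots inside black modules would simply invert the module test; gray and color constraints would add further filters on each candidate dot.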
  • the white micro-dot feature 202 can also be embedded in the black module of the two-dimensional code 203.
  • the avoidance rule restricts the micro-dots to be generated in the black modules of the two-dimensional code, so that the two-dimensional code still meets the corresponding national and/or international standards after the micro-dot features are added.
  • the white micro-dots maintain the highest contrast in the black module of the two-dimensional code, and the white micro-dots are generated by short pauses in the printing inkjet during the printing process.
  • the composition of micro-point features includes the most basic two-dimensional coordinates (X, Y) as location features, and can also include other optional features, such as color, grayscale, shape, and so on.
  • the non-reproducibility and anti-counterfeiting performance of the micro-dots are first realized by the random distribution of the two-dimensional positions of the micro-dots.
  • the color, gray, or shape characteristics of the micro-dots can be used to further improve the anti-counterfeiting performance of the product.
  • Randomly distributed micro-dot features can also form randomly distributed micro-dot texture features.
  • the micro-point feature information on the product identification needs to be stored in the database for subsequent product authenticity verification.
  • the saved feature information of the micro-points includes, for example, randomly distributed location features and other features such as its color, gray scale, or shape.
  • Figures 3(a) and 3(b) show the image of the probability density function when a uniform distribution function is used as the random distribution function of the micro-dots, and the micro-dot distribution map obtained by sampling from that random distribution.
  • the probability density function of the uniform distribution function is f(x, y) = 1/((b − a)(d − c)) for a ≤ x ≤ b and c ≤ y ≤ d, and 0 otherwise, where [a, b] × [c, d] is the region in which the micro-dots are generated.
  • the Z coordinate is the probability density
  • the horizontal coordinate X and the vertical coordinate Y indicate the position (x, y) of the micro point.
  • the micro-point distribution map of Fig. 3(b) is obtained by sampling from the random distribution map of Fig. 3(a) when the micro-point coordinates (x, y) are generated.
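The sampling described above can be sketched as follows, assuming (for illustration) a W × H identification area; with a uniform distribution the density is the constant f(x, y) = 1/(W·H), and each micro-dot coordinate is drawn independently:

```python
# Sampling micro-dot coordinates from a 2D uniform distribution, as in Fig. 3:
# the density is constant over the (assumed) W x H identification area, so
# f(x, y) = 1 / (W * H) and every position is equally likely.
import random

W, H = 100.0, 100.0            # assumed area size, for illustration
density = 1.0 / (W * H)        # the constant value of the PDF in Figure 3(a)

rng = random.Random(42)
points = [(rng.uniform(0, W), rng.uniform(0, H)) for _ in range(50)]
in_bounds = all(0 <= x <= W and 0 <= y <= H for x, y in points)
```

Seeding the generator differently per product yields a distinct, uniquely identifiable dot layout for each label while all layouts follow the same distribution.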
  • FIG. 4 shows a flowchart of a method 400 for verifying the authenticity of a product according to a second embodiment of the present invention.
  • the image or picture of the product identification of the product to be verified is first obtained (step 401).
  • the user can take a photo of the product identification part containing the barcode or QR code, and transmit the image of the product identification obtained by the photo to the verifier or verification device, so that the obtained image of the product identification can be verified.
  • the processing of an image containing a two-dimensional code includes two parts, namely, a convolutional neural network algorithm processing part (including steps 402-405) and a micro-point processing part (including steps 406-409).
  • the image needs to be preprocessed (steps 402 and 406), such as adjusting the brightness, cutting the effective part, and enhancing the contrast of the part of the image that contains the effective features (such as the two-dimensional code).
  • other preprocessing methods from image processing technology, such as sharpening and image normalization, may also be used.
  • the image features are extracted using the convolutional neural network algorithm (step 403), in which the image processed by the preprocessing step 402 is used as input and the convolutional neural network serves as an overall algorithm module that finally outputs a feature vector; that is, the image containing the QR code is quantized into a feature vector.
  • the feature vector can include k floating-point numbers (for example, a sequence of 1x512 floating-point numbers) and is also called a k-dimensional feature vector; it is used to describe the printing characteristics, that is, the unique and subtle features reflected in the image of the printed product identification that are caused by the specific physical paper, ink, printing equipment, etc. used.
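A toy illustration of quantizing an image into a k-dimensional feature vector (a deliberately tiny stand-in for the patent's deep network, which would output e.g. a 1x512 vector): one convolution layer with k filters, ReLU, then global average pooling, giving one output dimension per filter.

```python
# Toy stand-in for the patent's CNN (illustrative only): one convolution layer
# with k filters, ReLU, and global average pooling quantize the image into a
# k-dimensional feature vector -- one dimension per filter.

def conv2d_relu(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            s = sum(img[y + i][x + j] * kernel[i][j] for i in range(kh) for j in range(kw))
            row.append(max(0.0, s))                    # ReLU activation
        out.append(row)
    return out

def feature_vector(img, kernels):
    vec = []
    for k in kernels:
        fmap = conv2d_relu(img, k)
        n = len(fmap) * len(fmap[0])
        vec.append(sum(sum(r) for r in fmap) / n)      # global average pooling
    return vec                                         # k-dimensional descriptor

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [1, 1, 0, 0],
       [1, 1, 0, 0]]
kernels = [
    [[1, -1], [1, -1]],    # responds to vertical edges
    [[1, 1], [-1, -1]],    # responds to horizontal edges
]
vec = feature_vector(img, kernels)
```

In the trained network the filters are learned rather than hand-set, and many stacked layers produce a far richer descriptor of the printing characteristics.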
  • the first classifier can be trained by using positive samples and negative samples of multiple product identifications and the printing features extracted from them.
  • the pre-trained first classifier can analyze the product identification of the product to be verified against the image features extracted in step 403 (step 404).
  • a convolutional neural network algorithm can be used to compare and analyze the extracted image features against the printed features of the positive samples and the printed features of the negative samples, thereby outputting the probability that the product identification of the product to be verified is true or false (step 405).
  • the convolutional neural network can include layers such as Linear1, ReLU, Dropout, Linear2, and Linear3, with the Linear3 layer used as the final output; because only the authenticity classification is of concern here, the output is a 2-dimensional vector, where p1 represents the probability of being true and p2 represents the probability of being false.
  • y is the true value of the target [y1, y2], the true label is [1.0, 0.0], and the pseudo label is [0.0, 1.0].
  • the loss can be calculated according to the known true value of the sample.
  • the judgment loss is |y − y′|, that is, the absolute value of the difference between the true value and the prediction result output by the classifier at this time; training continues until the judgment loss is less than the predetermined threshold.
  • the parameters of the neural network and the first classifier can be updated according to the loss value.
  • the updated convolutional neural network and the updated first classifier continue to extract image features and determine its authenticity probability.
  • when the loss value continues to decrease and stabilizes at a relatively low value (i.e., at the threshold), the first classifier can be considered trained.
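The training loop sketched above might look as follows, simplified (as an assumption) to a single logistic output p1 = probability of "true" instead of the 2-dimensional [p1, p2]; parameters are updated each pass, and training stops once the mean judgment loss |y − y′| falls below the threshold:

```python
# Simplified sketch of the training loop (assumption: a single logistic output
# p1 = probability "true" instead of the patent's 2-dimensional [p1, p2]).
# Parameters are updated each pass, and training stops once the mean judgment
# loss |y - y'| drops below the predetermined threshold.
import math

def train_first_classifier(samples, lr=0.5, threshold=0.1, max_epochs=2000):
    """samples: list of (feature_vector, y) with y = 1.0 genuine, 0.0 fake."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(max_epochs):
        total_l1 = 0.0
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))             # predicted probability y'
            total_l1 += abs(y - p)                     # judgment loss |y - y'|
            g = p - y                                  # logistic-loss gradient factor
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
        if total_l1 / len(samples) < threshold:        # loss stabilized below threshold
            break
    return w, b

# Toy, linearly separable "printed feature" vectors.
data = [([1.0, 0.2], 1.0), ([0.9, 0.1], 1.0), ([0.1, 0.9], 0.0), ([0.2, 1.0], 0.0)]
w, b = train_first_classifier(data)
p_genuine = 1.0 / (1.0 + math.exp(-(w[0] * 1.0 + w[1] * 0.2 + b)))
```

In the patent's setting the convolutional feature extractor and the classifier are updated jointly from the same loss signal; this sketch updates only a linear classifier for brevity.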
  • the positive samples can be two-dimensional code labels of multiple genuine products, and the negative samples can be copies of these positive samples obtained in various ways.
  • the negative sample has the same QR code label, but its printed features are different from the genuine QR code label.
  • after the convolutional neural network is initialized and before the first classifier is trained, the printed features of the product identification cannot yet be extracted. When there are differences in printed features between positive and negative samples while other image details are the same, these positive and negative samples can be used to train the convolutional neural network to identify the printed features of positive samples and negative samples, such as the type of printed feature (line, color or gray scale, etc.), its location, and the degree of difference. After continuous training, the convolutional neural network gains the ability to quickly and accurately extract the printed features of the verified product identification.
  • the micro-dot extraction algorithm is used to extract the micro-dot features in the image (step 407), in which image processing technology can be used to read the randomly distributed micro-dot features in the image of the product identification being verified, including statistical data based on at least one of the location, size, color, or gray level of the micro-dots. For example, by counting the size of each micro-dot area (such as the number of pixels contained in each micro-dot), or by computing the average RGB three-channel value of each area, the gray information of the micro-dot areas in the image can be obtained.
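The per-dot statistics mentioned above can be sketched as follows (illustrative only): the size of a dot region is its pixel count, and its gray information is estimated from the average RGB channel values over the region:

```python
# Illustrative per-dot statistics: a dot region's size is its pixel count, and
# its gray level is estimated from the mean RGB channel values of the region.

def dot_statistics(region_pixels):
    """region_pixels: list of (r, g, b) tuples belonging to one micro-dot."""
    area = len(region_pixels)                                        # size in pixels
    mean_rgb = tuple(sum(p[c] for p in region_pixels) / area for c in range(3))
    gray = sum(mean_rgb) / 3.0                                       # gray estimate
    return area, mean_rgb, gray

region = [(30, 32, 28), (28, 30, 32), (32, 28, 30), (30, 30, 30)]
area, mean_rgb, gray = dot_statistics(region)
```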
  • in step 409, the corresponding micro-dot features of the authentic product identification, saved in advance in the database, are retrieved, and the read micro-dot features are compared with the micro-dot features in the database (step 408); the comparison result of the micro-dot features is used as one of the bases for judging the authenticity of the product identification.
  • a description vector X is formed (step 410); for example, the authenticity probability of the product identification output by the first classifier is taken as one feature dimension, such as x1.
  • the result of the micro-dot feature comparison can include several statistical data obtained after matching and quantifying the micro-dot features, such as the matching rate between the micro-dots found in the target two-dimensional code and the corresponding micro-dots in the database as x2, the statistical parameters (such as mean and variance) of the pixel distances, in the image coordinate system, between the matched micro-dots and the micro-dots in the database as x3 and x4, and a penalty for unmatched micro-dots (mismatches) as x5.
  • the above information can be used to form a 5-dimensional description vector about the identity of the product being verified.
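A hypothetical assembly of the 5-dimensional description vector X (the matching tolerance, penalty scheme, and function signature are assumptions for illustration): x1 is the first classifier's authenticity probability, x2 the micro-dot match rate, x3 and x4 the mean and variance of the matched pixel distances, and x5 a penalty for unmatched micro-dots:

```python
# Hypothetical assembly of the 5-dimensional description vector X (tolerance
# and penalty values are assumptions): x1 = first-classifier probability,
# x2 = match rate, x3/x4 = mean/variance of matched pixel distances,
# x5 = penalty for unmatched micro-dots.
import math
import statistics

def description_vector(p_true, extracted, db_dots, tol=3.0, penalty=1.0):
    distances, unmatched = [], 0
    for ex, ey in extracted:
        d = min(math.hypot(ex - dx, ey - dy) for dx, dy in db_dots)
        if d <= tol:
            distances.append(d)      # matched within tolerance
        else:
            unmatched += 1           # mismatch
    x2 = len(distances) / len(extracted)
    x3 = statistics.mean(distances) if distances else tol
    x4 = statistics.pvariance(distances) if len(distances) > 1 else 0.0
    return [p_true, x2, x3, x4, penalty * unmatched]

db = [(10.0, 10.0), (50.0, 20.0), (30.0, 40.0)]           # saved at production time
extracted = [(10.5, 10.0), (49.0, 21.0), (80.0, 80.0)]    # last dot matches nothing
X = description_vector(0.9, extracted, db)
```

The resulting vector is exactly the kind of input the second classifier consumes; a production matcher would align coordinate systems first (e.g., via the QR finder patterns) before measuring pixel distances.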
  • all collected images of product identifications of positive samples and negative samples can be processed to obtain the corresponding authenticity probabilities and micro-dot statistical features, and thus the corresponding description vectors, forming a sample data set, of which one part (such as 80%) can be used as a training set for training the second classifier and the other part (such as 20%) as a test set. Based on this sample data set, a popular machine learning algorithm can be used to train and test the second classifier.
  • the types of classifiers that can be selected include support vector machines (SVM), boosted trees, decision trees, shallow neural networks, the k-nearest-neighbor algorithm, random forests, etc.
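A sketch of the 80%/20% split and second-classifier training described above, with a simple nearest-centroid classifier standing in (as an assumption) for the SVM, boosted-tree, or k-NN choices a production system would use:

```python
# Sketch of the 80/20 split and second-classifier training; a nearest-centroid
# classifier stands in (as an assumption) for the SVM / boosted-tree / k-NN
# options listed in the text.
import math
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    s = samples[:]
    random.Random(seed).shuffle(s)
    cut = int(len(s) * train_frac)
    return s[:cut], s[cut:]                  # training set, test set

def train_centroids(train):
    groups = {}
    for x, label in train:
        groups.setdefault(label, []).append(x)
    return {lab: [sum(col) / len(col) for col in zip(*vecs)]
            for lab, vecs in groups.items()}

def predict(centroids, x):
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

# Toy description vectors: genuine near (0.9, 0.9), fake near (0.1, 0.1).
samples = [([0.9, 0.8], "true"), ([0.85, 0.95], "true"), ([0.8, 0.9], "true"),
           ([0.95, 0.85], "true"), ([0.9, 0.9], "true"),
           ([0.1, 0.2], "false"), ([0.15, 0.05], "false"), ([0.2, 0.1], "false"),
           ([0.05, 0.15], "false"), ([0.1, 0.1], "false")]
train, test = split_dataset(samples)
centroids = train_centroids(train)
accuracy = sum(predict(centroids, x) == lab for x, lab in test) / len(test)
```

The held-out 20% estimates how well the second classifier generalizes before it is deployed for live authenticity judgments.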
  • the pre-trained second classifier can classify the verified product identification based on the descriptive features in the description vector obtained in step 410 (including the authenticity probability of the verified product identification output by the first classifier and the micro-dot statistical features) (step 411), thereby outputting the authenticity determination result for the product identification to be verified (step 412).
  • the accumulated product identification samples can be used to continuously train the first classifier and the second classifier, which can be used to give a more accurate authenticity judgment.
  • FIGS 5A to 5C show three examples of convolutional neural network structures used in the method for extracting image features of product identification in an embodiment of the present invention.
  • Figure 5A shows the VGG network
  • Figure 5B shows the ResNet network structure
  • Figure 5C shows the Inception network structure, respectively showing three ways of extracting image features using convolutional neural networks.
  • the present invention is not limited to these three network structures.
  • 501 denotes the input layer, in which the image of the preprocessed product identification is input.
  • 502 represents the convolutional layer.
  • the function of the convolutional layer is to extract features from the input image data. It contains multiple convolution kernels; each element of a convolution kernel corresponds to a weight coefficient and a bias, similar to a neuron of a feedforward neural network. Each neuron in the convolutional layer is connected to multiple neurons in a nearby region of the previous layer.
  • feature extraction is performed on each small region in the input image, and multiple filters are used to perform convolution respectively to obtain multiple feature maps.
  • 503 represents the pooling layer.
  • the output feature map will be passed to the pooling layer 503 for feature selection and information filtering.
  • the pooling layer 503 contains a preset pooling function, and its function is to replace the result of a single point in the feature map with the feature map statistics of its neighboring regions.
  • 504 represents the fully connected layer, which is equivalent to the hidden layer in the traditional feedforward neural network.
  • the fully connected layer is located in the last part of the hidden layer of the convolutional neural network and only transmits signals to other fully connected layers.
  • the function of the fully connected layer is to non-linearly combine the extracted features to obtain the output.
  • the fully connected layer itself does not have feature extraction capabilities, but tries to use the existing high-level features to complete the learning goal.
  • 505 denotes a residual network module, which includes a combination of multiple convolutional layers connected by skip connections, serving as a building unit of the ResNet network structure.
  • 506 represents the Inception module.
  • the Inception module is a hidden-layer construction obtained by stacking multiple convolutional layers and pooling layers. Specifically, an Inception module contains multiple different types of convolution and pooling operations in parallel, using same-padding so that these operations yield feature maps of the same size; the channels of these feature maps are then concatenated and passed through an activation function.
  • 510 represents the output layer, which outputs the image features extracted by the convolutional neural network.
  • the device for verifying the authenticity of a product may include: a memory for storing instructions; and a processor coupled to the memory, which, when the stored instructions are executed, can carry out the method according to the present invention.
  • a database may also be stored in the memory, and the database contains the micro-point features of the authentic product identifiers that are saved during or after the product is made.
  • the micro-dot feature may include at least one of the shape feature, position feature, gray-scale feature, and color feature of the micro-dot.
  • the memory of this embodiment may also store a sample library, which includes a plurality of authentic identification images as positive samples and a plurality of fake identification images as negative samples.
  • the processor is configured to use at least a part of the samples in the sample library to train a convolutional neural network, and a first classifier and a second classifier for verifying product identification.
  • Fig. 6 shows a structural block diagram of an apparatus 600 for verifying the authenticity of a product according to an embodiment of the present invention.
  • the device 600 includes: a micro-dot feature extraction module 601 for extracting the micro-dot features on the product identification from the image of the product identification of the verified product; an image feature extraction module 602 for extracting image features of at least a part of the product identification from the image using a machine learning algorithm; and a verification module 603 for verifying the authenticity of the product identification of the verified product based on the extracted micro-dot features and image features.
  • the apparatus 600 shown in FIG. 6 can be implemented by software, hardware, or a combination of software and hardware, and can be designed to include corresponding modules to implement the foregoing method embodiments of the present invention for verifying product authenticity.
  • combining a product identification (for example a two-dimensional code or barcode), micro-dot features, and image features greatly improves the accuracy of authenticity verification, and allows users who purchase products to photograph the product identification with a variety of phones or cameras, under a variety of lighting conditions, and still obtain an accurate verification.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Accounting & Taxation (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Finance (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method and device for verifying the authenticity of a product whose product identification carries randomly distributed micro-dots. The method includes: extracting micro-dot features on the product identification from an image of the product identification of the product under verification; extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features. The method and device provided by embodiments of the present invention can improve the accuracy of product authenticity verification.

Description

Method and device for verifying the authenticity of a product
Technical field
The present invention relates to a method and device for verifying the authenticity of a product.
Background art
Counterfeit and substandard products cause enormous losses to both producers and consumers, and therefore need to be controlled through safe and reliable anti-counterfeiting technologies. Existing product anti-counterfeiting technologies include digital anti-counterfeiting and texture anti-counterfeiting.
Digital anti-counterfeiting uses a barcode or two-dimensional code to give a product a unique identity (ID) for authenticity verification and traceability. Such digital identifications are easy to copy and offer poor security.
Texture anti-counterfeiting uses randomly generated natural textures as anti-counterfeiting features; such textures are physically uncopiable and non-reproducible. However, existing texture anti-counterfeiting either lacks the ability to authenticate the features automatically, since automatic authentication requires the features to be visually recognizable, or relies on adding fiber materials during production to form the features, which raises the cost of the anti-counterfeit product and complicates manufacturing.
A newer technique combines a barcode or two-dimensional code with printed micro-dot features to further improve the anti-counterfeiting performance of the product identification while simplifying production and lowering cost. However, verification first requires acquiring an image of the product identification of the product under verification, for example by having the user photograph it with a phone or digital camera. Differences in camera capability, shooting environment (e.g., lighting), and shooting technique (e.g., angle, distance, camera stability) affect image quality to varying degrees and can bias the verification result, so that a genuine product identification may be verified as counterfeit, or a counterfeit one as genuine.
Summary of the invention
In view of at least one of the above problems of the prior art, embodiments of the present invention provide a method and device for verifying the authenticity of a product that can improve the accuracy of the verification.
An embodiment of the present invention provides a method for verifying the authenticity of a product whose product identification carries randomly distributed micro-dots, the method including: extracting micro-dot features on the product identification from an image of the product identification of the product under verification; extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features.
An embodiment of the present invention provides an apparatus for verifying the authenticity of a product whose product identification carries randomly distributed micro-dots, the apparatus including: a micro-dot feature extraction module for extracting micro-dot features on the product identification from an image of the product identification of the product under verification; an image feature extraction module for extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and a verification module for verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features.
An embodiment of the present invention provides a device for verifying the authenticity of a product whose product identification carries randomly distributed micro-dots, the device including: a memory for storing instructions; and a processor coupled to the memory, wherein the instructions, when executed by the processor, cause the processor to perform the method according to the above embodiments.
An embodiment of the present invention further provides a computer-readable storage medium storing executable instructions that, when executed by a computer, cause the computer to perform the method of the above embodiments.
According to embodiments of the present invention, verifying the authenticity of a product identification uses not only the micro-dot features on the identification but also image features extracted from the image of the identification by a machine learning algorithm, which can improve the accuracy of the verification.
Brief description of the drawings
Other features, characteristics, benefits, and advantages of the present invention will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a flowchart of a method for verifying product authenticity according to a first embodiment of the present invention;
Fig. 2 is a schematic diagram of embedding micro-dot features into a product two-dimensional code in an embodiment;
Figs. 3(a) and 3(b) show the probability density function when a uniform distribution is used as the random distribution function of the micro-dots, and a micro-dot distribution map sampled from that distribution;
Fig. 4 shows a flowchart of a method for verifying product authenticity according to a second embodiment of the present invention;
Figs. 5A to 5C show three example convolutional neural network structures used by the method for extracting image features of the product identification in embodiments of the present invention; and
Fig. 6 shows a structural block diagram of an apparatus for verifying the authenticity of a product according to an embodiment of the present invention.
Detailed description
Embodiments of the present invention are further described below with reference to the drawings.
Fig. 1 shows a flowchart of a method 100 for verifying product authenticity according to the first embodiment of the present invention. The method 100 shown in Fig. 1 can be implemented by any computing device with computing capability, including but not limited to a desktop computer, laptop, tablet, server, or smartphone.
As shown in Fig. 1, the verification method 100 includes: extracting micro-dot features on the product identification from an image of the product identification of the product under verification (step 101); extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image (step 102); and verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features (step 103).
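The three steps of method 100 can be sketched as a simple pipeline. The sketch below is purely illustrative: the function names, the toy 3x3 "image", and the stand-in feature computations are assumptions for demonstration, not the patent's actual image processing, CNN, or classifier.

```python
# Illustrative sketch of the three-step verification pipeline (steps 101-103).
# All names and data shapes are hypothetical stand-ins.

def extract_microdot_features(image):
    # Step 101 stand-in: treat dark pixels as micro-dot positions.
    return [(x, y) for y, row in enumerate(image)
            for x, v in enumerate(row) if v < 128]

def extract_image_features(image):
    # Step 102 stand-in: a trained CNN would output e.g. a 512-d vector;
    # here we return a 1-d "feature vector" (mean gray level).
    flat = [v for row in image for v in row]
    return [sum(flat) / len(flat)]

def verify(microdots, features, reference_dots, genuine_mean):
    # Step 103 stand-in: combine both evidence sources with toy rules.
    match = len(set(microdots) & set(reference_dots)) / max(len(reference_dots), 1)
    return match > 0.5 and abs(features[0] - genuine_mean) < 50

# toy 3x3 grayscale "image" with one dark micro-dot at (1, 1)
img = [[255, 255, 255],
       [255, 0, 255],
       [255, 255, 255]]
dots = extract_microdot_features(img)
feats = extract_image_features(img)
result = verify(dots, feats, reference_dots=[(1, 1)], genuine_mean=feats[0])
```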
In an embodiment of the present invention, verification step 103 includes: using a classifier trained by a machine learning algorithm to verify the authenticity of the product identification of the product under verification.
In an embodiment of the present invention, the machine learning algorithm used to extract image features is a convolutional neural network, and the machine learning algorithm used to train the classifier is one that can classify feature vectors, such as a support vector machine (SVM) or boosted trees. A convolutional neural network is a class of feedforward neural networks involving convolutional computation and having a deep structure; it has representation learning capability and can classify input information in a translation-invariant way according to its hierarchical structure. Convolutional neural networks are modeled on the biological visual perception mechanism and support both supervised and unsupervised learning; the sharing of convolution kernel parameters within their hidden layers and the sparsity of inter-layer connections allow them to learn grid-like features (such as pixels and audio) with a small amount of computation.
In an embodiment of the present invention, the verification method may further include: training the convolutional neural network and the classifier using multiple genuine identification images as positive samples and multiple counterfeit identification images as negative samples.
In an embodiment of the present invention, the image feature extraction step 102 includes: using the trained convolutional neural network to extract the image features from the image so as to output a feature vector describing them.
In an embodiment of the present invention, the classifier includes a first classifier, and the extracted image features include at least printing features related to the printing of at least a part of the product identification; the first classifier distinguishes genuine from counterfeit product identifications based on the printing features. The first classifier can be trained using positive and negative samples of product identifications. Verification step 103 may include: based on the extracted printing features, using the trained first classifier to output the probability that the product identification of the product under verification is genuine; and/or based on the extracted printing features, using the trained first classifier to output the probability that the product identification of the product under verification is counterfeit.
In an embodiment of the present invention, the printing features of a genuine product identification are features associated with at least one of the paper, ink, and printing equipment used in printing the genuine identification. Printing a product identification transfers a digital file onto physical paper or another carrier; when the same digital file is printed, complex combinations of printer settings, press type, ink, toner, or colorant, and paper characteristics can all cause the printed details to differ. These details embody the printing features. For example, depending on the paper, ink, or printing equipment used, the printed lines will differ subtly, such as fine jagged edges of different shapes or arrangements. For example, the two-dimensional code in a product identification contains many black and white blocks, and every black-white boundary may differ under different printing conditions. In addition, the printed color or gray level also varies with the paper, ink, or printing equipment. These printing differences are distributed over the whole printed area of the two-dimensional code, and they can be used to extract the printing features of the product identification. A counterfeit product identification produced by copying has printing features different from those of the genuine one. With a sufficient number of positive and negative samples, the convolutional neural network and the first classifier can be trained. During training, the convolutional neural network learns the printing features of the positive samples as well as the differing printing features of the negative samples; the trained network then has the ability to extract the printing features of the product identification under verification. The trained first classifier can compare the printing features contained in the image features extracted by the network with the printing features of the genuine identification, and output the probability that the product identification under verification is genuine or counterfeit.
In an embodiment of the present invention, the classifier may further include a second classifier, which judges the authenticity of the product identification based on the micro-dot features and the authenticity probability of the product identification output by the first classifier. The second classifier can be trained using positive and negative samples of product identifications. Verification step 103 may further include: comparing the extracted micro-dot features with the micro-dot features of the product identification saved in advance during or after product manufacture; composing a descriptor vector of the product identification based on the comparison result and the authenticity probability output by the first classifier; and judging, based on the descriptor vector, the authenticity of the product identification of the product under verification using the second classifier.
In an embodiment of the present invention, the descriptor vector includes data related to at least one of the following: the match rate between the extracted micro-dot features and the pre-saved micro-dot features; statistical parameters of the pixel distances, in the image coordinate system, between matched micro-dots and the pre-saved micro-dots; the number of micro-dots in the image of the product identification under verification that do not match the pre-saved micro-dot features; and the quality of the acquired image of the product identification.
In an embodiment of the present invention, the micro-dot feature extraction step 101 may include: using image processing techniques to extract from the image at least one of the shape, position, gray-scale, and color features of the micro-dots.
In an embodiment of the present invention, the product identification may include at least one of a barcode and a two-dimensional graphic code.
Fig. 2 is a schematic diagram of embedding micro-dot features into a product two-dimensional code in an embodiment; the micro-dot features 202 are too small to be shown in detail in the figure. To generate the micro-dot features, an algorithm first generates a specific high-dimensional random distribution map 201 of the micro-dots as the distribution characteristic of at least one of the position distribution, gray-scale distribution, color distribution, and micro-morphology of all micro-dot features; products of the same class or batch may share one distribution characteristic, while each individual product carries other, distinct micro-dot features for differentiation. For example, different batches of products may use different random distribution maps, while different products of the same batch use different micro-dots. The algorithm then samples the random distribution map to generate, for each product (or product identification or label), a uniquely identifying micro-dot feature 202, embeds the generated micro-dot features into the product's digital two-dimensional identification 203 (such as a quick response matrix code, i.e., a QR code) according to predetermined avoidance rules, and prints the QR code with embedded micro-dot features on the surface of the product or its packaging, or on the surface of a product label, forming a digital product identification (ID) with micro-dots. The avoidance rules may constrain at least one of the specific position distribution, gray-scale distribution, and color distribution of the micro-dots. For example, a position distribution avoidance rule can ensure that black or dark micro-dots are generated only in the white modules of the QR code, while gray-scale or color distribution avoidance rules can ensure that the gray level or color of the micro-dots satisfies certain gray-scale and saturation limits and does not interfere with the white modules of the QR code. Together these avoidance rules ensure that reading the QR code itself is not affected by the embedded micro-dot features, and that the QR code still satisfies the relevant national and/or international standards after the micro-dot features are added.
In some embodiments, white micro-dot features 202 may instead be embedded into the black modules of the QR code 203; the avoidance rule then restricts the micro-dots to be generated only within the black modules of the QR code, so that the QR code still satisfies the relevant national and/or international standards after the micro-dot features are added. White micro-dots retain the highest contrast inside the black modules of the QR code, and are produced during printing by short pauses in the inkjet flow.
A micro-dot feature consists at minimum of its two-dimensional coordinates (X, Y) as a position feature, and may include other optional features such as color, gray-scale, and shape. Typically, the non-reproducibility and anti-counterfeiting performance of micro-dot features come first from the random distribution of the two-dimensional positions of the dots, while the color, gray-scale, or shape features of the micro-dots can be used to further improve the anti-counterfeiting performance of the product. Randomly distributed micro-dot features can also form randomly distributed micro-dot texture features.
After or during production of the product identification, the micro-dot feature information on the identification needs to be saved in a database for subsequent product authenticity verification. The saved micro-dot feature information includes, for example, the randomly distributed position features and other features such as color, gray-scale, or shape.
As an example of the micro-dot features, Figs. 3(a) and 3(b) show the probability density function when a uniform distribution is used as the random distribution function of the micro-dots, and a micro-dot distribution map sampled from that distribution. The probability density function of the uniform distribution is:
PDF(x, y) = const
In the plot of the probability density function in Fig. 3(a), the Z coordinate is the probability density, and the X and Y coordinates indicate the micro-dot position (x, y). The micro-dot distribution map of Fig. 3(b) is obtained by sampling from the random distribution map of Fig. 3(a) when generating the micro-dot coordinates (x, y).
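Sampling micro-dot positions from a uniform PDF(x, y) = const while honoring a position avoidance rule (dots only in white modules) can be sketched with rejection sampling. This is a minimal illustration under assumed conventions: the QR code is a toy boolean matrix, `sample_microdots` is a hypothetical name, and a real generator would follow the patent's full avoidance rules.

```python
import random

def sample_microdots(qr_modules, n_dots, seed=None):
    """Sample n_dots distinct (x, y) positions uniformly over the module grid,
    keeping only positions in white modules (qr_modules[y][x] == 0), i.e.
    rejection sampling against the position avoidance rule."""
    rng = random.Random(seed)
    h, w = len(qr_modules), len(qr_modules[0])
    dots = set()
    while len(dots) < n_dots:
        x, y = rng.randrange(w), rng.randrange(h)  # uniform: PDF(x, y) = const
        if qr_modules[y][x] == 0:  # white module: dot allowed
            dots.add((x, y))
    return sorted(dots)

# toy 4x4 "QR" matrix: 1 = black module, 0 = white module
qr = [[1, 0, 1, 0],
      [0, 0, 1, 1],
      [1, 0, 0, 0],
      [0, 1, 0, 1]]
dots = sample_microdots(qr, n_dots=3, seed=42)
```

A fixed seed makes the sketch reproducible; in production each product (or label) would receive its own sample, giving it a uniquely identifying dot pattern.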
Fig. 4 shows a flowchart of a method 400 for verifying product authenticity according to the second embodiment of the present invention. In method 400, an image or picture of the product identification of the product under verification is first acquired (step 401). For example, after purchasing a product, the user can photograph the part of the product identification containing the barcode or QR code and transmit the image to the verifying party or verifying device for authenticity verification. In this embodiment, processing of the image containing the QR code comprises two parts: a convolutional neural network processing part (steps 402-405) and a micro-dot processing part (steps 406-409). Both parts first preprocess the image (steps 402 and 406), for example applying common preprocessing methods from image processing, such as brightness adjustment of the region containing the valid features (e.g., the QR code), cropping of the valid part, contrast enhancement, image sharpening, and image normalization.
In the convolutional neural network processing part, after preprocessing step 402, image features are extracted using a convolutional neural network algorithm (step 403): the image processed in preprocessing step 402 is used as input, the convolutional neural network serves as a single algorithmic module, and the final output is a feature vector, i.e., the image containing the QR code is quantized into a feature vector. The feature vector may consist of k floating-point numbers (for example a 1x512 float array), also called a k-dimensional feature vector, and describes the printing features, i.e., the unique fine details of the printed product identification, visible in the image, that arise from the physical paper, ink, printing equipment, etc. used during printing. In an embodiment of the present invention, the first classifier can be trained with positive and negative samples of multiple product identifications and the printing features extracted from them. The pre-trained first classifier can then analyze the product identification of the product under verification based on the image features extracted in step 403 (step 404); for example, the convolutional neural network algorithm can compare the extracted image features with the printing features of the positive samples and of the negative samples, thereby outputting the probability that the product identification of the product under verification is genuine or counterfeit (step 405).
The convolutional neural network may include layers such as Linear1, ReLU, Dropout(), Linear2, and Linear3, with the Linear3 output used in the end. Since only the genuine/counterfeit classification is of interest here, the output is a 2-dimensional vector, where p1 denotes the probability of being genuine and p2 the probability of being counterfeit.
Training uses the cross-entropy loss function:
H(y, p) = -∑_i y_i log(p_i)
where y is the target ground truth [y1, y2]; the genuine label is [1.0, 0.0] and the counterfeit label is [0.0, 1.0].
During training of the first classifier, the loss can be computed from the known ground truth of the samples. When the loss (‖y - y'‖, the absolute value of the difference between the ground truth and the classifier's current prediction) is judged to be smaller than a predetermined threshold, training of the convolutional neural network stops; when the loss is judged to be larger than the threshold, the neural network parameters and the parameters of the first classifier can be updated according to the loss value. The updated convolutional neural network and updated first classifier then continue extracting image features and judging their authenticity probabilities. When the loss value keeps decreasing and stabilizes at a sufficiently low value (the threshold), the first classifier can be considered trained.
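The cross-entropy loss above can be computed directly. The sketch below, with illustrative probability values, shows the expected behavior: a confident correct prediction yields a small loss, a wrong one a large loss.

```python
import math

def cross_entropy(y, p, eps=1e-12):
    """H(y, p) = -sum_i y_i * log(p_i); eps guards against log(0)."""
    return -sum(yi * math.log(pi + eps) for yi, pi in zip(y, p))

genuine = [1.0, 0.0]      # ground-truth label: genuine
good_pred = [0.9, 0.1]    # classifier confident and correct
bad_pred = [0.2, 0.8]     # classifier wrong

low_loss = cross_entropy(genuine, good_pred)   # -log(0.9), small
high_loss = cross_entropy(genuine, bad_pred)   # -log(0.2), large
```

Training drives the network parameters so that the loss settles below the predetermined threshold, as described above.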
The positive samples can be the QR-code labels of multiple genuine products, and the negative samples can be copies of those positive samples obtained by various means. A negative sample carries the same QR code, but its printing features differ in detail from those of the genuine label. After the convolutional neural network is initialized but before the first classifier is trained, the printing features of a product identification cannot yet be extracted. When positive and negative samples differ in printing features while all other image details are identical, they can be used to train the convolutional neural network to recognize the printing features of the positive samples and those of the negative samples, for example the type of printing feature (line, color, gray level, etc.), its location, and the degree of difference. Through continued training, the convolutional neural network acquires the ability to extract the printing features of the product identification under verification quickly and accurately.
In the micro-dot processing part, after preprocessing step 406, the micro-dot features in the image are extracted with a micro-dot extraction algorithm (step 407); image processing techniques can be used to read the randomly distributed micro-dot features in the region of the product identification under verification, including statistics based on at least one of the positions, sizes, colors, and gray levels of the micro-dots. For example, the gray-level information of the micro-dot regions in the image can be obtained by counting the size of each micro-dot region (e.g., the number of pixels it contains) or by computing the average RGB three-channel value of each region. The corresponding micro-dot features of the genuine product identification, saved in advance, are then retrieved from the database, and the micro-dot features read from the image are compared with the micro-dot features in the database (step 408); the result of the micro-dot feature comparison is output (step 409) as one basis for judging the authenticity of the product identification.
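The per-region statistics of step 407 (pixel count and mean gray level of each micro-dot region) can be illustrated with a simple connected-component pass over a grayscale image. This is a sketch under stated assumptions: the darkness threshold, 4-connectivity, and the function name are illustrative choices, not the patent's algorithm.

```python
from collections import deque

def microdot_stats(gray, dark_thresh=100):
    """Find 4-connected dark regions (candidate micro-dots) in a grayscale
    image and return, per region, its pixel count and mean gray level."""
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    stats = []
    for y in range(h):
        for x in range(w):
            if gray[y][x] < dark_thresh and not seen[y][x]:
                # BFS flood fill over one dark region
                q, pixels = deque([(x, y)]), []
                seen[y][x] = True
                while q:
                    cx, cy = q.popleft()
                    pixels.append(gray[cy][cx])
                    for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                        if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                                and gray[ny][nx] < dark_thresh:
                            seen[ny][nx] = True
                            q.append((nx, ny))
                stats.append({"size": len(pixels),
                              "mean_gray": sum(pixels) / len(pixels)})
    return stats

# toy image: one 2-pixel dot (values 10, 30) and one 1-pixel dot (value 50)
img = [[255, 10, 255],
       [255, 30, 255],
       [255, 255, 50]]
stats = microdot_stats(img)
```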
Then, the authenticity probability of the product identification output in step 405 and the micro-dot feature comparison result output in step 409 (with the different statistics normalized) are combined into a descriptor vector X (step 410). For example, the output authenticity probability becomes one feature dimension, say x1. The micro-dot comparison result may include several statistics obtained by quantifying the micro-dot feature matches, for example: the percentage of micro-dots found in the target QR code that match the micro-dots of the corresponding QR code in the database as x2; statistical parameters (such as mean and variance) of the pixel distances, in the image coordinate system, between matched micro-dots and the database micro-dots as x3 and x4; and a penalty for unmatched micro-dots (mismatches) as x5. Together this information forms a 5-dimensional descriptor vector for the product identification under verification.
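Assembling the 5-dimensional descriptor vector X = (x1, ..., x5) can be sketched as below. The nearest-neighbor matching rule, the `max_dist` tolerance, and the particular normalization are illustrative assumptions; the patent leaves these details open.

```python
from statistics import mean, pvariance
import math

def build_descriptor(p_genuine, detected, reference, max_dist=3.0):
    """Compose X = (x1..x5): first-classifier probability, micro-dot match
    rate, mean and variance of matched-dot pixel distances, and a penalty
    for unmatched dots. A detected dot "matches" if its nearest reference
    dot is within max_dist pixels (an illustrative rule)."""
    dists, unmatched = [], 0
    for (x, y) in detected:
        d = min(math.hypot(x - rx, y - ry) for rx, ry in reference)
        if d <= max_dist:
            dists.append(d)
        else:
            unmatched += 1
    x1 = p_genuine                            # first-classifier output
    x2 = len(dists) / len(reference)          # match rate
    x3 = mean(dists) if dists else max_dist   # mean pixel distance
    x4 = pvariance(dists) if len(dists) > 1 else 0.0
    x5 = unmatched / max(len(detected), 1)    # mis-match penalty
    return [x1, x2, x3, x4, x5]

ref = [(10, 10), (20, 5)]           # database micro-dots
det = [(11, 10), (20, 6), (40, 40)] # two near-matches, one spurious dot
X = build_descriptor(p_genuine=0.92, detected=det, reference=ref)
```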
In an embodiment of the present invention, the product identification images of all collected positive and negative samples can be processed to obtain the corresponding authenticity probabilities and micro-dot statistics, and hence the corresponding descriptor vectors, forming a sample data set; one part (e.g., 80%) can serve as a training set for training the second classifier, and the other part (e.g., 20%) as a test set. Based on this sample data set, the second classifier can be trained and tested with popular machine learning algorithms; candidate classifier types include support vector machines (SVM), boosted trees, decision trees, shallow neural networks, k-nearest neighbors, and random forests. The pre-trained second classifier can then classify the product identification under verification based on the descriptor features in the descriptor vector obtained in step 410 (namely the authenticity probability output by the first classifier and the micro-dot statistics) (step 411), thereby outputting the authenticity verdict for the product identification under verification (step 412).
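The 80/20 split and the second-classifier training can be sketched with a plain k-nearest-neighbor classifier over descriptor vectors (k-NN is among the candidate classifier types listed above). The 2-dimensional synthetic descriptors and all names below are assumptions made so the example stays self-contained.

```python
import random
import math

def knn_predict(train, x, k=3):
    """Classify descriptor vector x by majority vote of the k nearest
    training vectors (Euclidean distance); 1 = genuine, 0 = counterfeit."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

rng = random.Random(0)
# synthetic descriptors: genuine samples have high probability / match rate
genuine = [([rng.uniform(0.8, 1.0), rng.uniform(0.8, 1.0)], 1) for _ in range(50)]
fake = [([rng.uniform(0.0, 0.3), rng.uniform(0.0, 0.3)], 0) for _ in range(50)]
data = genuine + fake
rng.shuffle(data)

split = int(0.8 * len(data))          # 80% training set, 20% test set
train, test = data[:split], data[split:]

accuracy = sum(knn_predict(train, x) == y for x, y in test) / len(test)
```

With such clearly separated synthetic clusters the k-NN vote recovers every test label; real descriptor distributions overlap, which is why a trained classifier and continued retraining matter.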
In preparing the training image samples, a variety of phones on the market can be used to photograph several genuine product identifications under different lighting conditions, with the resulting images serving as positive samples; the same variety of phones can be used to photograph manufactured non-genuine labels under different lighting conditions, with the resulting images serving as negative samples. The positive and negative samples are then randomly divided, by ratio, into a training sample set and a test sample set.
The first and second classifiers can be trained continuously on the accumulated product identification samples, allowing increasingly accurate authenticity verdicts to be given.
Figs. 5A to 5C show three examples of convolutional neural network structures used by the method for extracting image features of the product identification in embodiments of the present invention: Fig. 5A shows a VGG network, Fig. 5B a ResNet structure, and Fig. 5C an Inception structure, illustrating three ways of extracting image features with a convolutional neural network. The present invention is not limited to these three network structures.
In Figs. 5A to 5C, 501 denotes the input layer, which receives the preprocessed image of the product identification. 502 denotes a convolutional layer, whose function is to extract features from the input image data; it contains multiple convolution kernels, and each element of a kernel has a weight coefficient and a bias, analogous to a neuron of a feedforward neural network. Each neuron in a convolutional layer is connected to multiple neurons in a nearby region of the previous layer. The convolutional layer extracts features from every small region of the input image, convolving with multiple filters to obtain multiple feature maps. 503 denotes a pooling layer: after feature extraction in convolutional layer 502, the output feature maps are passed to pooling layer 503 for feature selection and information filtering. The pooling layer contains a preset pooling function whose role is to replace the value at a single point of a feature map with a statistic of its neighboring region. 504 denotes a fully connected layer, equivalent to the hidden layer of a traditional feedforward neural network. Fully connected layers sit at the end of the hidden part of the convolutional neural network and pass signals only to other fully connected layers; their role is to combine the extracted features nonlinearly to obtain the output. A fully connected layer has no feature extraction capability of its own; it attempts to achieve the learning objective using the existing higher-order features. 505 denotes a residual block, a combination of several convolutional layers with a skip connection, serving as the building unit of the ResNet structure. 506 denotes an Inception module, a hidden-layer construct obtained by stacking multiple convolutional and pooling layers; concretely, an Inception module performs several different types of convolution and pooling operations in parallel, uses same-padding so that these operations yield feature maps of the same size, then concatenates the channels of these feature maps in an array and passes them through an activation function. 510 denotes the output layer, which outputs the image features extracted by the convolutional neural network.
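The convolution (502) and max-pooling (503) operations described above can be illustrated in a few lines. This is a generic sketch of the two operations, not the patent's VGG/ResNet/Inception networks; the vertical-edge kernel and toy image are chosen for demonstration.

```python
def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation, as CNNs compute it):
    slide the kernel over the image and sum elementwise products."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(img) - kh + 1):
        row = []
        for x in range(len(img[0]) - kw + 1):
            row.append(sum(img[y+i][x+j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Replace each size x size block of the feature map by its maximum."""
    return [[max(fmap[y+i][x+j] for i in range(size) for j in range(size))
             for x in range(0, len(fmap[0]) - size + 1, size)]
            for y in range(0, len(fmap) - size + 1, size)]

# vertical-edge kernel applied to a toy image with a dark-to-bright step
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
edge = [[-1, 1],
        [-1, 1]]
fmap = conv2d(img, edge)   # strong response along the step
pooled = max_pool(fmap)
```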
A device for verifying the authenticity of a product according to an embodiment of the present invention may include: a memory for storing instructions; and a processor coupled to the memory, wherein executing the stored instructions causes the processor to perform the method according to the above embodiments of the invention. The memory may also store a database containing the micro-dot features of genuine product identifications saved during or after product manufacture; the micro-dot features may include at least one of the shape, position, gray-scale, and color features of the micro-dots.
The memory of this embodiment may also store a sample library comprising multiple genuine identification images as positive samples and multiple counterfeit identification images as negative samples. The processor is configured to use at least some of the samples in the sample library to train the convolutional neural network, as well as the first classifier and the second classifier used for verifying product identifications.
Fig. 6 shows a structural block diagram of an apparatus 600 for verifying the authenticity of a product according to an embodiment of the present invention. The apparatus 600 includes: a micro-dot feature extraction module 601 for extracting micro-dot features on the product identification from an image of the product identification of the product under verification; an image feature extraction module 602 for extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and a verification module 603 for verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features. The apparatus 600 shown in Fig. 6 can be implemented in software, hardware, or a combination of both, and can be designed to include corresponding modules implementing the above method embodiments of the present invention for verifying product authenticity.
The product verification method and device provided by the above embodiments of the present invention combine, for example, a product identification containing a QR code or barcode, micro-dot features, and image features, greatly improving the accuracy of authenticity verification and allowing users who purchase products to photograph the product identification image with a variety of phones or cameras under a variety of lighting conditions and still perform accurate verification.
The embodiments of the present invention disclosed above are exemplary, not limiting. Those skilled in the art will understand that various variations, modifications, and changes can be made to the embodiments disclosed above without departing from the essence of the invention, and all such variations, modifications, and changes fall within the scope of protection of the present invention. The scope of protection of the present invention is therefore defined by the appended claims.

Claims (16)

  1. A method for verifying the authenticity of a product, the product identification of the product carrying randomly distributed micro-dots, the method comprising:
    extracting micro-dot features on the product identification from an image of the product identification of the product under verification;
    extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and
    verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features.
  2. The method according to claim 1, wherein the verifying step comprises:
    using a classifier trained by a machine learning algorithm to verify the authenticity of the product identification of the product under verification.
  3. The method according to claim 2, wherein the machine learning algorithm used to extract the image features is a convolutional neural network, and the machine learning algorithm used to train the classifier is a machine learning algorithm capable of classifying feature vectors.
  4. The method according to claim 3, further comprising:
    training the convolutional neural network and the classifier using multiple genuine identification images as positive samples and multiple counterfeit identification images as negative samples.
  5. The method according to claim 4, wherein the step of extracting image features comprises:
    using the trained convolutional neural network to extract the image features from the image so as to output a feature vector describing the image features.
  6. The method according to claim 2, wherein the extracted image features include at least printing features related to the printing of at least a part of the product identification; the classifier includes a first classifier that distinguishes genuine from counterfeit product identifications based on the printing features; and the first classifier is trained using positive and negative samples of product identifications;
    wherein the verifying step comprises:
    based on the printing features, using the trained first classifier to output the probability that the product identification of the product under verification is genuine; and/or
    based on the printing features, using the trained first classifier to output the probability that the product identification of the product under verification is counterfeit.
  7. The method according to claim 6, wherein the printing features are features associated with at least one of the paper, ink, and printing equipment used in printing the product identification of the positive samples.
  8. The method according to claim 6, wherein the classifier further includes a second classifier that judges the authenticity of the product identification based on the micro-dot features and the authenticity probability of the product identification of the product under verification output by the first classifier; and the second classifier is trained using positive and negative samples of product identifications;
    wherein the verifying step further comprises:
    comparing the extracted micro-dot features with the micro-dot features of the product identification saved in advance during or after manufacture of the anti-counterfeit product;
    composing a descriptor vector of the product identification based on the comparison result and the authenticity probability of the product identification output by the first classifier; and
    judging, based on the descriptor vector, the authenticity of the product identification of the product under verification using the second classifier.
  9. The method according to claim 8, wherein the descriptor vector includes data related to at least one of the following: the match rate between the extracted micro-dot features and the pre-saved micro-dot features; statistical parameters of the pixel distances, in the image coordinate system, between matched micro-dots and the pre-saved micro-dots; the number of micro-dots in the image of the product identification of the product under verification that do not match the pre-saved micro-dot features; and the quality of the image.
  10. The method according to claim 1, wherein the step of extracting micro-dot features comprises:
    using image processing techniques to extract from the image at least one of the shape, position, gray-scale, and color features of the micro-dots.
  11. The method according to claim 1, wherein the product identification includes at least one of a barcode and a two-dimensional graphic code.
  12. An apparatus for verifying the authenticity of a product, the product identification of the product carrying randomly distributed micro-dots, the apparatus comprising:
    a micro-dot feature extraction module for extracting micro-dot features on the product identification from an image of the product identification of the product under verification;
    an image feature extraction module for extracting, using a machine learning algorithm, image features of at least a part of the product identification from the image; and
    a verification module for verifying the authenticity of the product identification of the product under verification based on the extracted micro-dot features and image features.
  13. A device for verifying the authenticity of a product, the product identification of the product carrying randomly distributed micro-dots, the device comprising:
    a memory for storing instructions; and
    a processor coupled to the memory, the instructions, when executed by the processor, causing the processor to perform the method according to any one of claims 1 to 11.
  14. The device according to claim 13, wherein a database is further stored in the memory; the database contains micro-dot features of product identifications saved during or after manufacture of the product, the micro-dot features including at least one of the shape, position, gray-scale, and color features of the micro-dots.
  15. The device according to claim 13, wherein a sample library is further stored in the memory, the sample library including multiple genuine identification images as positive samples and multiple counterfeit identification images as negative samples; and the processor is configured to use at least some of the samples in the sample library to train a convolutional neural network, as well as a first classifier and a second classifier for verifying product identifications.
  16. A computer-readable storage medium storing executable instructions that, when executed by a computer, cause the computer to perform the method according to any one of claims 1 to 11.
PCT/CN2019/121446 2019-11-28 2019-11-28 用于验证产品的真伪的方法和设备 WO2021102770A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2019/121446 WO2021102770A1 (zh) 2019-11-28 2019-11-28 用于验证产品的真伪的方法和设备
CN201980102577.4A CN114746864A (zh) 2019-11-28 2019-11-28 用于验证产品的真伪的方法和设备
DE112019007487.3T DE112019007487T5 (de) 2019-11-28 2019-11-28 Verfahren und gerät zur verifizierung der echtheit von produkten

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/121446 WO2021102770A1 (zh) 2019-11-28 2019-11-28 用于验证产品的真伪的方法和设备

Publications (1)

Publication Number Publication Date
WO2021102770A1 true WO2021102770A1 (zh) 2021-06-03

Family

ID=76129807

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/121446 WO2021102770A1 (zh) 2019-11-28 2019-11-28 用于验证产品的真伪的方法和设备

Country Status (3)

Country Link
CN (1) CN114746864A (zh)
DE (1) DE112019007487T5 (zh)
WO (1) WO2021102770A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113435219A (zh) * 2021-06-25 2021-09-24 上海中商网络股份有限公司 防伪检测方法、装置、电子设备及存储介质

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN117253262B (zh) * 2023-11-15 2024-01-30 南京信息工程大学 一种基于共性特征学习的伪造指纹检测方法及装置

Citations (3)

Publication number Priority date Publication date Assignee Title
CN103208067A (zh) * 2013-03-13 2013-07-17 张小北 防伪系统及其标签的形成、嵌入、解读、鉴别及权属改变方法
CN108470201A (zh) * 2018-01-24 2018-08-31 重庆延伸科技开发有限公司 一种随机彩色点阵标签防伪系统
CN108509965A (zh) * 2017-02-27 2018-09-07 顾泽苍 一种超深度强对抗学习的机器学习方法



Also Published As

Publication number Publication date
DE112019007487T5 (de) 2022-03-31
CN114746864A (zh) 2022-07-12

Similar Documents

Publication Publication Date Title
US10885531B2 (en) Artificial intelligence counterfeit detection
US11625805B2 (en) Learning systems and methods
US11450152B2 (en) Detection of manipulated images
Fatemifar et al. Combining multiple one-class classifiers for anomaly based face spoofing attack detection
WO2021179157A1 (zh) 用于验证产品真伪的方法和装置
KR20080033486A (ko) 서포트 벡터 머신 및 얼굴 인식에 기초한 자동 생체 식별
CN110427972B (zh) 证件视频特征提取方法、装置、计算机设备和存储介质
WO2021102770A1 (zh) 用于验证产品的真伪的方法和设备
Uddin et al. Image-based approach for the detection of counterfeit banknotes of Bangladesh
Sowmya et al. Significance of processing chrominance information for scene classification: a review
Khuspe et al. Robust image forgery localization and recognition in copy-move using bag of features and SVM
EP3982289A1 (en) Method for validation of authenticity of an image present in an object, object with increased security level and method for preparation thereof, computer equipment, computer program and appropriate reading means
US20240112484A1 (en) Copy prevention of digital sample images
CN110415424B (zh) 一种防伪鉴定方法、装置、计算机设备和存储介质
Akram et al. Weber Law Based Approach forMulti-Class Image Forgery Detection.
Sabeena et al. Digital image forgery detection using local binary pattern (LBP) and Harlick transform with classification
Abraham Digital image forgery detection approaches: A review and analysis
Theresia et al. Image Forgery Detection of Spliced Image Class in Instant Messaging Applications
Prabu et al. Robust Attack Identification Strategy to Prevent Document Image Forgeries by using Enhanced Learning Methodology
Harris et al. An Improved Signature Forgery Detection using Modified CNN in Siamese Network
Gakhar Local Image Patterns for Counterfeit Coin Detection and Automatic Coin Grading
Al-Frajat Selection of Robust Features for Coin Recognition and Counterfeit Coin Detection
Prasad et al. Influence of Standalone and Ensemble Classifiers in Face Spoofing Detection using LBP and CNN Models
Abdullakutty Unmasking the imposters: towards improving the generalisation of deep learning methods for face presentation attack detection.
CN115775409A (zh) 一种人脸图像防篡改融合检测方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19954436

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 19954436

Country of ref document: EP

Kind code of ref document: A1