CN112036902A - Product authentication method and device based on deep learning, server and storage medium - Google Patents

Product authentication method and device based on deep learning, server and storage medium

Info

Publication number: CN112036902A
Application number: CN202010673651.2A
Authority: CN (China)
Prior art keywords: texture, product, feature, model, deep learning
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 陈昌盛, 蔡素到
Original and Current Assignee: Shenzhen University
Application filed by Shenzhen University
Priority and filing date: 2020-07-14
Publication date: 2020-12-04

Classifications

    • G06Q30/0185 — Commerce; certifying business or products; product, service or business identity fraud
    • G06F18/22 — Pattern recognition; matching criteria, e.g. proximity measures
    • G06F18/2415 — Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/25 — Pattern recognition; fusion techniques
    • G06K7/10861 — Optical scanning of record carriers; sensing of data fields affixed to objects or articles, e.g. coded labels
    • G06N3/045 — Neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N3/047 — Neural networks; probabilistic or stochastic networks
    • G06N3/08 — Neural networks; learning methods

Abstract

The invention discloses a deep-learning-based product authentication method, device, server and storage medium. The method comprises the following steps: obtaining a verification texture picture of a product; identifying a first texture feature of the verification texture picture based on a first model; and comparing the first texture feature with a second texture feature based on a second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product. The method judges similarity by extracting and comparing texture features, thereby identifying the authenticity of the product; it provides a good identification effect when products of a new class appear, and the existing product classes need not be frequently updated for new products.

Description

Product authentication method and device based on deep learning, server and storage medium
Technical Field
The invention relates to the technical field of product anti-counterfeiting, and in particular to a deep-learning-based product authentication method and device, a server, and a storage medium.
Background
Establishing a low-cost, convenient and universal product anti-counterfeiting system that addresses the problem of counterfeit products in a unified way is of significant research value.
Texture is an important visual cue: it is a feature that is ubiquitous in images yet difficult to describe, and it is also an inherent attribute of a product. Texture is closely related to a product's material, and the material is closely related to the product's authenticity. The depth texture of a product can therefore be used to verify its authenticity from a computer-vision perspective.
In addition, manufacturers continually iterate on their own products. After a new product appears, manufacturers, merchants and consumers all need to distinguish genuine items from counterfeits. An anti-counterfeiting scheme therefore has practical significance if, when new products or other new classes appear, it can effectively authenticate both new and old products without retraining the whole network. A new anti-counterfeiting method is thus needed that can distinguish previously unseen products and is low-cost, effective, universal and robust against attacks.
In the prior art, a pioneering New York company, Entrupy, proposed a method for product anti-counterfeiting that uses microscopic features of the product's surface texture, and it has been applied in practice to authenticate textured products. Specifically, a microscopy device was used to collect 3 million microscopic pictures of 20 classes of leather products, which were stored in an online database and used to train a classical deep-learning classification network. A collected texture image passes first through four convolutional layers and then through two fully-connected layers, from which a 4096-dimensional depth texture feature is extracted; finally, the 20 classes of leather texture pictures are classified with a softmax classifier. To use this anti-counterfeiting method, merchants and consumers must first purchase the company's microscopic scanner, then scan and upload a microscopic image of the purchased product for comparison against the online database; if the comparison succeeds, the product is identified as genuine, otherwise as counterfeit.
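For illustration only, a minimal PyTorch sketch of such a network (four convolutional layers, two fully-connected layers, a 4096-dimensional depth texture feature, and a 20-way softmax) might look as follows; the exact layer widths and strides are assumptions, since the prior-art architecture is not fully specified here:

```python
import torch
import torch.nn as nn

class MicroTextureNet(nn.Module):
    """Illustrative 4-conv / 2-FC texture classifier; all layer sizes are
    assumed, not taken from the cited prior art."""
    def __init__(self, num_classes: int = 20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc1 = nn.Linear(256 * 4 * 4, 4096)  # 4096-dim depth texture feature
        self.fc2 = nn.Linear(4096, num_classes)  # 20-way classification head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = torch.relu(self.fc1(self.features(x).flatten(1)))
        return torch.softmax(self.fc2(feat), dim=1)
```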
Also in the prior art, "Real or Fake: Mobile Device Drug Packaging Authentication" proposes a method for drug anti-counterfeiting based on texture images of drug packaging. Specifically, mobile devices were used to collect pictures of the front and back of 45 different medicine boxes produced by 28 factories, of three kinds of outer drug packaging, and of the drugs' codes. Traditional texture-extraction methods such as the Local Binary Pattern (LBP), the Scale-Invariant Feature Transform (SIFT) and the Gray-Level Co-occurrence Matrix (GLCM) are used to extract shallow features of the outer packaging and medicine-box pictures from the acquired images, and the extracted features are then fed into linear Support Vector Machine (SVM) classifiers. In this way existing genuine drugs can be classified; during authentication only genuine drugs yield a valid classification, so counterfeit products can be identified.
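A rough sketch of that shallow-feature pipeline, using scikit-image and scikit-learn, is shown below; the dataset variables are hypothetical placeholders, and only the LBP branch of the three descriptors is illustrated:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(gray: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    """Uniform-LBP histogram: a shallow texture descriptor of a grayscale image."""
    codes = local_binary_pattern(gray, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# X_train (list of grayscale packaging images) and y_train (genuine-drug class
# labels) are hypothetical placeholders for the collected dataset.
# features = np.stack([lbp_histogram(img) for img in X_train])
# clf = LinearSVC().fit(features, y_train)   # one shallow-feature branch + SVM
```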
Disclosure of Invention
In view of this, embodiments of the present invention provide a deep-learning-based product authentication method, apparatus, server and storage medium, to address the difficulty of authenticating new products.
In a first aspect, an embodiment of the present invention provides a product authentication method based on deep learning, including:
obtaining a verification texture picture of a product;
identifying a first texture feature of the verification texture picture based on a first model;
and comparing the first texture feature with a second texture feature based on a second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
In a second aspect, an embodiment of the present invention further provides a product authentication device based on deep learning, including:
the image acquisition module is used for acquiring a verification texture image of the product;
the first texture feature extraction module is used for identifying first texture features of the verification texture picture based on a first model;
and the identification module is used for comparing the first texture feature with the second texture feature based on the second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
In a third aspect, an embodiment of the present invention further provides a server, including a memory and a processor, where the memory stores a computer program executable by the processor, and the processor executes the computer program to implement the product authentication method based on deep learning as described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the product authentication method based on deep learning as described above.
According to the technical solution provided by the embodiments of the present invention, based on a verification texture picture of the product taken by a user, a first texture feature of the verification texture picture is identified by the first model, and the texture two-dimensional code attached to the product is decoded to obtain a second texture feature. The first and second texture features are compared by the second model, and the authenticity of the product is judged from the comparison result. The first and second models do not simply retrieve or classify the product against existing classes; instead, they extract texture features and judge similarity by comparison, thereby identifying the authenticity of the product. This provides a good identification effect when products of a new class appear, without frequently updating the existing product classes for new products.
Drawings
FIG. 1 is a flowchart illustrating a method for product authentication based on deep learning according to an embodiment of the present invention;
FIG. 2 is a sub-flowchart of a product authentication method based on deep learning according to an embodiment of the present invention;
FIG. 3 is a sub-flowchart of a deep learning based product authentication method according to an embodiment of the present invention;
FIG. 4 is a sub-flowchart of a product authentication method based on deep learning according to a second embodiment of the present invention;
FIG. 5 is a flowchart of model training according to a second embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a product authentication device based on deep learning according to a third embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a server in the fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It is to be further noted that, for the convenience of description, only a part of the structure relating to the present invention is shown in the drawings, not the whole structure.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. Processing may correspond to methods, functions, procedures, subroutines, and the like.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but these orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "plurality" or "batch" is intended to mean at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
FIG. 1 is a flowchart of a deep-learning-based product authentication method according to a first embodiment of the present invention. The method may be executed by a terminal or a server; this embodiment takes the server as an example. The method specifically includes:
and S110, obtaining a verification texture picture of the product.
The verification texture picture is a picture which is shot by a user through an image acquisition device and comprises product texture information when the user verifies the authenticity of a product.
When a user receives a product to be verified, the user can use a mobile device such as a mobile phone to shoot a verification texture picture of the product, and the server receives the verification texture picture sent by the mobile device through a network and the like.
And S120, identifying a first texture feature of the verification texture picture based on the first model.
The first model is a trained deep learning neural network and is used for extracting deep texture features in the image. The first texture feature is extracted by the first model according to a verification texture picture shot by a user.
Specifically, a pre-trained first model is arranged in the server, and after the server receives the verification texture picture, first texture features of the verification texture picture are extracted through the first model.
S130, comparing the first texture feature with a second texture feature based on the second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
The second model is a deep-learning neural network for comparing texture features, i.e., for measuring the similarity between different texture features. The texture-feature two-dimensional code is an anti-counterfeiting two-dimensional code generated by the merchant/manufacturer and placed on the product. The second texture feature is obtained by decoding this two-dimensional code, and the decoding is performed by the server.
Specifically, the texture-feature two-dimensional code is photographed by the user with a mobile device and uploaded to the server. After receiving it, the server decodes the two-dimensional code with a set decoding algorithm to obtain the second texture feature, and inputs the first and second texture features into the second model for comparison. If the comparison shows that the similarity reaches a similarity threshold, the features are regarded as consistent and the product is genuine; if the similarity does not reach the threshold, the product is counterfeit.
More specifically, in this embodiment the first model differs from a traditional deep-learning classification network: the final softmax layer of the classical neural network is discarded, and the feature vector is output directly for the second model to perform similarity measurement.
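As a minimal sketch of such a head-less feature extractor, the snippet below removes the classification head from a pre-trained torchvision backbone; ResNet-18 merely stands in for the BN-Inception network mentioned in the second embodiment, which torchvision does not ship:

```python
import torch
import torchvision.models as models

# "First model" sketch: a classical CNN with its softmax head removed so it
# emits a feature vector directly. ResNet-18 is an illustrative stand-in.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # drop the final classification/softmax layer
backbone.eval()

with torch.no_grad():
    img = torch.randn(1, 3, 224, 224)       # stand-in verification texture picture
    first_texture_feature = backbone(img)   # 512-dim embedding for the second model
```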
More specifically, in one embodiment, as shown in FIG. 2, the process of generating the texture-feature two-dimensional code includes steps S101-S103:
s101, collecting a sample texture picture of the product.
The sample texture pictures are pictures of the product texture acquired under different angles, illuminations and brightnesses. Because they are acquired in a variety of environments, errors in subsequent comparisons caused by the user's shooting environment can be avoided.
S102, identifying a second texture feature of the sample texture pictures based on the first model.
The first model used here is likewise already trained; inputting a sample texture picture into the first model yields the corresponding second texture feature.
The first model has strong generalization ability and is used to extract common features and distinguishing features, where common features represent texture features shared by different texture pictures and distinguishing features represent texture features specific to each texture picture. Specifically, the extracted second texture feature comprises second common features and second distinguishing features: the second common features identify texture features shared by the plurality of sample texture pictures, and the second distinguishing features identify texture features specific to each sample texture picture.
S103, generating a texture-feature two-dimensional code based on the second texture feature, and attaching it to the product.
The texture feature obtained by the first model is a feature vector, which can be stored in a two-dimensional code by a preset encoding algorithm. After the second texture feature is obtained, it is stored in a two-dimensional code according to the preset encoding algorithm to produce the texture-feature two-dimensional code; the second texture feature can later be recovered from the code by the corresponding decoding algorithm. Attaching the texture two-dimensional code to the product directly establishes the link between the code and the product, and using it as an anti-counterfeiting label raises the cost of producing counterfeit and shoddy goods.
More specifically, in one embodiment, step S102, as shown in FIG. 3, includes steps S1021-S1022:
and S1021, extracting depth texture features based on the sample texture picture.
And S1022, performing post-processing on the depth texture feature to obtain a second low-dimensional compact texture feature.
Similarly, the first texture feature obtained by identifying the texture picture based on the first model is also obtained by post-processing.
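The post-processing method is likewise not fixed here; one common choice that yields a low-dimensional, compact feature is PCA followed by L2 normalization, sketched below with placeholder data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

# deep_feats stands in for depth texture features from the first model,
# one row per sample texture picture (placeholder random data).
deep_feats = np.random.randn(500, 1024).astype(np.float32)

pca = PCA(n_components=128).fit(deep_feats)      # reduce to a compact 128 dims
compact = normalize(pca.transform(deep_feats))   # L2-normalize for comparison
```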
In the technical solution of this embodiment, based on a verification texture picture of the product taken by the user, the first model identifies the first texture feature of the verification texture picture, and the texture two-dimensional code attached to the product is decoded to obtain the second texture feature. The second model compares the first and second texture features, and the authenticity of the product is judged from the comparison result. Rather than simply retrieving or classifying the product against existing classes, the two models extract texture features and judge their similarity by comparison, thereby identifying the authenticity of the product; this provides a good identification effect when products of a new class appear and avoids frequently updating the existing product classes for new products.
Example two
On the basis of the first embodiment, this embodiment supplements part of the content to further explain some of the steps, specifically including:
As shown in FIG. 4, step S130 specifically includes steps S131-S132:
S131, mapping the first texture feature and the second texture feature to a distance space, and determining the similarity and the distance between them.
S132, if the similarity is greater than or equal to a similarity threshold and the distance is less than or equal to a distance threshold, the product is genuine; if the similarity is less than the similarity threshold or the distance is greater than the distance threshold, the product is counterfeit.
In machine learning and data mining, we often need to know the magnitude of the difference between individuals in order to evaluate their similarity and category. Common examples are correlation analysis in data analysis and the classification and clustering algorithms of data mining, such as K-Nearest Neighbors (KNN) and K-Means. In this embodiment, identifying whether the product is genuine amounts to analyzing, from the first and second texture features, the difference between the products they correspond to; when the difference is too large, the product may be counterfeit. Distance metrics and similarity metrics are the main ways of quantifying such differences: the smaller the similarity, the larger the difference between the two products, and the farther the distance, the larger the difference. The specific distance metric is not limited here.
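A minimal sketch of the dual-threshold decision of steps S131-S132 follows, assuming cosine similarity and Euclidean distance as the (unspecified) metrics; the threshold values are placeholders:

```python
import numpy as np

def authenticate(first: np.ndarray, second: np.ndarray,
                 sim_thresh: float = 0.9, dist_thresh: float = 0.5) -> bool:
    """Dual-threshold decision of steps S131-S132; thresholds are placeholders."""
    sim = float(first @ second /
                (np.linalg.norm(first) * np.linalg.norm(second)))  # similarity
    dist = float(np.linalg.norm(first - second))                   # distance
    return sim >= sim_thresh and dist <= dist_thresh               # True = genuine
```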
During model training, the first model and the second model form a complete network framework that is trained together. As shown in FIG. 5, the training process of the complete network framework includes steps S210-S240; since the second model needs to compare two texture features, training is performed on image pairs:
S210, obtaining sample pictures under different illuminations, different spatial resolutions and different rotation angles according to a sampling strategy to form a target dataset, and using one half of the target dataset as the test set and the other half as the training set.
The sampling strategy mines intra-class and inter-class texture information from sample pairs (image pairs) of known classes, and is used to select sample pairs rich in texture information, such as similar pairs from different classes or dissimilar pairs from the same class, to obtain a target dataset for training the network framework. The target dataset comprises a plurality of sample pairs.
S220, training the designed initial models by cross-validation on the training set to obtain a trained model.
Specifically, there are generally several initial models; denote a single initial model M_i. In cross-validation training on the training set, the samples in the training set S are divided equally into k parts S1, S2, ..., Sk. One part Sj (1 ≤ j ≤ k) is selected as the validation set, and the remaining sample pairs in the training set are used as training samples to train the initial model M_i, yielding M'_i; Sj is then used to validate the corresponding M'_i, producing an error E. Each M_i thus has k corresponding models M'_i and k errors E, and the mean of these k errors is taken as the generalization error E_i of M_i. The initial model M_i with the smallest generalization error E_i is selected and trained on the full training set S to obtain the trained model.
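The model-selection loop described above can be sketched as follows with scikit-learn's KFold; `model_factory` and the candidate list are hypothetical stand-ins for the designed initial models:

```python
import numpy as np
from sklearn.model_selection import KFold

def generalization_error(model_factory, X, y, k: int = 4) -> float:
    """Mean validation error of one candidate initial model M_i over k folds."""
    errors = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True).split(X):
        model = model_factory()                # fresh M_i for this fold
        model.fit(X[train_idx], y[train_idx])  # train on S minus Sj
        errors.append(1.0 - model.score(X[val_idx], y[val_idx]))  # error on Sj
    return float(np.mean(errors))

# candidates is a hypothetical list of initial-model constructors; the model
# with the smallest generalization error is retrained on the full set S.
# best = min(candidates, key=lambda f: generalization_error(f, X, y))
# trained_model = best().fit(X, y)
```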
S230, testing the trained model on the test set; if the test result reaches a preset expectation, the model is taken as the final trained model, and if not, the process returns to step S210 for retraining.
The test set measures the practical effect of the trained model. Its metric is P@1, i.e., whether the nearest retrieved sample for every test sample belongs to the same class; MAP@R and R-Precision (RP) additionally consider the ordering quality of the retrieval. The preset expectation is set according to the chosen metric and adjusted accordingly if the metric changes. The trained model can both extract texture features and compare them; that is, it comprises the first model and the second model, with the output of the first model serving as the input of the second model.
Specifically, in one embodiment, the finally extracted depth texture feature has 128 dimensions, and the canvas textures (46 classes) of the Oulu Texture (Outex) dataset are used as the target dataset; Outex is among the texture datasets with the largest number of texture classes currently available. The dataset was acquired in a laboratory environment under 3 different illuminations, 6 different spatial resolutions and 9 rotation angles, so it exhibits a degree of intra-class variation. Half of the classes (23) were used for training and the remaining half for testing. Training used four-fold cross-validation, with the training and validation sets containing mutually disjoint classes; the sampling strategy was to use all sample pairs in the experiments. The feature-extraction network is a BN-Inception network pre-trained on ImageNet with the final softmax layer removed; a fully-connected layer is appended to the output features and serves as the extracted depth feature. The loss functions used include Contrastive Loss, Triplet Margin Loss, Multi-Similarity Loss, Normalized Softmax Loss, ArcFace Loss and ProxyNCA Loss, with the loss-function parameters tuned by 50 rounds of Bayesian optimization. In the validation and test process, KNN and K-Means are used to measure the effect of the metric learning. With P@1 as the metric, i.e., whether the nearest retrieved sample for every test sample belongs to the same class, the method reaches a precision of 99.93% to 99.99%, which clearly shows that the network is effective for nearest-neighbor retrieval of new classes. MAP@R and RP additionally consider the ordering quality of the retrieval. When all new and old classes are tested together, the average P@1 precision is about 98%, indicating that the network can indeed authenticate both new and old classes.
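For reference, P@1 can be computed with a simple nearest-neighbour search over the extracted embeddings, as sketched below on placeholder data:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def precision_at_1(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """P@1: fraction of samples whose nearest neighbour (excluding the sample
    itself) belongs to the same class."""
    nn = NearestNeighbors(n_neighbors=2).fit(embeddings)
    _, idx = nn.kneighbors(embeddings)   # column 0 is the query sample itself
    return float(np.mean(labels[idx[:, 1]] == labels))

# Placeholder embeddings/labels standing in for test-set depth features.
print(precision_at_1(np.random.randn(200, 128), np.random.randint(0, 23, 200)))
```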
The deep-learning-based product authentication method provided by this embodiment builds on the first embodiment by detailing how similarity and distance determine the authenticity of a product, and provides a concrete training process for the first and second models. The training does not simply teach retrieval or classification of samples from existing classes; it trains the ability to recognize depth texture features and to compare texture features by distance and similarity. The first model of this embodiment can therefore extract depth texture features from texture pictures of both new and old classes, and the second model can compare the similarity and distance of the corresponding texture features, so that products can be authenticated quickly and effectively even when texture pictures of a new class are received.
EXAMPLE III
FIG. 6 shows a deep-learning-based product authentication device 300 according to a third embodiment, which specifically includes the following modules:
the picture acquiring module 310 is configured to acquire a verification texture picture of a product.
A first texture feature extraction module 320, configured to identify a first texture feature of the verification texture picture based on a first model.
And the identifying module 330 is configured to compare the first texture feature with a second texture feature based on a second model to determine whether the product is authentic, where the second texture feature is obtained by decoding the texture-feature two-dimensional code built into the product.
More specifically, in an embodiment, the device further includes a sample texture picture acquisition module, a second texture feature extraction module, and a two-dimensional code generation module:
the sample texture picture acquisition module is used for acquiring a sample texture picture of a product.
The second texture feature extraction module is used for identifying second texture features of the sample texture picture based on the first model.
The two-dimensional code generation module is used for generating a texture-feature two-dimensional code based on the second texture feature and attaching the texture-feature two-dimensional code to the product.
More specifically, in an embodiment, the second texture feature extraction module includes a depth texture feature extraction unit and a post-processing unit:
the depth texture feature extraction unit is used for extracting depth texture features based on the sample texture picture.
And the post-processing unit is used for post-processing the depth texture features to obtain second low-dimensional compact texture features.
More specifically, in an embodiment, the first model is configured to extract a common feature and a distinguishing feature of the texture picture, where the common feature represents a texture feature common to different texture pictures, and the distinguishing feature represents a texture feature specific to different texture pictures.
More specifically, in one embodiment, the authentication module includes:
and the similarity and distance determining unit is used for mapping the first texture feature and the second texture feature to a distance space and determining the similarity and the distance between the first texture feature and the second texture feature.
And the authenticity identification unit is used for determining that the product is true if the similarity is greater than or equal to a similarity threshold value and the distance is less than or equal to a distance threshold value, and determining that the product is false if the similarity is less than the similarity threshold value or the distance is greater than the distance threshold value.
More specifically, in an embodiment, the method further includes:
and the sampling unit is used for acquiring sample pictures under different illumination, different spatial resolutions and different rotation angles according to a sampling strategy to be used as a target data set, and taking one half of the target data set as a test set and the other half as a training set.
And the training unit is used for training the designed initial model through a cross validation mode based on the training set to obtain a trained model.
And the test unit is used for testing the trained model based on the test set, taking the trained model as the trained model if the test result reaches a preset expectation value, and returning to the sampling unit for retraining if the test result does not reach the preset expectation value.
The deep-learning-based product authentication device provided by this embodiment identifies the first texture feature of a verification texture picture of the product taken by the user through the first model, decodes the texture two-dimensional code attached to the product to obtain the second texture feature, compares the two features through the second model, and judges the authenticity of the product from the comparison result. The first and second models do not simply retrieve or classify the product against existing classes; they extract texture features and judge similarity by comparison, thereby identifying the authenticity of the product.
Example four
Fig. 7 is a schematic structural diagram of a server according to a fourth embodiment of the present invention, as shown in fig. 7, the server includes a processor 70, a memory 71, an input device 72, and an output device 73; the number of the processors 70 in the server may be one or more, and one processor 70 is taken as an example in the figure; the processor 70, the memory 71, the input device 72 and the output device 73 in the server may be connected by a bus or other means, and the bus connection is exemplified in fig. 7.
The memory 71 serves as a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the deep learning-based product authentication method in the embodiment of the present invention (for example, the picture acquiring module 310, the first texture feature extracting module 320, the authentication module 330, and the like in the deep learning-based product authentication method). The processor 70 executes various functional applications and data processing of the terminal/server by executing software programs, instructions and modules stored in the memory 71, that is, implements the deep learning-based product authentication method described above.
The memory 71 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 71 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 71 may further include memory remotely located from the processor 70, which may be connected to the terminal/server through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 72 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal/server. The output device 73 may include a display device such as a display screen.
The server can execute the product authentication method based on deep learning provided by the first embodiment or the second embodiment of the invention, and has functional modules and beneficial effects corresponding to the execution method.
EXAMPLE five
An embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a product authentication method based on deep learning according to any embodiment of the present invention, where the method may include:
obtaining a verification texture picture of a product;
identifying a first texture feature of the verification texture picture based on a first model;
and comparing the first texture feature with a second texture feature based on a second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
The computer-readable storage media of embodiments of the invention may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a storage medium may be transmitted over any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that the present invention is not limited to the particular embodiments disclosed, but is capable of numerous rearrangements, modifications, and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in more detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. A product authentication method based on deep learning is characterized by comprising the following steps:
obtaining a verification texture picture of a product;
identifying a first texture feature of the verification texture picture based on a first model;
and comparing the first texture feature with a second texture feature based on a second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
2. The method for authenticating a product based on deep learning of claim 1, wherein before obtaining the verification texture picture of the product, the method further comprises:
collecting a sample texture picture of a product;
identifying a second texture feature of the sample texture picture based on the first model;
and generating a texture feature two-dimensional code based on the second texture feature, and attaching the texture feature two-dimensional code to a product.
3. The deep learning based product authentication method as claimed in claim 2, wherein the identifying the second texture feature of the sample texture picture based on the first model comprises:
extracting depth texture features based on the sample texture picture;
and performing post-processing on the depth texture features to obtain low-dimensional, compact second texture features.
4. The product authentication method based on deep learning of claim 1, wherein the first model is used for extracting common features and distinguishing features of texture pictures, the common features representing texture features shared by different texture pictures, and the distinguishing features representing texture features specific to each texture picture.
5. The method as claimed in claim 1, wherein the determining whether the product is authentic based on the second model comparing the first texture feature and the second texture feature comprises:
mapping the first texture feature and the second texture feature to a distance space, and determining the similarity and the distance between the first texture feature and the second texture feature;
if the similarity is greater than or equal to the similarity threshold and the distance is less than or equal to the distance threshold, the product is genuine; and if the similarity is less than the similarity threshold or the distance is greater than the distance threshold, the product is counterfeit.
6. The product authentication method based on deep learning of claim 1, further comprising:
a. acquiring sample pictures under different illumination, different spatial resolutions and different rotation angles according to a sampling strategy to be used as a target data set, and taking one half of the target data set as a test set and the other half as a training set;
b. training the designed initial model in a cross validation mode based on the training set to obtain a trained model;
c. and testing the trained model based on the test set; if the test result reaches the preset expectation, taking the tested model as the final trained model, and if not, returning to step a for retraining.
7. A product authentication device based on deep learning, comprising:
the image acquisition module is used for acquiring a verification texture image of the product;
the first texture feature extraction module is used for identifying first texture features of the verification texture picture based on a first model;
and the identification module is used for comparing the first texture feature with the second texture feature based on the second model to determine the authenticity of the product, wherein the second texture feature is obtained by decoding a texture-feature two-dimensional code built into the product.
8. The deep learning based product authentication device as claimed in claim 7, further comprising:
the sample texture picture acquisition module is used for acquiring a sample texture picture of a product;
the second texture feature extraction module is used for identifying second texture features of the sample texture picture based on the first model;
and the two-dimensional code generation module is used for generating a texture-feature two-dimensional code based on the second texture feature and attaching the texture-feature two-dimensional code to the product.
9. A server, comprising a memory and a processor, wherein the memory stores a computer program operable on the processor, and the processor executes the computer program to implement the deep learning based product authentication method according to any one of claims 1 to 6.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, implements the deep learning based product authentication method according to any one of claims 1 to 6.
Application CN202010673651.2A (priority date 2020-07-14, filing date 2020-07-14) — Product authentication method and device based on deep learning, server and storage medium; published as CN112036902A (en); status: Pending

Priority Applications (1)

CN202010673651.2A (priority date 2020-07-14, filing date 2020-07-14): Product authentication method and device based on deep learning, server and storage medium


Publications (1)

CN112036902A, published 2020-12-04

Family ID: 73579450

Family Applications (1)

CN202010673651.2A (pending): Product authentication method and device based on deep learning, server and storage medium — priority date 2020-07-14, filing date 2020-07-14

Country Status (1)

CN: CN112036902A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960849A (en) * 2018-05-30 2018-12-07 于东升 For the method for anti-counterfeit and device of paper products, anti-fake traceability system
CN110209863A (en) * 2019-06-03 2019-09-06 上海蜜度信息技术有限公司 Method and apparatus for similar pictures retrieval
CN111368342A (en) * 2020-03-13 2020-07-03 众安信息技术服务有限公司 Image tampering identification model training method, image tampering identification method and device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204478A (en) * 2021-04-06 2021-08-03 北京百度网讯科技有限公司 Method, device and equipment for running test unit and storage medium
CN113326400A (en) * 2021-06-29 2021-08-31 合肥高维数据技术有限公司 Model evaluation method and system based on depth counterfeit video detection
CN113326400B (en) * 2021-06-29 2024-01-12 合肥高维数据技术有限公司 Evaluation method and system of model based on depth fake video detection


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination