EP4328879A1 - Systems and methods for predicting the authentication detectability of counterfeit articles - Google Patents
Systems and methods for predicting the authentication detectability of counterfeit articles
- Publication number
- EP4328879A1 (application EP22192508.4A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- identified
- genuine
- genuineness
- digital signal
- machine learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
  - G07—CHECKING-DEVICES
    - G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
      - G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
        - G07D7/06—Testing using wave or particle radiation
          - G07D7/12—Visible light, infrared or ultraviolet radiation
        - G07D7/20—Testing patterns thereon
          - G07D7/2008—Testing patterns thereon using pre-processing, e.g. de-blurring, averaging, normalisation or rotation
          - G07D7/2016—Testing patterns thereon using feature extraction, e.g. segmentation, edge detection or Hough-transformation
          - G07D7/2075—Setting acceptance levels or parameters
          - G07D7/2083—Learning
Definitions
- the present invention relates to machine learning systems, methods and processes in the field of anti-counterfeiting and authentication of manufactured items, products, security documents and banknotes.
- Anticounterfeiting technology guide European Union Intellectual Property Office, 2021.
- these technologies add an element onto the item that is difficult to duplicate, or copy, or they characterize a specific physical or chemical feature of the item, similar to a fingerprint of the item.
- the challenge may be either technical, for instance on the reproduction of holograms, or require products which are not readily available on the market, such as rare isotopes or special inks.
- anti-counterfeiting features may be classified as overt technologies (visible, or more generally perceptible by the end user with his own body senses, without the need for specific detection equipment) or covert technologies (invisible/imperceptible, but detectable with a dedicated equipment).
- covert technologies include:
- the latter method employs training sets comprising both genuine objects and fake objects, in combination with data augmentation to facilitate the training.
- the latter method requires the brand owner to collect multiple fake samples which are representative enough of the ability of counterfeiters to reproduce the original products. This creates an additional burden to organize and maintain in long-term anti-counterfeiting operations. There is still a risk of wrongly classifying genuine objects as fake ones (false negative classification) or, more generally, of too many doubtful cases.
- a digital authentication detection method applied to a perfect digital signal representation of a genuine object will always make it possible to detect it as genuine.
- the genuineness would be "100% detectable” or “always detectable” by applying the digital authentication detection method on a perfect digital signal representation of a genuine object.
- the detectability of the object genuineness depends on the quality of the digital signal representation of the object. This quality depends itself on multiple, variable digital signal capture factors. For instance, in the case of an imaging capture (but not limited to):
- the present invention is based on the finding that the use of a predictive machine learning model to predict a detectability value of the genuineness of an object, in combination with a genuineness detection algorithm (also referred to herein as an authentication algorithm), makes it possible to identify genuine and counterfeited objects. Further, the present invention is based on the development of a specific training protocol for obtaining a predictive machine learning model, wherein the training data consists of sets of digital signal representations of genuine objects with their associated detectability values of the genuineness of an object.
- the predictive machine learning model makes it possible to predict a detectability value of the genuineness of an object to be identified, and this predicted detectability value makes it possible to determine whether the genuineness detection algorithm can or cannot detect the object as a genuine object.
- if it is predicted that the object to be identified can be detected as a genuine object, then, if the genuineness detection algorithm does not identify this object as genuine, it can be determined that the object is counterfeited.
- a computer-implemented method for predicting a detectability value of the genuineness of an object comprising:
- a computer-implemented method for identifying if an object is a genuine or a counterfeited object comprising:
- object or "item" or "object item" (used interchangeably) refer to something material that may be perceived by the senses. It can be a manufactured object or an artisanal object. Examples of objects include, but are not limited to a security document, a precious metal, a banknote, a watch, a leather product such as a bag, a part of an object such as a component, a label, a package, a printed surface, an embossed surface, a metallized surface, and the like. The objects that have the same characteristics belong to the same / one "type of objects" or "class of objects".
- genuine object or “real object” or “authentic object” (used interchangeably) refer to an object that is exactly what it appears to be, and is not false or an imitation.
- a genuine object is an original, a real, an authentic, not fake or counterfeit.
- the genuine object may have added or integrated security features that can be detected by an authentication algorithm.
- counterfeit object or “fake object” or “not authentic object” (used interchangeably) refer to an object or item that is made to be an imitation of something genuine and meant to be taken as genuine, and is false or an imitation. In other words, a counterfeit object is a forgery, copy, imitation, or fake.
- the "genuine/counterfeit object” can be characterised as having " a detectability property ", which allows to identify if an object is detectable as genuine/counterfeit or not detectable as genuine/counterfeit.
- the detectability property can be identified or recognised based as a detectability value of an object.
- a detectability value of the genuineness of an object or "a detectability value of a genuineness” is used.
- the “detectability value of the genuineness of an object” or “a detectability value” refers to a value identified from a digital signal representation of this object.
- examples of a detectability value of the genuineness of an object include, but are not limited to, a binary label such as detectable or not detectable, a ternary label such as detectable or not detectable or unknown, or a scalar value such as a signal to noise ratio (SNR) measurement, a difference measurement, a distance metric, or the like known in the art.
- the label may be 0 for non-detectable and 1 for detectable.
- the label may be 1 for non-detectable and 0 for detectable.
- the detectability value may be a scalar value.
- the detectability value may be a signal processing metric.
- a signal processing metric may be, for instance, the signal-to-noise ratio (SNR) of a cross-correlation of the captured digital signal representation with a template digital signal representation reference for the object to be authenticated.
- the detectability metric may also be a simple distance measurement (for instance, a difference) between one extracted feature from the captured digital signal representation and the matching reference feature from a digital signal representation template.
- the detectability metric may be a composite distance measurement between a set of features from the captured digital signal representation and the matching set of reference features from a digital signal representation template. Examples of composite distance measurement include the L0, L1, L2 norms and other ways of measuring distances between sets of values in statistical modelling.
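- As an illustration of the detectability metrics described above, the following minimal sketch computes an SNR from the cross-correlation of a captured patch with a reference template, and a composite L0/L1/L2 distance between feature sets. It is only a sketch in NumPy: the function names, the circular (FFT-based) correlation and the noise-floor estimate are assumptions, not the patent's implementation.

```python
import numpy as np

def cross_correlation_snr(capture: np.ndarray, template: np.ndarray) -> float:
    """SNR of the cross-correlation between a captured 2D patch and a same-size template.

    The correlation peak is compared to the standard deviation of the rest of the
    correlation surface, one common way to quantify how strongly the reference
    signal is present in the capture.
    """
    c = capture.astype(np.float64) - capture.mean()   # zero-mean to ignore brightness offsets
    t = template.astype(np.float64) - template.mean()
    # Circular cross-correlation computed in the frequency domain.
    corr = np.real(np.fft.ifft2(np.fft.fft2(c) * np.conj(np.fft.fft2(t))))
    noise = np.delete(corr.ravel(), corr.argmax())    # exclude the peak from the noise floor
    return float(corr.max() / (noise.std() + 1e-12))

def composite_distance(features: np.ndarray, reference: np.ndarray, norm: int = 2) -> float:
    """L0 / L1 / L2 distance between extracted features and the matching reference features."""
    diff = (features - reference).ravel()
    if norm == 0:
        return float(np.count_nonzero(diff))          # L0: number of differing entries
    return float(np.linalg.norm(diff, ord=norm))      # L1 or L2 norm of the differences
```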
- a digital signal representation or "a digital representation" of an object refers to a representation of an object in the form of digital data.
- Examples of digital signal representations of an object include but are not limited to a binary image, a digital sound record, a chemical composition, a spectral representation of a wave acquired by spectrometer hardware (electromagnetic, as in the case of images, or mechanical/pressure, as in the case of sound) or the like, or a combination of the above in the case of multi-modal capture.
- the digital signal representation of an object may be obtained from acquiring a signal captured with a sensor.
- a digital signal representation of an object is a binary image.
- prediction refers to inferring, with a statistical analysis model or a predictive machine learning model, a detectability value from a digital signal representation of an object. Prediction may be defined as a means of outputting a value of potentially multiple dimensions, from a potentially multi-dimensional input value never seen before, by using a model.
- the model can come from a set of acquired observations or it can be an analytical/a priori model defined from a set of known relationships.
- a "mach learning model” refers to a data model or a data classifier which has been trained using a supervised, semi-supervised or unsupervised learning technique as known in the data science art, as opposed to an explicit statistical model.
- the data input may be represented as a 1D signal (vector), a 2D signal (matrix), or more generally a multidimensional array signal (for instance a tensor, or a RGB color image represented as 3 × 2D signals of its Red, Green and Blue color decomposition planes - 3 matrices), and/or a combination thereof.
- a multidimensional array is mathematically defined by a data structure arranged along at least two dimensions, each dimension recording more than 1 value.
- the data input is further processed through a series of data processing layers to implicitly capture the hidden data structures, the data signatures and underlying patterns. Thanks to the use of multiple data processing layers, deep learning facilitates the generalization of automated data processing to a diversity of complex pattern detection and data analysis tasks.
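- As a small illustration of the input shapes mentioned above, the sketch below builds a hypothetical RGB capture and views it as three 2D colour planes, a flattened 1D vector, and a normalised tensor; the image size is arbitrary.

```python
import numpy as np

# A hypothetical 640x480 RGB capture: three 2D colour planes stacked into one 3D array.
rgb_image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

red, green, blue = (rgb_image[..., k] for k in range(3))  # 3 x 2D signals (matrices)
as_vector = rgb_image.reshape(-1)                         # flattened 1D signal (vector)
as_tensor = rgb_image.astype(np.float32) / 255.0          # normalised multidimensional array

print(red.shape, as_vector.shape, as_tensor.shape)        # (480, 640) (921600,) (480, 640, 3)
```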
- the machine learning model may be trained within a supervised, semi-supervised or unsupervised learning framework. Within a supervised learning framework, a model learns a function to map an output result from an input data set, based on example pairs of inputs and matching outputs.
- Examples of machine learning models used for supervised learning include Support Vector Machines (SVM), regression analysis, linear regression, logistic regression, naive Bayes, linear discriminant analysis, decision trees, k-nearest neighbor algorithms, random forest, artificial neural networks (ANN) such as convolutional neural networks (CNN), recurrent neural networks (RNN), fully-connected neural networks, long short-term memory (LSTM) models, and others; and/or a combination thereof.
- a model trained within an unsupervised learning framework infers a function that identifies the hidden structure of a data set, without requiring prior knowledge on the data.
- examples of unsupervised machine learning models include clustering such as k-means clustering, mixture model clustering, hierarchical clustering; anomaly detection methods; principal component analysis (PCA), independent component analysis (ICA), T-distributed Stochastic Neighbor Embedding (t-SNE); generative models; and/or unsupervised neural networks; autoencoders; and/or a combination thereof.
- Semi-supervised learning (SSL) is a machine learning framework within which one can train a model using both labeled and unlabeled data. Data augmentation methods can be optionally used to produce artificial data samples out of a scarce set of real data samples and increase the number and diversity of data used for model training. Unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy compared to other frameworks. This approach is particularly interesting when only part of the available data is labeled.
- a “convolutional neural network” or “ CNN” refers to a machine learning model which uses multiple data processing layers, known as convolutional layers, to represent the input data in a way which is best suited to solve a classification or regression task.
- weight parameters are optimized for each CNN layer using optimization algorithms known in the art such as the backpropagation algorithm to perform a stochastic gradient descent.
- the resulting trained CNN may then process the input data very efficiently, for instance to classify it into the right data output labels with as few false positives and false negatives as possible in the case of a learnt classification task.
- Convolutional neural networks may also be combined with recurrent neural networks to produce a deep learning classifier.
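- For illustration only, the sketch below shows a small convolutional network of this kind trained by backpropagation with stochastic gradient descent, here in PyTorch with a single scalar output (e.g. a predicted SNR); the layer sizes, framework and loss are assumptions rather than the patent's model.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN mapping a 1-channel 64x64 patch to a scalar detectability score."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = SmallCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # stochastic gradient descent
loss_fn = nn.MSELoss()

# One illustrative training step on a random batch of 64x64 patches and scalar targets.
x, y = torch.randn(8, 1, 64, 64), torch.randn(8, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()                                           # backpropagation of the error
optimizer.step()                                          # weight update
```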
- the term "genuineness detection algorithm” in the context of this invention refers to an algorithm that produces a detectability value of the genuineness of a type of object from a digital signal representation of this object and/or also returns as an output a genuineness decision.
- the genuineness decision can for example be decided from a pre-defined threshold value of the detectability value, or an aggregate of multiple detectability values computed from different subsampling of the digital representation, like in the case of multiple crops on the same acquired input image.
- the genuineness detection algorithm may also be referred to as an "authentication algorithm”.
- the "genuineness detection algorithm” or “authentication algorithm” may be any authentication algorithm which takes as input a digital signal representation of the object to be identified/authenticated.
- prior art authentication algorithms usually return as output either a genuineness decision as a binary label "authenticated genuine”/ “non authenticated/detected genuine” or a ternary label "authenticated genuine”/ “non authenticated genuine”/ "unknown”.
- a label of "non authenticated/detected genuine” is not necessarily equal to a definitive label “fake” (dashed line in figure 1 ).
- a label of "not detected” is not equal to a definitive label “unknown”.
- Prior art authentication algorithms may provide the label 'fake' only if acquisition conditions are fully controlled, as is the case for a flatbed scanner, so that a one-shot acquisition is certain to provide a detectable digital representation of the object, or if they were trained with the use of fake items or synthetically generated representations of fake items.
- Prior art authentication algorithms may also comprise internal signal processing algorithms to calculate a measurement of how the digital signal representation differs from a reference template digital signal representation.
- when the detectability value of the genuineness of an object is available, it enables the detectability of the genuineness of the object to be quantified as a scalar value (for instance ranging from 0, not detectable at all as a genuine object, to 100, perfectly detectable).
- the resulting scalar value may then be further used by the authentication algorithm to classify the genuineness of the object to be authenticated, for instance using a predefined threshold to discriminate between measurement values corresponding respectively to "non authenticated genuine" (lower range below the threshold) and "authenticated genuine” (higher range above the threshold) decisions.
- FIG 1 shows an example of a processing workflow of a prior art authentication algorithm (or genuineness detection algorithm) (100) in line with the authentication methods described for instance in WO0225599 , WO04028140 , cloud of micro-holes WO06087351 , or US10332247 .
- Such an authentication algorithm (100) takes as input one or more digital signal representations of an object as may be captured with a sensor such as for instance an image sensor.
- the genuineness detection algorithm (100) may optionally pre-process the captured digital signal representations, for instance by using geometrical transforms (e.g. scaling, rotating, translating, downsampling, upsampling, cropping, etc.), frequency domain transforms (e.g. Fourier transform, Discrete Cosine Transform DCT, Wavelet transforms, etc.), filters (for instance, low-pass filters, high-pass filters, equalizers, etc.) and the like.
- the genuineness detection algorithm (100) may output a measured scalar value of the genuineness (for instance, a Signal to Noise Ratio SNR scalar value out of the cross-correlation calculation).
- the genuineness detection algorithm (100) further comprises a decision module to determine from the latter value (for instance by comparing it to a pre-determined threshold) if the object can be detected as a genuine one with high confidence, or if it cannot be detected.
- a non-detection event occurs either because the object is actually a fake (but one cannot identify it as such), or because the digital signal representation of the object does not enable the genuineness detection algorithm (100) to detect the genuineness of the object with high enough confidence.
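- The following sketch ties together the workflow of figure 1 as described above: pre-processing, a cross-correlation SNR measurement against a reference template, and a threshold-based decision module. The threshold value, the normalisation and the label handling are illustrative assumptions, not the patented detector.

```python
import numpy as np

def genuineness_detection(capture: np.ndarray, template: np.ndarray,
                          snr_threshold: float = 6.0) -> tuple[str, float]:
    """Toy version of the figure-1 pipeline: pre-process, measure an SNR, decide by threshold."""
    # Pre-processing: crop the capture to the template size and normalise both signals.
    h, w = template.shape
    patch = capture[:h, :w].astype(np.float64)
    patch = (patch - patch.mean()) / (patch.std() + 1e-12)
    ref = (template.astype(np.float64) - template.mean()) / (template.std() + 1e-12)

    # Measurement: SNR of the circular cross-correlation with the reference template.
    corr = np.real(np.fft.ifft2(np.fft.fft2(patch) * np.conj(np.fft.fft2(ref))))
    noise = np.delete(corr.ravel(), corr.argmax())
    snr = float(corr.max() / (noise.std() + 1e-12))

    # Decision module: above threshold -> detected genuine; otherwise "not detected",
    # which, as noted above, does not by itself mean the object is fake.
    return ("authenticated genuine" if snr >= snr_threshold else "not detected"), snr
```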
- a predictive machine learning model or "predictive model" or "a predictive machine learning model to predict a detectability value of the genuineness of an object" in the context of this invention refers to a machine learning model that can be trained to predict a detectability value of the genuineness of an object from at least one digital signal representation of an object.
- a predictive machine learning model is trained according to the methods of the invention.
- pre-processing refers to a set of digital operations leading to the transformation of raw data, or of a signal captured with a sensor, or of a digital signal representation, into a digital signal representation that can be used for example by the predictive machine learning algorithm or the genuineness detection algorithm (authentication algorithm).
- Examples of the known method of pre-processing include but are not limited to geometrical transforms (e.g., scaling, rotating, translating, downsampling, upsampling, cropping, etc.), frequency domain transforms or other domain transform (e.g. Fourier transform, Discrete Cosine Transform DCT, Wavelet transforms, etc), filters (for instance, low-pass filters, high-pass filters, equalizers, etc.) and the like.
- Background suppression algorithms or image registration algorithms can be applied as pre-processing steps.
- Fully convolutional neural networks, UNet networks and spatial transformer networks can even be trained to optimize correlation between reference images. High dynamic range pre-processing, combining multiple digital representations, can also be applied.
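- A minimal pre-processing sketch along the lines listed above (central crop, downsampling and a frequency-domain high-pass filter) is shown below; the crop size, downsampling factor and filter radius are arbitrary illustrative values.

```python
import numpy as np

def preprocess(signal: np.ndarray, crop: int = 256, step: int = 2) -> np.ndarray:
    """Chain a few typical pre-processing operations: crop, downsample, high-pass filter."""
    # Geometrical transforms: central crop followed by naive downsampling.
    h, w = signal.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    patch = signal[top:top + crop, left:left + crop].astype(np.float64)
    patch = patch[::step, ::step]

    # Frequency-domain transform: zero out the lowest frequencies (simple high-pass filter).
    spectrum = np.fft.fftshift(np.fft.fft2(patch))
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    spectrum[cy - 4:cy + 4, cx - 4:cx + 4] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
```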
- a training with the at least one digital signal representation of counterfeit objects might not be of interest since there is no definitive number of such objects and new counterfeit objects can be produced, resulting in algorithms that become obsolete and require continuous updates. Therefore, and according to the methods of the invention, performing the training on genuine objects allows for the reliable further classification of objects. In other words, the methods of the invention are based on positive detection.
- the training may be performed with the use of genuine objects in conjunction with a small amount of genuine objects without embedded security/authentication features, which therefore act as potential representations of counterfeited objects.
- These representations are positioned in the exact same points of view as a similar genuine object, and are mapped to the detectability value of genuineness observed for the corresponding genuine object. This may have the effect of increased learning efficiency.
- the computer-implemented methods for training a predictive machine learning model make it possible to obtain a predictive machine learning model to predict a detectability value of the genuineness of an object.
- a computer-implemented method for training a predictive machine learning model includes a step of obtaining a training data set comprising a set of digital signal representations of each of the used genuine objects, wherein each digital signal representation has the associated detectability value of the genuineness of an object.
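- The sketch below shows one way such a training data set could be assembled: every digital signal representation of every genuine object is paired with the detectability value (e.g. an SNR) returned by the selected genuineness detection algorithm. The function and argument names are placeholders for whatever detector and acquisition pipeline is actually used.

```python
import numpy as np

def build_training_set(captures_per_object, genuineness_detection_algorithm):
    """Pair each capture of each genuine object with its observed detectability value.

    `captures_per_object` is a list of lists of digital signal representations,
    one inner list per genuine object of the selected type; the detector callable
    returns a scalar detectability value for a single representation.
    """
    inputs, targets = [], []
    for object_captures in captures_per_object:
        for representation in object_captures:
            detectability = genuineness_detection_algorithm(representation)
            inputs.append(representation)
            targets.append(detectability)
    # Assumes all representations share the same shape (e.g. identically cropped images).
    return np.stack(inputs), np.asarray(targets, dtype=np.float32)
```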
- a computer-implemented method for training a predictive machine learning model to predict a detectability value of the genuineness of an object comprising:
- a predictive machine learning model that is trained according to the methods of the invention is for use in a computer-implemented method to identify if an object is a genuine or counterfeited object.
- a computer-implemented method for training a predictive machine learning model wherein the selected genuineness detection algorithm and a predictive machine learning model to be trained are chosen to be able to process the digital signal representations of a specific type of genuine object.
- the selected genuineness detection algorithm can be any suitable algorithm known in the art, such as selected from detectors of surface fingerprints and detectors of product markings. In particular an AlpVision fingerprint detector, an AlpVision cryptoglyph detector, a taggant detector, a Scantrust secure graphic detector, a SICPA security ink detector and the like.
- a method for training a predictive machine learning model uses a genuineness detection algorithm (e.g., obtained in step a)) that produces the detectability value of the genuineness of an object, wherein the value may be a scalar value, a label, a hash, a vector, a multidimensional vector/a tensor, an image, a matrix, a distribution curve and the like.
- a method for training a predictive machine learning model uses a genuineness detection algorithm (e.g., obtained in step a)) that produces the detectability value of the genuineness of an object, wherein the value is a scalar value, such as selected from a signal to noise ratio (SNR) measurement, a difference measurement, and a distance metric.
- the detectability value of the genuineness of an object is a scalar value, preferably SNR, which quantifies the strength of the security feature signal.
- a method for training a predictive machine learning model uses a genuineness detection algorithm (e.g., obtained in step a)) that produces the detectability value of the genuineness of an object, wherein the value is a label, such as selected from a binary label (such as detectable or not detectable), and a ternary label (such as detectable or not detectable or unknown), or any categorization of a continuous variable, such as one-hot encoding of the value for each integer step.
- a method for training a predictive machine learning model according to the invention is based on one selected type of object (e.g., in step b). Therefore, different types of objects may require different predictive machine learning models that are trained separately, considering differences in the properties of the types of objects.
- a method for training a predictive machine learning model according to the invention is based on 1, at least 1, at least 2, at least 100, at least 200 or at least 500 genuine objects (e.g., in step b), preferably at least 100.
- the effect of using more than one genuine object is an increased size of the training set, which in turn allows for increased predictive power of the obtained predictive machine learning model of the invention. All the objects used in the training methods belong to one class or type of objects.
- a method for training a predictive machine learning model uses a set of digital signal representations of each of the genuine objects (e.g., in step c)), wherein the set of digital signal representations of each of the genuine objects may consist of or comprise one, at least 1, at least 100, at least 1000, at least 10000, at least 20000 or at least 40000 digital signal representations, preferably at least 100, more preferably at least 20000 digital signal representations.
- the effect of using an increased number of digital signal representations is an increased size of the training set, which in turn allows for increased predictive power of the obtained predictive machine learning model of the invention.
- each digital signal representation of each of the genuine objects is obtained from acquiring a signal captured with a sensor, such as selected from an image sensor, a digital olfactory sensor, a digital chemical sensor, a microphone, a microtext reader, a barcode reader, a QR-code reader, a laser-based sensor, a code reader, a RFID reader, an infrared sensor, a UV sensor, a digital camera, and a smartphone camera.
- a sensor is a camera.
- a signal captured with a sensor may be optionally further subjected to a step of signal pre-processing with a signal pre-processing algorithm, so as to obtain a digital signal representation.
- obtaining a set of digital signal representations of each of the genuine objects further comprises transforming, with a signal pre-processing method, the acquired digital signal representation into a digital signal representation suitable to be inputted to the genuineness detection algorithm.
- a pre-processing of a signal captured with a sensor may include a step of pre-processing by a genuineness detection algorithm (authentication algorithm) according to known methods with the use of known systems as described herein.
- pre-processing methods include but are not limited to geometrical transforms such as scaling, rotating, translating, down-sampling, up-sampling, cropping, and the like; frequency domain transforms such as Fourier transform, Discrete Cosine Transform DCT, and the like; and filters such as low-pass filters, high-pass filters, equalizers, and the like.
- a computer-implemented method for training a predictive machine learning model may use two sets of digital signal representations of each of the genuine objects, wherein one set is an input to the genuineness detection algorithm, and another set is an input to the predictive machine learning model. These different sets may be obtained based on different pre-processing steps.
- a computer-implemented method for training a predictive machine learning model according to the invention may use an AlpVision Cryptoglyph detector as the selected genuineness detection algorithm, wherein the signal captured with a sensor is cropped to a larger field of view than the one used for the Cryptoglyph detector and downsampled so as to obtain a digital signal representation suitable for further processing.
- a computer-implemented method for training a predictive machine learning model according to the invention may use a surface fingerprint detector as the selected genuineness detection algorithm, wherein the surface fingerprint detector does not perform downsampling of a signal captured with a sensor and obtains a digital signal representation suitable for further processing. Since a microstructure with a comparable distribution is also present on counterfeit objects, this will not prevent the model from identifying a digital representation of a counterfeit as detectable, while improving the rejection of unrelated objects.
- each digital signal representation of each of the genuine objects is acquired with the sensor at a different sensor position and/or orientation.
- the different sensor position and/or orientation is in relation to the genuine object.
- the different sensor position and/or orientation is selected from a range of possible sensor positions and/or orientations.
- the different sensor positions and/or orientations are pre-determined.
- the different sensor positions and/or orientations are the same for each genuine object.
- the different sensor position and/or orientation is controlled (or provided) by a robot arm.
- in an embodiment where two sensors are used, at least one or at least two robot arms may be used. Examples of the use of a robot arm can be seen in figures 3 and 4.
- the different sensor position and/or orientation is controlled (provided) by a human operator that positions the sensor. The human operator may position the sensor manually or with a suitable device.
- each digital signal representation of each of the genuine objects is acquired at a different genuine object position and/or orientation.
- the different genuine object position and/or orientation is in relation to the sensor.
- the different genuine object position and/or orientation is selected from a range of possible genuine object positions and/or orientations.
- the different genuine object positions and/or orientations are pre-determined.
- the different genuine object positions and/or orientations are the same for each genuine object.
- the different genuine object position and/or orientation is controlled (provided) by a robot arm that positions the genuine object. Examples of the use of a robot arm can be seen in figures 3 and 4.
- the different genuine object position and/or orientation is controlled (provided) by a conveyor that positions the genuine object.
- An example of the use of a conveyor is shown in figure 4b).
- the different genuine object position and/or orientation is controlled (provided) by a human operator that positions the genuine object. The human operator may position the genuine object manually or with a suitable device.
- each digital signal representation of each of the genuine objects is acquired under at least one predetermined physical environment parameter value, such that the value of the physical environment parameter changes the digital signal representation of each genuine object at the predetermined object position and orientation and/or at the predetermined sensor position and orientation.
- the at least one predetermined physical environment parameter value is selected from a range of possible values.
- various system setups may be used to control, with an object positioner, at least one variable position and/or orientation parameter for the training item at capture time.
- a mechanical setup with an automation control may be used, such as a robot arm or a conveyor with their software controllers, to precisely manipulate the training item and control the training item variable position and/or orientation parameter.
- the training item may be placed in a fixed position and orientation and at least one other variable (such as the sensor position) in the physical environment around the training item may be varied.
- various system setups may be used to control, with a sensor positioner, at least one variable position and/or orientation parameter for the sensor at capture time.
- a mechanical setup with an automation control may be used, such as a robot arm.
- a set of several sensors may be placed at different fixed positions and orientations around the training item, and each sensor may be sequentially controlled to take a different capture of the training item, each capture corresponding to a different position and orientation of the sensor relative to the fixed training item.
- at least one physical environment parameter may be automatically setup by a physical environment parameter controller.
- a dedicated smartphone app may be developed which controls the smartphone lighting towards the object, for instance using the smartphone flashlight in torch mode.
- the physical environment around the items to be captured for training may be adapted with at least one physical environment control device such as a lamp, a speaker, and/or a mechanical actuator.
- Examples of a physical environment lamp variable characteristics include, but are not limited to: color temperature, polarization, emission spectrum, intensity, beam shape, impulsion shape, lamp orientation, lamp distance towards the object, etc.
- Examples of an actuator variable characteristics include, but are not limited to: the volume of water or air projected towards the object; the force applied to brushing it, knocking on it, shearing it, bending it; the temperature of heating or cooling it; the distance of approaching a magnet towards it; the variable placement of movable light reflectors, blockers, diffusers or filters, such as for example Wratten color filters, or interference filters; etc.
- a series of digital representations of a training item may be captured with an imaging device as a series of image acquisitions over time under an orientable neon lamp with variable intensity, then under an orientable LED lamp with variable intensity, then under direct sunlight at different times of the day.
- the set of digital representations will inherently represent several light sources with different spectra and variable intensities and variable positions as input to the machine learning classifier production engine.
- a series of digital representations of a training item may be captured with an Apple iPhone (iPhone 7 or more recent), by independently controlling the two torch LEDs, each of which has a different, adjustable, variable color temperature.
- the resulting set of digital representations will inherently represent variable spectral reflectance, transmittance and radiance environments as input to the machine learning classifier production engine.
- each of the three different variables in the physical environment which may impact the digital representation signal of a physical item as captured with at least one sensor may be varied step-by-step in different possible ranges to produce a diversity of digital representations of each input item, so that the machine learning can better anticipate the diversity of end user physical environments that its produced classifier solution will encounter at the time of detection:
- the trained predictive machine learning model may be a machine learning classifier. In an alternative embodiment, the trained predictive machine learning model may be a machine learning regressor.
- the trained predictive machine learning model may be an artificial neural network (ANN) such as a deep learning model or a convolutional neural network (CNN), or any equivalent model, preferably a CNN model.
- a pre-trained convolutional neural network (CNN) such as AlexNet, VGG, GoogleNet, UNet, Vnet, ResNet or others may be used to further train the predictive machine learning model, but other embodiments are also possible.
- the trained predictive machine learning model is a supervised machine learning algorithm, wherein the training set comprises or consists of the digital signal representations of each of the genuine objects, wherein each of the digital signal representations has the associated detectability value of the genuineness of an object.
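- As a hedged illustration of re-using a pre-trained CNN for this supervised training, the sketch below replaces the classification head of a torchvision ResNet-18 (torchvision >= 0.13 API) with a single regression output for the predicted detectability value; the choice of backbone, optimizer, loss and learning rate are assumptions, not the patent's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone with its classification head replaced by one regression output
# (the predicted detectability value, e.g. an SNR).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def training_step(images: torch.Tensor, detectability: torch.Tensor) -> float:
    """One supervised step on (digital representation, detectability value) pairs."""
    optimizer.zero_grad()
    prediction = backbone(images)            # images: (N, 3, H, W), detectability: (N, 1)
    loss = loss_fn(prediction, detectability)
    loss.backward()
    optimizer.step()
    return loss.item()
```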
- a computer-implemented method for predicting a detectability value of the genuineness of an object comprising:
- a computer-implemented method for predicting a detectability value of the genuineness of an object with the use of a predictive machine learning model wherein the method may comprise an algorithm referred herein as detectability prediction algorithm (200) ( figure 2 ).
- a computer-implemented method for predicting a detectability value of the genuineness of an object uses more than one digital signal representation of the object to be detected, wherein these digital signal representations are obtained and processed sequentially or in parallel. This enhances the confidence of the prediction. For example, if the predictions are considered to be independent for different digital signal representations, then if N predictions above a given detectable prediction threshold are needed to consider the object as detectable, the false detectable prediction rate is effectively divided by N.
- a computer-implemented method for predicting a detectability value of the genuineness of an object based on the at least two, at least 10, at least 25, at least 50, at least 100 or at least 250 digital signal representations of such an object to be detected, preferably at least 50.
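- A minimal sketch of the decision rule described above, requiring several per-representation predictions above the detectable-prediction threshold before the object is considered detectable, is given below; the threshold and the required count are illustrative values.

```python
def object_is_detectable(predicted_values, threshold: float = 6.0, n_required: int = 3) -> bool:
    """True only if at least `n_required` predictions exceed the detectable-prediction threshold."""
    above = sum(1 for value in predicted_values if value >= threshold)
    return above >= n_required

# Example: predictions from five independently captured representations of the same object.
print(object_is_detectable([7.2, 5.1, 8.0, 6.4, 3.9]))   # True (three values >= 6.0)
```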
- a computer-implemented method for predicting a detectability value of the genuineness of an object is for use in a computer-implemented method to identify if an object is a genuine or counterfeited object.
- a computer-implemented method for predicting a detectability value of the genuineness of an object wherein the selected predictive machine learning model is chosen to be able to process the digital signal representations of a specific type of an object to be detected. It is understood that a predictive machine learning model is trained based on the same type of object as an object to be identified in method for predicting a detectability value of the genuineness of an object.
- a computer-implemented method for predicting a detectability value of the genuineness of an object outputs a predicted detectability value of the genuineness of an object to be detected, wherein the value may be a continuous value, a categorical value, a hash, a vector, a tensor, an image, a matrix and the like.
- a computer-implemented method for predicting a detectability value of the genuineness of an object outputs a predicted detectability value of the genuineness of an object to be detected, wherein the value is a scalar value, such as selected from a signal to noise ratio (SNR) measurement, a difference measurement, and a distance metric.
- the detectability value of the genuineness of an object is a continuous scalar value such as SNR.
- a computer-implemented method for predicting a detectability value of the genuineness of an object outputs a predicted detectability value of the genuineness of an object to be detected, wherein the value is a label, such as selected from a binary label (such as detectable or not detectable), and a ternary label (such as detectable or not detectable or unknown).
- a computer-implemented method for predicting a detectability value of the genuineness of an object is suitable for one type of object (e.g., in step a). Therefore, different types of objects may require different methods for predicting a detectability value that may use different predictive machine learning models that are trained separately considering differences in properties of types of objects.
- a computer-implemented method for predicting a detectability value of the genuineness of an object wherein the digital signal representation of the object to be detected is obtained from acquiring a signal captured with a sensor, such as selected from an image sensor, a digital olfactory sensor, a digital chemical sensor, a microphone, a microtext reader, a barcode reader, a QR-code reader, a laser-based sensor, a code reader, a RFID reader, an infrared sensor, a UV sensor, a digital camera, and a smartphone camera.
- a signal captured with a sensor may be optionally further subjected to a step of signal pre-processing with a signal pre-processing algorithm, so as to obtain a digital signal representation.
- This pre-processing step may be included in a detectability prediction algorithm (200).
- a computer-implemented method for predicting a detectability value of the genuineness of an object wherein obtaining a digital signal representation of the object to be identified further comprises transforming, with a signal pre-processing method, the acquired digital signal representation into a digital signal representation suitable to be inputted to a predictive machine learning model. Similarly, this step may be included in a detectability prediction algorithm (200).
- a pre-processing of a signal captured with a sensor may include a step of pre-processing by a predictive machine learning model according to known methods with the use of known systems as described herein.
- Figure 2 shows an example of a processing workflow of a method for predicting a detectability value of the genuineness of an object with the use of the predictive machine learning model (noted in the figure as "ML model"), wherein this method includes an algorithm that may be referred to herein as the detectability prediction algorithm (200).
- Figure 2 shows an exemplary detectability prediction algorithm 200 according to certain embodiments of the present disclosure.
- a detectability prediction algorithm (200) takes as input one or more digital signal representations of an object as may be captured with a sensor such as for instance an image sensor.
- the detectability prediction algorithm (200) may optionally pre-process the captured digital signal representations, for instance by using geometrical transforms (e.g. scaling, rotating, translating, downsampling, upsampling, cropping, etc.), frequency domain transforms, filters and the like.
- the detectability prediction algorithm (200) may then output a predicted value of the detectability of the genuineness of the object according to its captured digital signal representations.
- a computer-implemented method for identifying if an object is a genuine or a counterfeited object comprising:
- a computer-implemented method for identifying if an object is a counterfeited object according to the invention, wherein, to identify an object as a counterfeited object, the security feature is present in the object to be identified in order to determine, from the predicted detectability value of the genuineness of an object, that the genuineness detection algorithm can or cannot detect the object as a genuine object.
- the methods of the invention make it possible to determine, from the predicted detectability value of the genuineness of an object, that the genuineness detection algorithm could or could not detect the object as a genuine object.
- a computer-implemented method for identifying if an object is a genuine or a counterfeited object uses a combination of output from the predictive machine learning model and the genuineness detection algorithm, wherein the two algorithms can be connected in series, in parallel, or may also be combined according to any logical operation such as Addition, Subtraction, Multiplier, Divider, Convolution, And, Or, Xor, Not, Comparison above, Comparison below, Comparison equal and Comparison differs.
- Exemplary embodiments of the algorithms connected in series i.e., the result of the upstream one is passed to the downstream one, are presented herein as embodiment 1 ( figure 5 ) or embodiment 2 ( figure 6 ), wherein these are not limiting examples.
- a method for identifying if an object is a genuine or a counterfeited object uses a predictive machine learning model (such as within a detectability prediction algorithm (200)) together with a genuineness detection algorithm (100), wherein the combination of the two algorithms provides that the method comprises an algorithm referred to herein as a "fake authentication algorithm", (500) in figure 5 or (600) in figure 6.
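- The following sketch illustrates one possible series combination in the spirit of the "fake authentication algorithm" described above: only when the detectability prediction algorithm (200) predicts that the capture should be detectable is a failed authentication by the genuineness detection algorithm (100) interpreted as evidence of a counterfeit. The threshold, the return labels and the callable arguments are assumptions, not the patented embodiments.

```python
def fake_authentication(representation,
                        detectability_prediction_algorithm,   # the "(200)" block, using the ML model
                        genuineness_detection_algorithm,      # the "(100)" authentication block
                        detectable_threshold: float = 6.0) -> str:
    """Series combination of the two algorithms on one digital signal representation."""
    predicted = detectability_prediction_algorithm(representation)
    if predicted < detectable_threshold:
        # The capture itself is predicted too poor to decide anything: ask for another capture.
        return "unknown - retry acquisition"
    detected_genuine = genuineness_detection_algorithm(representation)
    return "genuine" if detected_genuine else "counterfeit"
```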
- the methods and systems of the invention allow counterfeited objects to be detected thanks to the combined use of a genuineness detection algorithm and a predictive machine learning model trained according to the methods of the invention.
- a method for identifying if an object is a genuine or a counterfeited object wherein the selected genuineness detection algorithm and a predictive machine learning model are chosen to be able to process the digital signal representations of a specific type of object.
- the selected genuineness detection algorithm can be any suitable algorithm known in the art, such as selected from detectors of surface fingerprints and detectors of product markings. In particular an AlpVision fingerprint detector, an AlpVision cryptoglyph detector, a taggant detector, a Scantrust secure graphic detector, a SICPA security ink detector and the like.
- any authentication algorithm outputs a label 'genuine object' that may include true positive or false positive cases, or a label 'counterfeit object' that may include true negative or false negative cases.
- the effect of decreasing the number of false negative cases within a 'counterfeit object' label is achieved by using further statistical methods and parameters within or in combination with a genuineness detection algorithm. For example, several predictions of detectability for different acquisitions of digital representations of the same object can be made at runtime in order to form a distribution and select meaningful quantities such as the maximum or average prediction, or a confidence interval, and the like known in the art.
- a computer-implemented method for identifying if an object is a genuine or a counterfeited object based on the at least one digital signal representation of such an object to be detected uses more than one digital signal representation of the object to be detected, wherein these digital signal representations are obtained and processed sequentially or in parallel. This enhances the confidence of the prediction. For example, if the predictions are considered to be independent for different digital signal representations, then if N predictions above a given detectable prediction threshold are needed to consider the object as detectable, the false detectable prediction rate is effectively divided by N.
- Using a plurality of digital signal representations of an object to be detected makes it possible to provide, for example, a statistical distribution of these representations. This may have the effect of decreasing the number of false negative cases within a 'counterfeit object' label and increasing the confidence level in true negative cases within a 'counterfeit object' label.
- the at least two digital signal representations of an object to be detected may be obtained, for example, by obtaining a sensor capture of about 5-15 s, such as about 10 s, with a minimum of about 25-150 frames, such as about 50 frames.
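- The sketch below aggregates the per-frame detectability predictions of such a short capture (e.g. about 50 frames over about 10 s) into the kinds of distribution quantities mentioned above (maximum, average, confidence interval); the 95% interval formula assumes roughly independent frames and is purely illustrative.

```python
import numpy as np

def summarise_predictions(per_frame_predictions) -> dict:
    """Aggregate per-frame detectability predictions from one capture session."""
    values = np.asarray(per_frame_predictions, dtype=np.float64)
    mean = values.mean()
    half_width = 1.96 * values.std(ddof=1) / np.sqrt(len(values))   # rough 95% CI for the mean
    return {"max": float(values.max()),
            "mean": float(mean),
            "ci95": (float(mean - half_width), float(mean + half_width))}

# Example: about 50 frames captured over roughly 10 s of video.
frames = np.random.normal(loc=6.5, scale=1.0, size=50)
print(summarise_predictions(frames))
```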
- a predictive machine learning model is trained based on the same type of object as an object to be identified in methods for identifying if an object is a genuine or a counterfeited object according to the invention.
- these methods output a predicted detectability value of the genuineness of an object to be detected, wherein the value may be a scalar value, a label, a ground truth, a regressor, a hash, a vector, a multidimensional vector, an image, a matrix, or any categorization of a continuous variable, such as one-hot encoding of the value for each integer step, and the like.
- this method outputs a predicted detectability value of the genuineness of an object to be detected, wherein the value is a scalar value, such as selected from a signal to noise ratio (SNR) measurement, a difference measurement, and a distance metric.
- the detectability value of the genuineness of an object is a scalar value, such as SNR.
- this method outputs a predicted detectability value of the genuineness of an object to be detected, wherein the value is a label, such as selected from a binary label (such as detectable or not detectable), and a ternary label (such as detectable or not detectable or unknown).
- a method for identifying if an object is a genuine or a counterfeited object according to the invention is suitable for one type of object (e.g., in step a). Therefore, different types of objects may require different methods for identifying if an object is a genuine or a counterfeited object that may use different predictive machine learning models that are trained separately considering differences in properties of types of objects.
- a method for identifying if an object is a genuine or a counterfeited object, wherein the digital signal representation of the object to be detected is obtained by acquiring a signal captured with a sensor, such as one selected from an image sensor, a digital olfactory sensor, a digital chemical sensor, a microphone, a microtext reader, a barcode reader, a QR-code reader, a laser-based sensor, a code reader, an RFID reader, an infrared sensor, a UV sensor, a digital camera, and a smartphone camera.
- a signal captured with a sensor may optionally be further subjected to a step of signal pre-processing with a signal pre-processing algorithm, so as to obtain a digital signal representation.
- This pre-processing step may be included in a genuineness detection algorithm (100) and/or a detectability prediction algorithm (200).
- a method for identifying if an object is a genuine or a counterfeited object further comprises transforming, with a signal pre-processing method, the acquired digital signal representation into a digital signal representation suitable for input to a predictive machine learning model and/or a genuineness detection algorithm. Similarly, this step may be included in a genuineness detection algorithm (100) and/or a detectability prediction algorithm (200).
- a pre-processing of a signal captured with a sensor may include a step of pre-processing by a predictive machine learning model according to known methods with the use of known systems as described herein.
- a pre-processing of a signal captured with a sensor may include a step of pre-processing by a genuineness detection algorithm according to known methods with the use of known systems as described herein.
- a method for identifying if an object is a genuine or a counterfeited object may use two sets of digital signal representations of each of the genuine objects, wherein one set is an input to the genuineness detection algorithm and another set is an input to the predictive machine learning model. These different sets may be obtained based on different pre-processing steps (see the pre-processing sketch below).
- a method for identifying if an object is a genuine or a counterfeited object may use a Cryptoglyph detector as the selected genuineness detection algorithm, wherein the signal captured with a sensor is cropped to a larger field of view than the one used for the Cryptoglyph detector.
- a method for identifying if an object is a genuine or a counterfeited object may use a surface fingerprint detector as the selected genuineness detection algorithm, wherein the surface fingerprint detector does not perform downsampling of a signal captured with a sensor and obtains a digital signal representation suitable for further processing. Since a microstructure with a comparable distribution is also present on counterfeit objects, this will not prevent the model from identifying a digital representation of a counterfeit as detectable, while improving the rejection of unrelated objects.
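The two differently pre-processed sets mentioned above (a wider field of view for the detectability prediction model, and a tighter full-resolution region, without downsampling, for the genuineness detector) could look roughly like the following NumPy-only sketch. The crop sizes, normalisation and centre-crop strategy are assumptions for illustration, not parameters of the invention.

```python
# Sketch of producing two differently pre-processed representations
# from the same sensor capture.
import numpy as np

def center_crop(image: np.ndarray, height: int, width: int) -> np.ndarray:
    h, w = image.shape[:2]
    top = max((h - height) // 2, 0)
    left = max((w - width) // 2, 0)
    return image[top:top + height, left:left + width]

def preprocess_for_prediction(image: np.ndarray) -> np.ndarray:
    # Larger field of view than the one used by the detector, normalised to [0, 1].
    crop = center_crop(image, 512, 512)
    return crop.astype(np.float32) / 255.0

def preprocess_for_detector(image: np.ndarray) -> np.ndarray:
    # Smaller region kept at native resolution (no downsampling).
    return center_crop(image, 256, 256)

capture = np.random.randint(0, 256, size=(1024, 1024), dtype=np.uint8)  # stand-in for a sensor capture
model_input = preprocess_for_prediction(capture)
detector_input = preprocess_for_detector(capture)
```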
- exemplary embodiment 1: a computer-implemented method for identifying if an object is a genuine or a counterfeited object, the method comprising:
- exemplary embodiment 1: a computer-implemented method for identifying a counterfeited object, the method comprising:
- Figure 5 shows a processing workflow of a method for identifying if an object is a genuine or a counterfeited object according to exemplary embodiment 1.
- exemplary embodiment 2: a computer-implemented method for identifying if an object is a genuine or a counterfeited object, the method comprising:
- a computer-implemented method for identifying a counterfeited object, the method comprising:
- Figure 6 shows a processing workflow of a method for identifying if an object is a genuine or a counterfeited object according to exemplary embodiment 2.
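Figures 5 and 6 are not reproduced here, but one plausible per-acquisition control flow consistent with the embodiments above, in which the detectability prediction algorithm (200) determines whether a negative result of the genuineness detection algorithm (100) is trusted, can be sketched as follows. Both callables are placeholders; the exact decision rules are defined by the embodiments, not by this example.

```python
# Hedged sketch of a per-acquisition decision combining algorithms (100) and (200).
from typing import Callable

def identify_object(
    digital_signal: object,
    is_genuine: Callable[[object], bool],          # genuineness detection algorithm (100), placeholder
    predict_detectable: Callable[[object], bool],  # detectability prediction algorithm (200), placeholder
) -> str:
    """A negative genuineness result is only trusted when the representation
    is predicted to be detectable."""
    if is_genuine(digital_signal):
        return "genuine"
    if predict_detectable(digital_signal):
        return "counterfeit"
    return "not-detectable"  # may trigger acquisition of further representations
```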
- if the object to be identified is labelled as not-detectable, at least 2, at least 5, at least 10 or at least 50 further (or other) digital signal representations of the object to be identified may be obtained, preferably at least 50, and processed again by the methods (as seen in Figures 5 and 6).
- the methods for identifying if an object is a genuine or a counterfeited object repeat the processing of further digital signal representations of the object to be identified until a counterfeited object is identified or until a maximum allowed number of loops has been reached (time-out). In one embodiment, the maximum allowed number of loops is reached when, in at least 20% of the trials, the object is predicted detectable.
- the methods for identifying if an object is a genuine or a counterfeited object repeat the processing of further digital signal representations of the object to be identified until a genuine object is identified or until a maximum allowed number of loops is reached.
- the maximum allowed number of loops is reached when, in at least 20% of the trials, the object is predicted detectable.
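The repeat-until-identified loop with a time-out described in the last items could be sketched as below. Here `acquire` and `identify` are hypothetical stand-ins for the acquisition step and the per-acquisition method, the `(predicted_detectable, label)` return value is an assumed interface, and the 20% stop rule is implemented in the simplest possible way.

```python
# Illustrative retry loop (assumptions only): keeps processing further digital
# signal representations until a 'genuine' or 'counterfeit' label is obtained,
# or until the example time-out rule is met.
from typing import Callable, Tuple

def identify_with_retries(
    acquire: Callable[[], object],                   # hypothetical: obtains a further representation
    identify: Callable[[object], Tuple[bool, str]],  # hypothetical: returns (predicted_detectable, label)
    max_loops: int = 50,
    detectable_fraction: float = 0.2,
    min_trials: int = 5,
) -> str:
    detectable_count = 0
    for trial in range(1, max_loops + 1):
        predicted_detectable, label = identify(acquire())
        if label in ("genuine", "counterfeit"):
            return label
        if predicted_detectable:
            detectable_count += 1
        # Example time-out: stop once at least 20% of trials predicted the
        # object as detectable and no decision has been reached.
        if trial >= min_trials and detectable_count / trial >= detectable_fraction:
            break
    return "not identified (time-out)"
```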
- a data processing apparatus comprising means for carrying out the methods of the invention as described herein.
- a data processing apparatus comprising instructions which, when executed by the apparatus, cause the apparatus to carry out the methods of the invention as described herein.
- a computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the methods of the invention as described herein.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Toxicology (AREA)
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22192508.4A EP4328879A1 (de) | 2022-08-26 | 2022-08-26 | Systeme und verfahren zur vorhersage der authentifizierungserkennbarkeit gefälschter artikel |
PCT/EP2023/073442 WO2024042240A1 (en) | 2022-08-26 | 2023-08-25 | Systems and methods for predicting the authentication detectability of counterfeited items |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22192508.4A EP4328879A1 (de) | 2022-08-26 | 2022-08-26 | Systeme und verfahren zur vorhersage der authentifizierungserkennbarkeit gefälschter artikel |
Publications (1)
Publication Number | Publication Date |
---|---|
EP4328879A1 true EP4328879A1 (de) | 2024-02-28 |
Family
ID=83361015
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22192508.4A Pending EP4328879A1 (de) | 2022-08-26 | 2022-08-26 | Systeme und verfahren zur vorhersage der authentifizierungserkennbarkeit gefälschter artikel |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP4328879A1 (de) |
WO (1) | WO2024042240A1 (de) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2002025599A1 (fr) | 2000-09-20 | 2002-03-28 | Alpvision Sa | Procede destine a prevenir la contrefaçon ou l'alteration d'une surface imprimee ou gravee |
EP1295263A1 (de) | 2000-06-28 | 2003-03-26 | Sicpa Holding S.A. | Gebrauch einer kommunikationsausrüstung und verfahren für das beglaubigen eines gegenstands, systemeinheit für das beglaubigen von gegenständen und authentifizierungsgerät |
WO2004028140A1 (fr) | 2002-09-20 | 2004-04-01 | Alpvision S.A. | Procédé de marquage spatial à modulation asymétrique robuste à un sous échantillonnage spatial |
US6903342B2 (en) | 2002-06-21 | 2005-06-07 | International Currency Technologies Corporation | Banknote acceptor |
WO2006087351A2 (en) | 2005-02-15 | 2006-08-24 | Alpvision S.A. | Method to apply an invisible mark on a media |
US20090152357A1 (en) * | 2007-12-12 | 2009-06-18 | 3M Innovative Properties Company | Document verification using dynamic document identification framework |
WO2012131474A1 (en) | 2011-03-29 | 2012-10-04 | Jura Trade, Limited | Method and apparatus for generating and authenticating security documents |
WO2014201099A1 (en) | 2013-06-11 | 2014-12-18 | University Of Houston | Fixed and portable coating apparatuses and methods |
WO2015157526A1 (en) | 2014-04-09 | 2015-10-15 | Entrupy Inc. | Authenticating physical objects using machine learning from microscopic variations |
US10332247B2 (en) | 2005-09-05 | 2019-06-25 | Alpvision, S.A. | Means for using microstructure of materials surface as a unique identifier |
WO2020160377A1 (en) | 2019-01-31 | 2020-08-06 | C2Sense, Inc. | Gas sensing identification |
US20220139143A1 (en) * | 2020-11-03 | 2022-05-05 | Au10Tix Ltd. | System, method and computer program product for ascertaining document liveness |
- 2022
  - 2022-08-26 EP EP22192508.4A patent/EP4328879A1/de active Pending
- 2023
  - 2023-08-25 WO PCT/EP2023/073442 patent/WO2024042240A1/en unknown
Also Published As
Publication number | Publication date |
---|---|
WO2024042240A1 (en) | 2024-02-29 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |