EP4320603A1 - Device and method for checking a marking of a product - Google Patents

Device and method for checking a marking of a product

Info

Publication number
EP4320603A1
Authority
EP
European Patent Office
Prior art keywords
layout
image
recorded
marking
product information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22720458.3A
Other languages
German (de)
English (en)
Inventor
Michael Neuschäfer
Stephan STRELEN
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
REA Elektronik GmbH
Original Assignee
REA Elektronik GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by REA Elektronik GmbH filed Critical REA Elektronik GmbH
Publication of EP4320603A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/98 Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 Evaluation of the quality of the acquired pattern
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/141 Control of illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/752 Contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/12 Detection or correction of errors, e.g. by rescanning the pattern
    • G06V30/133 Evaluation of quality of the acquired characters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/26 Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262 Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/28 Character recognition specially adapted to the type of the alphabet, e.g. Latin alphabet
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/412 Layout analysis of documents structured with printed lines or input boxes, e.g. business forms or tables
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K5/00 Methods or arrangements for verifying the correctness of markings on a record carrier; Column detection devices
    • G06K5/04 Verifying the alignment of markings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos

Definitions

  • the invention relates to a method for checking a marking on a product, with at least one image of the marking arranged on a surface of the product being captured in an image capturing step using a calibrated inspection camera, and with the quality of the marking applied to the surface being checked, using the at least one recorded image, with regard to static data such as position and machine readability.
  • markings are used, for example, in the form of printed labels, direct printing or engraving. On the one hand, they serve to clearly identify a finished product and can also enable identification and tracking of products even after individual production steps. Markings in the form of labels have an upper side provided with information and a lower side, the lower side of the label being applied to the product to be marked. There is also the possibility of printing the marking directly onto the product to be marked or its packaging using one or more working heads, spraying it on, or burning it in using a laser. In addition to the identification of a finished product by the consumer, labels are also used in closed internal systems. For example, they are common in production, where the product must be clearly assignable and traceable after each production step in order to be able to act quickly in the event of errors or complaints.
  • the check is usually carried out using an optical method, i.e. by recording at least one image of the marking with a calibrated inspection camera, with the marking being checked for static data such as the position and its machine readability.
  • Checking for machine readability includes, among other things, checking for contrast, i.e. how well the marking contrasts with the surface bearing it (a much-simplified sketch of such a check follows below).
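As a rough illustration of such a contrast check, the following sketch computes a simplified contrast value (brightest minus darkest reflectance in the imaged region). It is only a stand-in, not the standardized ISO/IEC quality parameter, and the use of OpenCV is an assumption made for illustration.

```python
import cv2

def symbol_contrast(image_path: str) -> float:
    """Return a simplified contrast value between 0.0 (no contrast) and 1.0 (maximal)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    g = gray.astype(float) / 255.0
    return float(g.max() - g.min())  # difference between brightest and darkest reflectance
```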
  • Checking the quality includes, among other things, verifying that all elements of the marking are present and correctly executed.
  • the actual layout to be checked is compared with reference data of a target layout from a reference database.
  • a label regularly has variable data such as manufacturing information or information about the type or content of the product in question, and this information can be collectively referred to as product information.
  • the at least one piece of product information can be arranged spatially in at least one product information region in the marking or also separately on the product.
  • the product information is usually required at different points in time during the transport, storage and sale of the products in question and is read out at a given time in a code checking step, interpreted and, if necessary, compared with reference data.
  • a further, second device can be used which is capable of recognizing and interpreting machine-readable codes, as is possible with a barcode reader or a QR code scanner.
  • machine-readable product information can be applied in encrypted form and is usually built on a binary system.
  • This includes optoelectronically readable product information such as one-dimensional bars or barcodes. They consist of spaced and parallel lines of different thicknesses which can be read out and interpreted by machines.
  • one-dimensional codes are used, for example, in open systems such as the price labeling of goods in a shop.
  • two-dimensional codes such as QR codes are known, which consist of a square matrix of black and white dots that encode information in a binary system. Two-dimensional codes can, for example, convey information such as links to websites to customers in the form of a QR code.
  • ISO/IEC standards with different quality parameters, such as ISO/IEC 15416, ISO/IEC 15415, ISO/IEC 15416-1, ISO/IEC 15416-2, ISO/IEC 15420, ISO/IEC 15417, ISO/IEC 16022, and ISO/IEC 30116, are available.
  • unencrypted product information in the form of fonts, characters or languages can also be checked. Recognizing the language of a product labeling is becoming increasingly important due to ever-increasing networking within Europe, but also due to growing global exports, and not every employee is able to immediately and unequivocally recognize every language used in the labeling of a product. Furthermore, security elements such as holograms or the like applied to the goods can also be recognized and checked in the code checking step.
  • variable product information contained in the marking is recorded in a code checking step using the at least one image recorded with the inspection camera, and the recorded variable product information is compared with reference information from a reference information database, wherein a match parameter is determined with the comparison and the marking is successfully checked if the match parameter exceeds a predeterminable success threshold value.
  • with the calibrated inspection camera, which is for example an APS (active pixel sensor) camera or a CCD camera, at least one image of the marking to be inspected and arranged on a product can be recorded.
  • the inspection camera is arranged in such a way that it is aimed at the marking at a certain specified distance and angle.
  • an actual layout is created from the at least one image recorded with the checking camera.
  • the actual layout can be generated by extracting the edges of the image using a suitable digital filter and storing them in a suitable data structure as an edge model (a minimal illustrative sketch follows below). The edge model generated in this way can then be checked against the target layout stored in the reference information database with regard to static data such as position and machine readability.
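A minimal sketch of how such an edge model could be derived from a recorded image, assuming OpenCV's Canny filter as the "suitable digital filter" (the text does not name a specific filter or data structure):

```python
import cv2
import numpy as np

def build_edge_model(image_path: str, low: int = 50, high: int = 150) -> np.ndarray:
    """Reduce a captured image of the marking to a binary edge image (edge model)."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)   # suppress sensor noise before edge extraction
    return cv2.Canny(blurred, low, high)         # binary edge image serves as the edge model
```

The resulting array could then be serialized and stored in the reference information database for later comparison with the target layout.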
  • the correspondence parameter is determined, whereby if a specified threshold value of the correspondence parameter is exceeded, i.e. if the edge model corresponds to the previously defined specification, the test areas defined on the marking, which contain variable product information, are cut out and supplied to the code check.
  • evaluation algorithms adapted to this can be applied in order to analyze the recorded variable data.
  • the match parameter determined with the method according to the invention is also of the
  • the language of the identification is determined by comparing the recorded variable product information with language information from the reference information database.
  • the check can be carried out promptly after labeling or can also be used to check stock levels.
  • the language information in the reference information database can consist of predefined, isolated keywords which are made available to the reference information database via an interface or by means of an input device such as a keyboard. It is also possible to recognize the language by comparing the recorded text with dictionary entries using an AI-based OCR algorithm or with the help of an artificial neural network.
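A minimal sketch of the keyword-based variant described above. The keyword sets are hypothetical examples, and the OCR step that produces `recognized_text` is assumed to have run already; none of this is prescribed by the text.

```python
# Hypothetical keyword lists per language; a real system would load them from
# the reference information database or via an input device.
LANGUAGE_KEYWORDS = {
    "de": {"zutaten", "mindestens", "haltbar", "inhalt"},
    "en": {"ingredients", "best", "before", "contents"},
    "fr": {"ingrédients", "consommer", "avant", "contenu"},
}

def detect_language(recognized_text: str) -> str | None:
    """Return the language whose keywords overlap most with the OCR result, or None."""
    words = set(recognized_text.lower().split())
    scores = {lang: len(words & kws) for lang, kws in LANGUAGE_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None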
  • Product information is captured and by comparing the captured variable product information with reference product information from the
  • variable product information of the marking to be checked in the code check step can be encrypted in one-dimensional codes such as a bar code or with the help of two-dimensional codes such as QR codes.
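For the one- and two-dimensional codes mentioned here, a decoding step might look like the following sketch; pyzbar is merely one openly available decoder and is an illustrative choice, not part of the described method.

```python
import cv2
from pyzbar.pyzbar import decode  # third-party: pip install pyzbar

def read_codes(image_path: str) -> list[tuple[str, str]]:
    """Return (symbology, payload) pairs for every 1D/2D code found in the image."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    return [(r.type, r.data.decode("utf-8")) for r in decode(img)]

# Example result (illustrative values only):
# [("EAN13", "4006381333931"), ("QRCODE", "https://example.com")]
```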
  • variable product information can be encrypted in a security element.
  • a security element is understood to mean all features of an identification which serve to prevent or at least make more difficult counterfeiting and/or unauthorized duplication of the identification.
  • if graphics are used as security elements, they can also be converted into an edge model using the edge acquisition algorithm described at the outset and compared with reference product information.
  • Holograms or also optically variable features can be used.
  • a hologram is understood to mean a feature of the security element which can generate a representation by means of holography.
  • Optically variable features are understood to mean, for example, kinegrams, in which case the security feature of the kinegram can show different representations from different viewing angles.
  • the verification can be made possible by two or more recorded images showing the security feature from different perspectives relative to the surface of the marking, an edge extraction algorithm then being used as in the verification of graphics.
  • the inspection camera is set up to detect electromagnetic radiation, in particular ultraviolet light and/or visible light and/or infrared light, which is emitted by a light source, directed onto the marking and reflected by it, captured by the inspection camera and further processed in a data processing system.
  • the calibrated test camera is set up so that part of the light spectrum, which can extend from ultraviolet light to infrared light, can be recorded and stored in the data processing system and processed further.
  • the light source can be designed, for example, as a line light source, which preferably has a length that is greater than the largest extent of the marking to be illuminated.
  • the light source is preferably configured as an LED light source.
  • a further second light source can be arranged in such a way that it illuminates the marking at a different angle from the first light source. In this way, shadows cast and reflections that occur can be reduced, particularly with glossy or painted surfaces of the marking, and machine readability can be increased at the same time.
  • the second light source can have the same or a different emitted light spectrum than the first light source.
  • another inspection camera can also be used, which is aimed at the marking at an angle that differs from the angle of the first inspection camera.
  • the further second inspection camera can be suitable for being able to capture a light spectrum that differs from that of the first inspection camera.
  • the marking itself or individual sections or elements can be designed in such a way that they do not reflect in the visible range of the light spectrum, but with a longer or shorter wavelength.
  • a variant is conceivable in which the marking or parts of the marking only emit a light spectrum that can be recorded by the test camera after irradiation.
  • the at least one image recorded by the test camera in the image capturing step can be stored in a memory device of the data processing system.
  • the storage device is designed in such a way that the image information of the at least one recorded image can be stored in a suitable data format, read out and, if necessary, transmitted via a suitable interface.
  • Electronic memories are used here, in particular semiconductor memories such as SSD memories or flash memories, magnetic storage devices such as HDD memories, optical memories, for example DVDs, or else magneto-optical memories.
  • a target layout is generated with the desired static data of the marking, the target layout being stored in the reference information database, the static data of the target layout being compared with the static data of an actual layout, and a match parameter being determined with the comparison.
  • the target layout can be made available to the reference information database directly via an interface as a digital file, or there is the possibility of creating the target layout using an optical process.
  • if the target layout is created using an optical method, at least one image of the marking can first be recorded analogously to the recording of an image of the actual layout in the image capture step.
  • the edges of the at least one recorded image can be extracted using a suitable digital filter and stored in the storage device as an edge model.
  • the at least one product information region to be checked, which contains at least one piece of variable product information, can now be identified in the edge model, the positions of the at least one product information region being stored as a function of the edge model. It can thus be achieved that the position of the at least one product information region is precisely defined, even if the marking is rotated or shifted when the edge models are compared with one another. Furthermore, the type of product information to be checked, or the multiple pieces of product information contained there, can be stored for each of the at least one defined product information region. These include, in particular, an edge model, a text field, a date field and/or a one-dimensional or multi-dimensional code. The edge model of the marking, the defined at least one product information region and its relative position in the marking can then be saved and clearly assigned using a key (one possible data structure is sketched below).
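One way such a target-layout record, with its product information regions stored relative to the edge model and addressed via a key, might be organised is sketched below. The class and field names are assumptions made for illustration, not terminology fixed by the text.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ProductInfoRegion:
    kind: str        # e.g. "text", "date", "1d_code", "2d_code", "edge_model"
    x: int           # position and size relative to the edge model of the marking
    y: int
    w: int
    h: int

@dataclass
class TargetLayout:
    key: str                     # unique key for assignment in the reference information database
    edge_model: np.ndarray       # binary edge image of the desired marking
    regions: list[ProductInfoRegion] = field(default_factory=list)
```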
  • the target layouts stored in the reference information database and clearly assignable via a key can be compared with the static data of the actual layout.
  • in the layout checking step, the edge model of the at least one image generated in the image recording step is checked for a possible match with the edge model of the target layout.
  • the edge model of the actual layout is rotated, scaled and translated with the help of a suitable algorithm until the best possible match with the edge model of the target layout is reached.
  • the correspondence parameter determined in this way indicates the degree of correspondence between the two edge models.
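A brute-force illustration of this alignment, assuming both edge models are same-sized binary images: the actual edge model is rotated in small steps and template-matched against a padded copy of the target model, and the best normalised correlation is taken as the match parameter. The rotation range and padding are arbitrary assumptions, scaling is omitted for brevity, and a production system would likely use a more efficient contour or shape matcher.

```python
import cv2
import numpy as np

def match_parameter(actual_edges: np.ndarray, target_edges: np.ndarray,
                    max_shift: int = 20) -> float:
    """Return a correspondence value in [0, 1]; 1.0 means perfect overlap."""
    h, w = actual_edges.shape
    center = (w / 2, h / 2)
    # pad the target so translations of up to max_shift pixels can be searched
    padded = cv2.copyMakeBorder(target_edges, max_shift, max_shift, max_shift, max_shift,
                                cv2.BORDER_CONSTANT, value=0)
    best = 0.0
    for angle in np.arange(-10.0, 10.5, 0.5):          # assumed small rotation range in degrees
        rot = cv2.getRotationMatrix2D(center, float(angle), 1.0)
        rotated = cv2.warpAffine(actual_edges, rot, (w, h))
        score = cv2.matchTemplate(padded, rotated, cv2.TM_CCORR_NORMED).max()
        best = max(best, float(score))
    return best
```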
  • if the correspondence parameter exceeds a predetermined threshold value, the layout verification step is considered passed and the actual layout is submitted to the code verification.
  • the method described for determining the correspondence parameter can be carried out with the complete marking, but also with parts of the marking. It is also possible to carry out the code check step with only a section of the identification. This step increases the certainty of the determination in the case of comparing very similar markings where the
  • the actual layout is generated from the static data of the at least one recorded image, with the actual layout being stored in the storage device, and with the static data of the actual layout being compared with the static data of the target layout, with a match parameter being determined with the comparison.
  • the actual layout can be generated from the recorded image of the marking to be checked, with the recorded image being converted into an edge model using a suitable digital filter.
  • the edge model of the actual layout can be stored in the storage device and compared in the layout check with the edge model of the target layout for a possible match.
  • the target layout is a generated edge model of the desired marking, the target layout being stored as an edge model in the reference information database.
  • the edges of the image can be extracted using a suitable digital filter and the recorded image of the marking can be converted into an edge model.
  • This edge model can be stored in the reference information database in the database creation step. During the check, the actual layout is moved and/or rotated until the match between the two layouts is maximized.
  • the images to be compared can also be compared with one another using other algorithms. This includes, for example, dividing the images into brightness levels pixel by pixel and checking the brightness levels against reference data for maximum agreement, or comparing the recorded images with one another as a whole.
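The brightness-level variant mentioned here could, for example, be sketched as follows; the number of levels and the common working resolution are illustrative choices and not taken from the text.

```python
import cv2
import numpy as np

def brightness_agreement(img_a: np.ndarray, img_b: np.ndarray, levels: int = 8) -> float:
    """Quantise both images into brightness levels and return the fraction of matching pixels."""
    def quantise(img: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        gray = cv2.resize(gray, (256, 256))          # common working resolution (assumed)
        return gray // (256 // levels)               # e.g. 8 discrete brightness levels
    return float(np.mean(quantise(img_a) == quantise(img_b)))  # 1.0 = identical level maps
```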
  • the invention optionally provides that the actual layout is an edge model, the edge model being generated from the at least one image recorded in the image recording step, and the actual layout being stored as an edge model in the storage device for comparison with the target layout.
  • the conversion of an image of the marking to be checked into an edge model offers the advantage that even complex markings can be reduced to the necessary features and stored in a storage device quickly and in a space-saving manner, compared with one another and further processed. A large throughput of edge model comparisons per unit of time can be achieved here.
  • if a predetermined value of the correspondence parameter determined in the layout checking step is exceeded, the at least one product information region is cut out of the actual layout, with the variable product information of the cut-out product information regions being checked in the code check. If the layout check step is thus successful, the at least one product information region defined in the target layout is cut out in the actual layout and fed to the code check step (see the sketch below).
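A minimal sketch of this hand-over from the layout check to the code check, reusing the illustrative TargetLayout/ProductInfoRegion structure sketched earlier; the success threshold value of 0.9 is an arbitrary placeholder.

```python
import numpy as np

def extract_regions(aligned_image: np.ndarray, layout: "TargetLayout",
                    match: float, success_threshold: float = 0.9) -> list[np.ndarray]:
    """Cut out the product information regions if the layout check was passed."""
    if match < success_threshold:
        raise ValueError("layout check not passed - code check is skipped")
    return [aligned_image[r.y:r.y + r.h, r.x:r.x + r.w] for r in layout.regions]
```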
  • in the code checking step, the type of product information to be expected can be recognized, the information contained therein read out and compared with the reference information of the product information database.
  • the invention also relates to a device for checking a marking of a product, wherein in an image capturing step with a calibrated inspection camera at least one image of the marking arranged on a surface of the product is captured, and in a layout checking step the quality of the marking applied to the surface is checked, using the at least one recorded image, with regard to static data such as position and machine readability.
  • a first device is used to check an identification of a product in relation to static data such as position and machine readability, the static data being compared in a layout check step with reference data from a reference database.
  • a further second device is always used, which can read out the product information in a code checking step and compare it with reference information from a reference information database.
  • the device has a digital data processing system which is set up to use the at least one image recorded with the inspection camera to record, in a code checking step, variable product information contained in the marking in addition to checking the quality of the marking in the layout checking step, and to compare the recorded variable product information with reference information from a reference information database, with a match parameter being determined with the comparison.
  • the inspection camera used is implemented in particular as a CMOS or CCD camera, with the inspection camera preferably being arranged and fixed at a fixed angle and distance from the marking to be checked, or it can be moved from outside to the marking of the product.
  • the inspection camera is suitable for taking and forwarding high-resolution images of the markings to be checked.
  • the test camera can be connected to a digital data processing system in a signal-transmitting manner, which makes it possible for recorded images to be stored in the digital data processing system, compared with one another and processed further.
  • in addition to the first inspection camera, further inspection cameras can be used, which are aimed at the marking at an angle that differs from that of the first camera. In this way, images of the marking can be generated from different perspectives, as may be necessary, for example, when checking security elements.
  • the match parameter can be used to decide whether the marking also contains the correct product information, or whether a correction or at least a more detailed review of the product information in the product marking is necessary.
  • the correspondence parameter is compared with a predeterminable success threshold value, and it is assumed that the content is correct and of sufficient quality if the match parameter is above the success threshold.
  • the data processing system can have a storage device connected to the test camera, it being possible for images recorded with the test camera in the image recording step to be stored in the storage device.
  • the memory device can be implemented as an electronic memory, in particular a semiconductor memory such as an SSD memory or flash memory, a magnetic memory device such as an HDD memory, an optical memory, for example DVDs, or also a magneto-optical memory.
  • the storage device is connected to the test camera in a signal-transmitting manner, so that images recorded with the test camera can be stored in the storage device and retrieved again.
  • the data processing system can have a database, the database being connected to the storage device in a signal-transmitting manner, so that images stored in the storage device can be compared with images in the database.
  • the database can preferably have a database management system and a database.
  • the database comprises the stored data, in particular images and edge models of the actual and target layouts in suitable file formats, with the database management system being able to define and control access to and storage of the data.
  • the inspection camera has at least one light source with which the marking to be checked can be illuminated.
  • the light source is preferably a beam light source, in particular an LED lamp, which is aimed at the marking to be illuminated and checked and can illuminate it completely.
  • the light source can be arranged and fixed on the inspection camera, for example as a ring light, with a variant also being conceivable in which the light source is arranged at a distance from the inspection camera and can be moved if necessary.
  • at least one additional light source is arranged, which preferably illuminates the marking at an angle that differs from the first light source, in order to prevent or minimize shadows and/or reflections of the marking.
  • the light source can also be arranged at a distance from the identification to be checked, in which case the light can be directed onto the identification by means of a light guide.
  • the light source can preferably emit a predetermined spectrum of ultraviolet light and/or visible light and/or infrared light, whereby light sources with different electromagnetic spectrums can be used.
  • the inspection camera is set up to be able to capture the light spectrum of ultraviolet light and/or visible light and/or infrared light.
  • the test camera can preferably capture the same light spectrum that can be emitted by the at least one light source used. This includes in particular visible light and ultraviolet light.
  • Ultraviolet light can be emitted, for example, by a fluorescent or phosphorescent security feature of the marking or the marking itself when using a light-emitting imprint.
  • FIG. 1 shows a schematic overview of the testing method with a layout testing step and a subsequent code testing step.
  • FIG. 1 shows a schematic overview of the testing method for testing an identification of a product.
  • the method has a layout check step 1 and a code check step 2 which can be carried out after the layout check step 1 has been passed.
  • in the layout check step 1, static data of the marking are checked for a possible match, while variable product information within the marking is checked in the subsequent code check step 2.
  • in the image capture step 3, the actual layout of the marking to be checked is created.
  • an image of the marking 4 is recorded with a calibrated inspection camera.
  • the edges of the image are extracted with an appropriate digital filter and stored, in a data format suitable for edge models, as the edge model of the actual layout 5.
  • the target layout is generated by taking an image of the desired marking 7 in a reference database generation step 6, preferably with the same calibrated inspection camera or with a comparable calibrated inspection camera, and converting it into an edge model 8 analogously to the actual layout.
  • in the target layout, the at least one region which contains at least one piece of variable product information is marked as product information region 9. Its position is stored as a function of the position of the edge model 8.
  • the edge model 8 of the marking as well as the relative position of the at least one product information region 9 are assigned a key for unique assignment, and all data are stored in the reference information database.
  • the generation of the desired layout or the reference database generation step 6 usually only has to be carried out once before the layout check step 1 is carried out for a large number of identifications of a corresponding number of products.
  • the edge model of the actual layout 5 is then compared with the edge model of the target layout 8 for a possible match 10.
  • the edge models 5, 8 are superimposed and shifted and rotated until maximum agreement is reached.
  • a match parameter is assigned to this match. If a defined value of the match parameter is exceeded, and thus a predetermined minimum match of the actual and target layouts 5, 8 is achieved, the code check step 2 follows the now completed layout check step 1. If the minimum value is not reached, the layout check step 1 is regarded as not passed.
  • in the code check step 2, which follows the layout check step 1, the actual layout created in layout check step 1 is checked for variable product information that is present and encrypted in the marking.
  • the at least one product information region, whose position relative to the marking was stored in the reference information database, is located, and the product information contained in the product information region is read out 12 and compared with reference information from the product information database or a manual entry 13.
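Putting the illustrative pieces from the preceding sketches together, the two-step flow of FIG. 1 might be orchestrated roughly as follows. All function names refer to the hedged sketches above, the threshold is an arbitrary placeholder, and the reference payload set is made-up example data, not the described implementation.

```python
def check_marking(image_path: str, target: "TargetLayout",
                  reference_payloads: set[str],
                  success_threshold: float = 0.9) -> bool:
    """Layout check step 1 followed, on success, by code check step 2."""
    actual_edges = build_edge_model(image_path)               # image capture step 3
    match = match_parameter(actual_edges, target.edge_model)  # layout check step 1
    if match < success_threshold:
        return False                                          # layout check step 1 not passed
    codes = read_codes(image_path)                            # code check step 2
    return any(payload in reference_payloads for _, payload in codes)
```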

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method for checking a marking of a product. In an image capturing step (3), at least one image (4) of the marking arranged on a surface of the product is captured by a calibrated inspection camera, and in a layout checking step (1), the quality of the marking applied to the surface is checked, using the at least one recorded image (4), with regard to static data such as position and machine readability. With the at least one image (4) recorded by the inspection camera, in addition to checking the quality of the marking in the layout checking step (1), variable product information (12) contained in the marking is captured in a code checking step (2), and the captured variable product information is compared with reference information from a reference information database (13). A match parameter is determined by means of the comparison.
EP22720458.3A 2021-04-09 2022-04-08 Device and method for checking a marking of a product Pending EP4320603A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021108925.5A DE102021108925A1 (de) 2021-04-09 2021-04-09 Device and method for checking a marking of a product
PCT/EP2022/059400 WO2022214647A1 (fr) 2021-04-09 2022-04-08 Dispositif et procédé de vérification d'un marquage d'un produit

Publications (1)

Publication Number Publication Date
EP4320603A1 true EP4320603A1 (fr) 2024-02-14

Family

ID=81454776

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22720458.3A Pending EP4320603A1 (fr) 2021-04-09 2022-04-08 Dispositif et procédé de vérification d'un marquage d'un produit

Country Status (4)

Country Link
US (1) US20240185596A1 (fr)
EP (1) EP4320603A1 (fr)
DE (1) DE102021108925A1 (fr)
WO (1) WO2022214647A1 (fr)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020087574A1 (en) * 2000-12-15 2002-07-04 Walsh Terrence P. Method for automating inspecting labels
CA2926436A1 (fr) * 2013-10-07 2015-04-16 Judith Murrah Multimode image and spectral reader
GB201803795D0 (en) * 2018-03-09 2018-04-25 Prisymid Ltd Label data processing system

Also Published As

Publication number Publication date
US20240185596A1 (en) 2024-06-06
WO2022214647A1 (fr) 2022-10-13
DE102021108925A1 (de) 2022-10-13

Similar Documents

Publication Publication Date Title
DE102018109392A1 (de) Method for capturing optical codes, automation system and computer program product for carrying out the method
EP2417561B1 (fr) Two-dimensional code and method
DE19910226B4 (de) Device and method for marking and identifying a sample vial
EP2463101B1 (fr) System and method for producing and inspecting prints with static and variable content
DE102017114081B4 (de) Device and method for all-round inspection of containers on a conveyor belt
EP3746992B1 (fr) Method for checking the authenticity and/or integrity of a security document having a printed security feature, security feature and device for verification
EP2558976A1 (fr) Method for identifying a substrate
CH710713B1 (de) Authentication method using surface paper texture.
DE19957390A1 (de) Individualization system for an object
DE102006011143A1 (de) Security marking system
DE102021112659A1 (de) Using barcodes to determine the dimensions of an object
DE102019118954A1 (de) Automatic reading of form data
EP4320603A1 (fr) Device and method for checking a marking of a product
EP2394250B1 (fr) Method and device for checking documents using a wavelet transform
DE102007050691A1 (de) Devices, methods and process for stochastic marking and tracing of printed products
DE19822751A1 (de) System and method for identifying and authenticating accessories, auxiliary and/or operating materials for technical devices
WO2012175542A1 (fr) Method and device for creating a reference data set of a document from a document
WO2020126926A1 (fr) Calibration method for improving the verification of authentication patterns by means of digital image acquisition devices
EP1139285B1 (fr) Method and device for checking or inspecting objects
DE102008007731A1 (de) Method and device for identifying and authenticating objects
DE102015102994A1 (de) Security label and method for its operation
DE102019201529A1 (de) Control device for a production plant, tracking device, production data determiner, methods and computer program
EP4205091B1 (fr) Method for producing a digital identifier of a copy of a printed product, said copy having at least one printed image, smartphone or tablet equipped with such a device, and method for using this device
DE102020120962B4 (de) Recognition method and recognition system for uniquely recognizing an object
DE102015208121A1 (de) Method for obtaining information from a coding body, system with a coding body, computer program product and data storage means

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230922

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR