WO2016157168A2 - Printed medium having a machine-readable image printed thereon, and system and method of scanning the machine-readable image - Google Patents

Printed medium having a machine-readable image printed thereon, and system and method of scanning the machine-readable image

Info

Publication number
WO2016157168A2
WO2016157168A2 (PCT/IL2016/050274)
Authority
WO
WIPO (PCT)
Prior art keywords
visual
machine
readable image
code
visual element
Prior art date
Application number
PCT/IL2016/050274
Other languages
English (en)
Other versions
WO2016157168A3 (fr)
Inventor
Itamar FRIEDMAN
Original Assignee
Eyeconit Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eyeconit Ltd.
Publication of WO2016157168A2
Publication of WO2016157168A3


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B42 BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
    • B42D BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
    • B42D25/00 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
    • B42D25/20 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof characterised by a particular use or purpose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/08 Payment architectures
    • G06Q20/20 Point-of-sale [POS] network systems
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B42 BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
    • B42D BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
    • B42D25/00 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
    • B42D25/20 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof characterised by a particular use or purpose
    • B42D25/22 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof characterised by a particular use or purpose for use in combination with accessories specially adapted for information-bearing cards
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B42 BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
    • B42D BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
    • B42D25/00 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
    • B42D25/30 Identification or security features, e.g. for preventing forgery
    • B42D25/324 Reliefs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/327 Short range or proximity payments by means of M-devices
    • G06Q20/3276 Short range or proximity payments by means of M-devices using a pictured code, e.g. barcode or QR-code, being read by the M-device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K19/00 Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K19/06 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K19/06009 Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
    • G06K19/06046 Constructional details

Definitions

  • the presently disclosed subject matter relates, in general, to the field of machine-readable images and, more particularly, to a printed medium having a machine-readable image printed thereon, and a system and method of scanning the machine-readable image.
  • One-dimensional barcodes and two-dimensional codes have been developed as machine-readable image representations of information. Many two-dimensional codes represent data as a distribution or pattern of dots in a certain grid, such as a matrix code.
  • A Quick Response (QR) Code is one well-known example of such a matrix code.
  • a QR Code comprises an array of black cells (square dark dots) and white cells (square light dots), with the black cells arranged in a square pattern on a white background. In some other cases, a negative variant, in which the background is black and the cells are white, is valid as well.
  • three distinctive squares, known as finder patterns, are located at the corners of the matrix code, so that image size, orientation, and angle of viewing can be normalized. Other functional patterns, such as the alignment and timing patterns, enhance this process.
  • Two-dimensional codes are also used in product authentication systems.
  • a standard QR Code is positioned on the packaging of a product, identifying it as a genuine product.
  • a possible implementation of an anti-counterfeit system can enable a customer to scan the QR Code and inform the customer whether the product is estimated to be genuine or fake.
  • a standard printed two-dimensional code can be easily photocopied and its printed copy can be positioned on a fake product.
  • Holograms are also used for product authentication. Holograms can be produced in such a way that they are harder to copy than a standard printed image. Holograms are often attached to product packaging, and can be an indication that a product is genuine. However, without the knowledge to tell a real hologram from a fake one, a fake hologram, even if not copied exactly, may look similar to a genuine hologram to a customer's naked eye, or give the customer the impression of a genuine one.

GENERAL DESCRIPTION
  • One of the technical problems to be solved herein relates to how to provide information related to a certain entity, such as a product, to customers in a better and more secure way, in particular for authentication and anti-counterfeit purposes, especially in comparison with the prior art, in which a standard barcode or two-dimensional code printed on the packaging of a product can very easily be photocopied and positioned on a fake product.
  • a computerized method of scanning a machine-readable image printed on a medium by a scanning device, the medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, wherein at least one of the plurality of views is embedded with a visual element being a visual code having data encoded therein, the method comprising: i) sequentially, for each of the plurality of views, detecting the visual element embedded therein and analyzing the detected visual element to obtain information associated therewith; and ii) determining whether the scanning process is successful at least based on a matching relationship between the information associated with the detected visual elements.
  • a computerized system of scanning a machine-readable image printed on a medium by a scanning device, the medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, wherein at least one of the plurality of views is embedded with a visual element being a visual code having data encoded therein,
  • the system comprising a processor configured to: i) sequentially, for each of the plurality of views, detect the visual element embedded therein; and analyze the detected visual element to obtain information associated therewith, and ii) determine whether the scanning process is successful at least based on a matching relationship between the information associated with the detected visual elements.
  • a non-transitory computer readable storage medium tangibly embodying a program of instructions executable by a machine to scan a machine-readable image printed on a medium by a scanning device, the printed medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, wherein at least one of the plurality of views is embedded with a visual element being a visual code having data encoded therein, the program comprising the following steps: i) sequentially, for each of the plurality of views, detecting the visual element embedded therein and analyzing the detected visual element to obtain information associated therewith; and ii) determining whether the scanning process is successful at least based on a matching relationship between the information associated with the detected visual elements.
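As a minimal, non-authoritative illustration of the claimed flow, the sequential detection/analysis and the matching-relationship verification can be sketched in Python. The helper callables `detect` and `analyze`, and the dictionary keys `element_id`/`next_element_id`, are assumptions introduced here for illustration and are not terms from the disclosure itself.

```python
def scan_views(views, detect, analyze, required_n=2):
    """Step i): sequentially detect and analyze the visual element in each view.
    Step ii): verify the scan from the matching relationship between the
    information associated with consecutively detected elements."""
    infos = []
    for view in views:
        element = detect(view)          # device-specific detection of the embedded element
        if element is None:
            continue                    # nothing recognizable in this view
        infos.append(analyze(element))  # e.g. decode a visual code or describe a graphic

    if len(infos) < required_n:
        return False                    # not enough visual elements were scanned

    # Each element is assumed to designate the next one (e.g. via a pair ID).
    return all(cur.get("next_element_id") == nxt.get("element_id")
               for cur, nxt in zip(infos, infos[1:]))


# Usage with trivial stand-ins for two views:
ok = scan_views(
    ["v1", "v2"],
    detect=lambda view: view,
    analyze=lambda e: {"element_id": e,
                       "next_element_id": "v2" if e == "v1" else None},
)
```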
  • a computerized method of scanning a machine-readable image printed on a medium by a scanning device, the medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views being embedded with a respective graphic including a visual feature,
  • the method comprising: i) sequentially, for each of the plurality of views, a) detecting the visual feature included in the graphic embedded in the view, and b) calculating a descriptor based on the detected visual feature, giving rise to a plurality of descriptors of respective visual features, one for each view; and ii) determining whether the scanning process is successful based on the plurality of descriptors or information associated therewith.
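Purely as an illustrative sketch of how a descriptor could be calculated for the visual feature in a captured view, the example below uses OpenCV's ORB keypoint descriptor; the choice of ORB (and of OpenCV at all) is an assumption, as the disclosure does not prescribe any particular descriptor.

```python
import cv2

def describe_view(image_bgr):
    """Return a keypoint descriptor array for the visual feature in a captured view,
    or None if no feature is found."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=200)
    _keypoints, descriptors = orb.detectAndCompute(gray, None)
    return descriptors
```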
  • a computerized method of generating a machine-readable image printed on a medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, the method comprising: embedding a plurality of visual elements, each in a view of the plurality of views on the medium, constituting the machine-readable image, the embedding including embedding a visual code having data encoded therein in at least one of the plurality of views; the machine-readable image being adapted to be scanned by a scanning device in a scanning process, the process including sequential detection and analysis of the visual elements to obtain information associated therewith and verification of a successful scanning process at least based on a matching relationship between the information associated with the visual elements.
  • a printed medium having a machine-readable image printed thereon, the printed medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, wherein at least one of the plurality of views is embedded with a visual element being a visual code having data encoded therein, the machine-readable image being adapted to be scanned by a scanning device in a scanning process, the scanning process including sequential detection and analysis of the visual elements to obtain information associated therewith and verification of a successful scanning process at least based on a matching relationship between the information associated with the visual elements.
  • a printed medium having a machine-readable image printed thereon, the medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views being embedded with a respective graphic including a visual feature, giving rise to a plurality of graphics including respective visual features, the machine-readable image being adapted to be scanned by a scanning device in a scanning process, the scanning process including sequential detection of the visual features and calculation of descriptors, each based on a detected visual feature, and verification of a successful scanning process based on the descriptors and the information associated therewith.
  • the visual element can be selected from the following: i) a visual code having data encoded therein, and ii) a graphic including a visual feature.
  • the visual code can be a two-dimensional code having an input image embedded therein.
  • the visual code can be a two-dimensional code having an input image embedded therein, wherein decoded values of cells in the two-dimensional code that correspond to the encoded data are determined such that the appearance of the two-dimensional code complies with a visual similarity criterion when compared with the input image.
  • the visual code can be a two-dimensional code having an input image embedded therein, the input image associated with an image descriptor used in a reading process of the two-dimensional code, wherein cells having decoded values corresponding to the encoded data in the two-dimensional code are positioned in one or more regions relative to the input image.
  • the information associated with each visual element can comprise one or more of the following: a detection instruction for detecting a next visual element, an identification indicator of the visual element, information of a product with which the medium is associated, and a URL.
  • the detection instruction can include information of a designated next visual element, and the matching relationship is between the information of a designated next visual element included in the information associated with the visual element and the identification indicator included in the information associated with the next visual element.
  • the detection instruction can further include an indication of a relative position between the scanning device and the machine-readable image for detecting the next visual element.
  • the indication of a relative position included in the detection instruction can be provided to a user on a display of the scanning device.
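One possible in-memory representation of the information associated with a visual element, including its detection instruction, is sketched below; all field names are illustrative assumptions rather than terms defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DetectionInstruction:
    next_element_id: Optional[str] = None             # identification indicator of the designated next element
    next_feature_descriptor: Optional[bytes] = None   # descriptor of a designated visual feature
    relative_position_hint: Optional[str] = None      # e.g. "move-left", "tilt", "shake"

@dataclass
class ElementInfo:
    element_id: str                                   # identification indicator (e.g. a UID) of this element
    product_info: Optional[str] = None                # information of the associated product
    url: Optional[str] = None
    detection_instruction: Optional[DetectionInstruction] = None
```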
  • the determining can be further based on the number of visual elements that is defined as sufficient for determining the scanning process as successful.
  • the plurality of views can include at least a first view embedded with a first visual element and a second view embedded with a second visual element
  • the method can comprise: detecting the first visual element embedded in the first view of the machine-readable image and analyzing the first visual element to obtain information associated therewith; detecting the second visual element embedded in the second view of the machine-readable image and analyzing the second visual element to obtain information associated therewith; and determining whether the scanning of the machine-readable image is successful at least based on a matching relationship between the information associated with the first visual element and the second visual element.
  • the first visual element can be a first visual code having first data encoded therein
  • the second visual element can be a graphic including a visual feature.
  • the analyzing the first visual element can include: decoding the first visual code to obtain the first data encoded therein, the first data being the information associated with the first visual element and including a detection instruction for detecting the second visual element, the detection instruction including: a) a descriptor representing a designated visual feature, and b) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual element.
  • the detecting the second visual element can comprise detecting the visual feature included in the graphic in accordance with the detection instruction.
  • the analyzing the second visual element can include calculating a descriptor representing the visual feature in the graphic, the descriptor being the information associated with the second visual element.
  • the determining can include: determining the scanning process as successful if the descriptor representing the visual feature of the graphic matches the descriptor representing the designated visual feature.
  • the plurality of views can further include a third view embedded with a third visual element, the third visual element being a second visual code having second data encoded therein.
  • the detection instruction included in the first data can comprise: a) a descriptor representing a designated visual feature, b) an identification indicator for a designated visual code, and c) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual element and the third visual element.
  • the method can further comprise: detecting the second visual code embedded in the third view of the machine-readable image in accordance with the detection instruction, decoding the second visual code to obtain the second data encoded therein including an identification indicator of the second visual code, and determining the scanning of the machine-readable image as successful if the descriptor representing the visual feature of the graphic matches the descriptor representing the designated visual feature, and if the identification indicator of the second visual code matches the identification indicator for a designated visual code in the detection instruction.
  • the first visual element can be a first visual code having first data encoded therein
  • the second visual element can be a second visual code having second data encoded therein.
  • the analyzing the first visual element can include: decoding the first visual code to obtain the first data encoded therein, the first data being the information associated with the first visual code and including the detection instruction for detecting the second visual code, the detection instruction including: a) an identification indicator for a designated second visual code, and b) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual code.
  • the detecting the second visual element can comprise detecting the second visual code in accordance with the detection instruction.
  • the analyzing the second visual element can include decoding the second visual code to obtain the second data encoded therein, the second data being the information associated with the second visual code and including an identification indicator of the second visual code.
  • the determining can include determining the scanning of the machine-readable image as successful if the identification indicator of the second visual code matches the identification indicator for a designated second visual code included in the detection instruction.
  • the plurality of views can include at least a first view embedded with a first graphic including a first visual feature and a second view embedded with a second graphic including a second visual feature.
  • the method can comprise: detecting the first visual feature included in the first graphic in the first view and calculating a first descriptor based on the detected first visual feature, the first descriptor being associated with a detection instruction including a) a descriptor of a designated visual feature, and b) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual feature; detecting the second visual feature included in the second graphic in the second view in accordance with the detection instruction and calculating a second descriptor based on the detected second visual feature; and determining the scanning process as successful if the second descriptor of the second visual feature matches the descriptor of the designated visual feature.
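As a rough sketch of that final matching step, two ORB-style binary descriptors can be compared with a brute-force Hamming matcher; the matcher choice and the match-count threshold are illustrative assumptions, not requirements of the disclosure.

```python
import cv2

def descriptors_match(designated_desc, observed_desc, min_good_matches=20):
    """True if the descriptor computed from the second graphic is close enough
    to the designated descriptor carried in the detection instruction."""
    if designated_desc is None or observed_desc is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(designated_desc, observed_desc)
    return len(matches) >= min_good_matches
```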
  • the medium can be one of the following: a lenticular print, and a hologram print.
  • the printed medium can be in the form of a card.
  • the printed medium can be attached to a surface of a product or accessory thereof.
  • the printed medium can be connected to a product or accessory thereof by means of a stripe.
  • the printed medium can also be unattached to a product or accessory thereof and can be packed within a package of the product.
  • the printed medium can have an array of lenses thereon capable of presenting the plurality of views of the machine-readable image when being observed from different viewpoints.
  • the machine-readable image can be printed on one side of the array of lenses.
  • the printed medium can further comprise a substrate attached to one side of the array of lenses, and the machine-readable image can be printed on the substrate.
  • the method can further comprise providing an indication that a product is genuine, and/or product information, and/or a URL of a product in response to a determination that the scanning of the machine-readable image is successful.
  • the indication of a relative position included in the detection instruction can include one of the following: a direction indicator to move the scanning device relative to the printed medium, a direction indicator to move the printed medium relative to the scanning device, and a shaking indicator to shake the scanning device in order to capture the next visual element.
  • the determining whether the scanning of the machine-readable image is successful can be irrespective of which visual element is first detected.
  • the printed medium having a machine-readable image printed thereon, and a scanning process thereof, can provide customers with a more interactive and secure way to obtain entity-related information, such as, e.g., product-related information.
  • In particular, such an implementation makes it much harder for counterfeit products to imitate real products, since the printed medium having the machine-readable image printed thereon cannot simply be photocopied and duplicated, due to the above-described technical characteristics thereof.
  • FIG. 1A is a schematic illustration of an exemplified machine-readable image printed on a medium, the machine-readable image presenting different views when being observed from different viewpoints, in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 1B illustrates an exemplified longitudinal cut of a lenticular print having a machine-readable image printed thereon in accordance with certain embodiments of the presently disclosed subject matter
  • FIG. 2 schematically illustrates a functional block diagram of a system for scanning a machine-readable image printed on a medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter
  • FIG. 3 illustrates a generalized flowchart of scanning a machine -readable image printed on a medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter
  • Fig. 4 illustrates a generalized flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having at least a first view and a second view in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 5A illustrates a generalized flowchart of a scanning process of a machine- readable image printed on a medium, the machine-readable image having at least a first view embedded with a first visual code having first data encoded therein, and the second view embedded with a graphic including a visual feature in accordance with certain embodiments of the presently disclosed subject matter
  • Fig. 5B illustrates a generalized flowchart of a continuing scanning process of a machine-readable image printed on a medium, the machine-readable image having a third view embedded with a second visual code having second data encoded therein in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 6 illustrates a generalized flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having a first view embedded with a first visual code having first data encoded therein, and the second view embedded with a second visual code having second data encoded therein in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 7 illustrates a general flowchart of a scanning process of a machine- readable image printed on a printed medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter
  • Fig. 8 illustrates a general flowchart of a scanning process of a machine- readable image printed on a medium, the machine-readable image having at least a first view embedded with a first graphic including a first visual feature, and the second view embedded with a second graphic including a second visual feature in accordance with certain embodiments of the presently disclosed subject matter;
  • Figs. 9A-9D show exemplified illustrations of different kinds of two- dimensional codes each embedding an input image in accordance with certain embodiments of the presently disclosed subject matter;
  • Fig. 10A illustrates the printed medium in the form of a card and having a machine-readable image embedded therein in accordance with certain embodiments of the presently disclosed subject matter
  • Fig. 10B illustrates a product having a card attached thereto in accordance with certain embodiments of the presently disclosed subject matter.
  • Fig. 10C illustrates a product having a card connected thereto in accordance with certain embodiments of the presently disclosed subject matter.
  • DSP: digital signal processor
  • FPGA: field programmable gate array
  • ASIC: application specific integrated circuit
  • the term “non-transitory” is used herein to exclude transitory, propagating signals, but to otherwise include any volatile or non-volatile computer memory technology suitable to the presently disclosed subject matter.
  • the phrases “for example”, “such as”, “for instance” and variants thereof describe non-limiting embodiments of the presently disclosed subject matter.
  • Reference in the specification to “one case”, “some cases”, “other cases” or variants thereof means that a particular feature, structure or characteristic described in connection with the embodiment(s) is included in at least one embodiment of the presently disclosed subject matter.
  • the appearance of the phrase “one case”, “some cases”, “other cases” or variants thereof does not necessarily refer to the same embodiment(s).
  • one or more stages illustrated in the figures may be executed in a different order and/or one or more groups of stages may be executed simultaneously and vice versa.
  • FIG. 1A illustrates an exemplified machine-readable image printed on a medium, the machine-readable image presenting different views when being observed from different viewpoints, in accordance with certain embodiments of the presently disclosed subject matter.
  • the term “machine-readable image” should be expansively construed to cover any image that can be detected by an image acquisition module and/or detection module and then digitally analyzed by a processing unit to provide information associated therewith (including, e.g., numerical data, strings, pointers and/or any other digital data).
  • the machine-readable image used herein refers to a printed machine-readable image that is printed on a dedicated medium, the medium (hereinafter also termed “printed medium”) capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints or angles.
  • As shown in Fig. 1A, the machine-readable image 102 as exemplified shows three different views 106, 108 and 110 respectively when being observed by human eyes from three different angles, such as, e.g., a left angle, a middle angle (e.g., a 90-degree angle, perpendicular to the image) and a right angle.
  • the different viewpoints or angles or directions from which the image can be viewed can differ from each other by different intervals varying from a small angle, such as 5 degrees, to a larger angle such as, e.g., 90 or 80 degrees.
  • each of the plurality of views of the machine-readable image can have a respective visual element embedded or incorporated therein.
  • each visual element serves as a part of the machine-readable image, and can be any of the following (but not limited to): i) a visual code having data encoded therein, as will be described in detail below, and ii) a graphic including a visual feature.
  • the graphic can be one original image or portion thereof, or a combination of a plurality of original images, and the visual feature included therein can be, for example, a human recognizable pattern or symbol, such as, e.g., a logo, an icon, etc, which is not structurally encoded or constructed as compared to the visual code.
  • the visual element can also possibly be in other suitable visual formats or patterns in addition to the above described.
  • view 106 observed from the left viewpoint is embedded with a visual code 107
  • view 108 observed from the middle viewpoint is embedded with a graphic 109 having a visual feature of a snowflake
  • view 110 observed from the right viewpoint is embedded with another visual code 111.
  • the machine-readable image can have more views than the plurality of views that embed visual elements as described above.
  • the other views can either be empty and have no content, or can be embedded with graphics or images having no identifying visual features, such as, e.g., a white background image.
  • the machine-readable image 102 in fact is a manifold machine-readable image which is composed of multiple views embedding respective visual elements when observed from different viewpoints.
  • the terms “machine-readable image”, “printed machine-readable image”, “manifold machine-readable image” or the like are used throughout the description. Unless specifically stated otherwise, or it is apparent from the description, these terms are used interchangeably to refer to the above described machine-readable image with multiple views.
  • the term “visual code” used herein should be expansively construed to cover any kind of machine-readable optical label that uses standardized encoding modes to encode data and store information.
  • a visual code can be a one-dimensional barcode, or alternatively it can be a two-dimensional code.
  • the term “two-dimensional code” used herein should be expansively construed to cover any optical machine-readable representation of data in the form of a two-dimensional pattern of symbols.
  • one example is a matrix code, which represents data by a distribution of dots in a matrix grid, such as, for example, a Quick Response (QR) code or EZcode.
  • the visual code can be a two-dimensional code having an input image or graphic embedded therein.
  • In Figs. 9A-9D there are shown exemplified illustrations of different kinds of two-dimensional codes each embedding an input image in accordance with certain embodiments of the presently disclosed subject matter.
  • Fig. 9A and Fig. 9B are two-dimensional codes that have input images superimposed thereon. The superimposing is performed, e.g., by changing the transparency of the dots/cells in the two-dimensional code without changing the distribution of the dots or adjusting the decoded values thereof, such that the two-dimensional code, after being superimposed with the input image, is still machine-readable.
  • FIG. 9C shows a different kind of two-dimensional code (similar to the visual codes illustrated in view 106 and view 110 in Fig. 1A) in which the input image is not merely superimposed thereon as described with respect to Fig. 9A and Fig. 9B.
  • the decoded values of dots that correspond to the encoded data are in fact determined such that the appearance of the two-dimensional code complies with a visual similarity criterion when compared with the input image.
  • An exemplified illustration of such a two-dimensional code is described in US patent No. 8,978,989, issued on March 17, 2015, which is incorporated herein in its entirety by reference.
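A visual similarity criterion of this kind could, for instance, be evaluated by comparing the rendered code with the input image it embeds and thresholding a pixel-wise difference. The metric and threshold below are illustrative assumptions only; the cited patent defines its own criterion.

```python
import numpy as np

def complies_with_similarity(code_img, input_img, max_mean_abs_diff=40.0):
    """True if the rendered two-dimensional code is, on average, within a
    pixel-intensity budget of the input image it embeds (both given as
    grayscale arrays of identical shape)."""
    code = code_img.astype(np.float32)
    ref = input_img.astype(np.float32)
    return float(np.mean(np.abs(code - ref))) <= max_mean_abs_diff
```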
  • Fig. 9D shows another kind of two-dimensional code having an input image embedded therein.
  • the input image can be associated with an image descriptor which is used to verify the authenticity of the two-dimensional code in the reading process, thus rendering the code functionally safer and stronger.
  • the dots having decoded values corresponding to the encoded data in the two-dimensional code can be positioned in one or more encoding regions relative to the function patterns and a portion of the input image, rendering the two-dimensional code visually more appealing.
  • An exemplified illustration of such a two-dimensional code is described in US patent application No. 62/097,748, filed on December 30, 2014, which is incorporated herein in its entirety by reference.
  • the printed medium 104 can include an array of lenses capable of presenting the plurality of views of the machine-readable image when being observed from different viewpoints. In some cases, the machine-readable image is printed on one side of the array of lenses. In some other cases, the printed medium further comprises a substrate attached to one side of the array of lenses, and the machine-readable image is printed on the substrate.
  • the printed medium 104 used herein can include any suitable medium that enables such a multi-view display of the machine-readable image, such as, for example, a lenticular print.
  • Fig. 1B illustrates an exemplified longitudinal cut of a lenticular print having a machine-readable image printed thereon in accordance with certain embodiments of the presently disclosed subject matter.
  • the lenticular print comprises a series of lenticular lenses 120 (e.g., cylindrical lenses, as seen in Fig. 1B) molded on a substrate 122.
  • the lenticular lens 120 can be an array of magnifying lenses, designed so that when viewed from slightly different angles, different images are magnified.
  • the substrate 122 can be made of any suitable material, such as, e.g., plastic.
  • a machine-readable image, such as, e.g., the exemplified machine-readable image 102 shown in Fig. 1A, can be generated and embedded in such a lenticular print.
  • Such generating and embedding process can include the following steps, according to certain embodiments of the presently disclosed subject matter:
  • a lenticular image 124 can be created first based on a plurality of existing images (such as, e.g., images each containing a visual element of 107, 109 and 111). Specifically, each image containing a visual element can be arranged (sliced) into strips, which are then interlaced with other similarly arranged images, giving rise to the lenticular image 124 comprising interlaced strips of the plurality of images containing visual elements. b) The lenticular image 124 can then be printed directly to the back (smooth side) of the lens 120, or alternatively it can be printed to the substrate 122 (e.g., printed on a synthetic paper, which is then bonded to the plastic), and laminated to the lenses 120.
  • the lenses 120 are accurately aligned with the interlaces of the lenticular image, so that light reflected off each strip is refracted in a slightly different direction, but the light from all pixels originating from the same original image containing a visual element is sent in the same direction.
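To make step a) above concrete, the interlacing of the per-view images can be sketched as follows; the fixed strip width and the simple column-cycling scheme are simplifying assumptions for illustration, not the disclosed manufacturing process.

```python
import numpy as np

def interlace(views, strip_width=4):
    """Build a lenticular image by taking vertical strips from the per-view
    images in turn. `views` is a list of H x W x 3 arrays of identical shape,
    one per viewpoint (e.g. the images containing elements 107, 109 and 111)."""
    out = np.zeros_like(views[0])
    n = len(views)
    w = views[0].shape[1]
    for x in range(0, w, strip_width):
        view_idx = (x // strip_width) % n      # cycle through the views strip by strip
        out[:, x:x + strip_width] = views[view_idx][:, x:x + strip_width]
    return out
```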
  • a lenticular print and a lenticular image are only one possible way to implement the manifold machine-readable image disclosed herein and should not be construed to limit the present disclosure in any way.
  • Other suitable printed media, such as, by way of example, a hologram print and a hologram image, can also be used to generate such a machine-readable image.
  • the printed machine-readable image generated as described above can be scanned by a scanning device in a scanning process.
  • the scanning process includes sequential detection and analysis of the visual elements embedded in the plurality of views of the machine-readable image to obtain information associated therewith and verification of a successful scanning process at least based on a matching relationship between the information associated with the visual elements, as will be described in detail with reference to Figs. 3-8.
  • the printed medium can be in the form of a card.
  • the printed medium having a machine-readable image printed thereon can be correlated or associated with a certain entity, such as, e.g., a product or accessory thereof for product authentication purposes.
  • the printed medium can be attached to or connected to or assembled on a product or accessory thereof.
  • the printed medium can be attached to the surface of a product or accessory thereof.
  • the printed medium can be connected to a product or accessory thereof by different means, such as, e.g., a stripe.
  • the printed medium can also be unattached to a product or accessory thereof and can be carried, shipped, delivered or used alone without being attached to any product. For example, it can be packed together with a product, e.g., within a package box of the product.
  • an indication that a product with which the printed machine-readable image is associated is genuine can be provided in response to a determination that the scanning of the machine-readable image is successful.
  • the products that the printed medium can be correlated or associated with should be expansively construed to include any kinds of article or substance produced during a manufacturing process, including but not limited to, e.g., all merchandises and goods that are manufactured and traded in market.
  • the accessories of a product can include any subordinate or supplementary parts or items related to a product including one or more of the following: a packaging box, a product label, a product poster, a product advertisement, a sticker of a product, etc.
  • the printed medium can also be used to provide information related to a certain entity other than a product, such as, e.g., a brand, and/or a company, etc.
  • at least one of the plurality of views is embedded with a visual element being a visual code having data encoded therein.
  • one or more views of the plurality of views of the machine-readable image can be embedded with respective visual codes, and the rest of the views can be embedded with respective graphics each having a visual feature.
  • all the plurality of views of the machine-readable image can be embedded with respective visual codes.
  • One such advantage is that, for instance, the present market of such scanning devices and applications, as well as the users that perform the scanning process, are educated in such a way that a visual code is more widely acknowledged and recognized as readable by a scanning device than a pure graphic having visual features.
  • a visual code can have information encoded therein which may not only facilitate the scanning process but also provide necessary product information to the users, as well as secure the machine-readable image such that it will not be easy for others to change or modify such image. Nevertheless, according to some other embodiments, it is still possible that none of the views of the machine-readable image is embedded with a visual code, and all the views of the machine-readable image are embedded with respective graphics each having a visual feature.
  • FIG. 2 schematically illustrating a functional block diagram of a system for scanning a machine-readable image printed on a medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter.
  • a system 200 for scanning a machine-readable image printed on a medium, the medium capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, such as described above with reference to Figs. 1A and 1B.
  • the system 200 can comprise a processing unit 202 that includes a detection module 204, an analysis module 206 and a verification module 208.
  • the processing unit 202 can be, e.g., a digital signal processor (DSP), a microcontroller, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), etc.
  • system 200 can further include an image acquisition module 203 (such as, e.g., a camera of the scanning device) configured to capture or acquire an image of each view of the machine-readable image.
  • the acquired image can be provided to the detection module 204 for detecting the visual element therein, e.g., by processing the acquired image and extracting the visual element therefrom. It is to be noted that the image acquisition module 203 and the detection module 204 can be implemented as separate components, or alternatively, their functionality can be consolidated and integrated as one functional module, such as the detection module 204.
  • the analysis module 206 is configured to analyze the detected visual element in the view to obtain information associated therewith. According to certain embodiments, the detecting process and analyzing process are performed sequentially for each of the plurality of views, giving rise to a plurality of visual elements and respective associated information, as will be described below in further detail with respect to Fig. 3.
  • the verification module 208 is configured to determine whether the scanning process of the machine -readable image is successful at least based on a matching relationship between the information associated with the detected visual elements.
  • the verification module 208 can be configured to determine whether the scanning process of the machine-readable image is complete. According to some embodiments, such verification of a successful scanning process is intended for product authentication purposes. As aforementioned, in some cases, an indication that a product is genuine can be provided in response to a determination that the scanning of the machine-readable image is successful. Further details of system 200 will be described below with respect to Figs. 3-8.
  • the system 200 can further comprise an I/O interface 210, a storage module 212 and a display module 214 operatively coupled to the other functional components described above.
  • the I/O interface 210 can be configured to obtain an acquired image for each view of the machine-readable image, and/or provide a verification indication to a user whether the scanning process is successful.
  • the storage module 212 comprises a non-transitory computer readable storage medium that stores data and enables retrieval of various data for processing unit 202 to process and for display module 214 to display.
  • the storage module 212 can store, for example, the acquired images, detected visual elements, associated information, etc.
  • the display module 214 can display one or more of the following to the user: the frames that the image acquisition module captures, e.g., the plurality of views of the machine-readable images, the detected visual elements, a detection instruction during the scanning process, verification indication whether the scanning process is successful, product information and other relevant information.
  • system 200 can be implemented as a standalone scanning device dedicated for performing such scanning process, or alternatively, the functionality of system 200 can be integrated as a sub-unit or component of a scanning device which is a general purpose computer or electronic device.
  • the functionality of system 200 can be realized by running a computer readable program or application on a general purpose computer hardware including but not limited to smart phones (e.g. iPhone, etc), PDAs, tablet computers (e.g. Apple iPad), personal computers, laptop computers, or any other suitable device.
  • system 200 described here with reference to Fig. 2 can be a distributed device or system, which includes several functional components which reside on different devices and are controlled by a control layer as a virtual entity to perform the operations described herein.
  • the image acquisition module and/or the detection module 204 can reside on a portable scanning device, while the processing unit 202 or part of the components thereof can reside on a remote server for performing the image processing, analyzing and verification.
  • the term "processing unit” should be expansively construed to include a single processor or a plurality of processors which may be distributed locally or remotely.
  • the processing unit and/or storage module can in some cases be cloud-based.
  • system 200 can correspond to some or all of the stages of the methods described with respect to Figs. 3- 8.
  • the methods described with respect to Figs. 3-8 and their possible implementations can be implemented by system 200. It is therefore noted that embodiments discussed in relation to the methods described with respect to Figs. 3-8 can also be implemented, mutatis mutandis as various embodiments of the system 200, and vice versa.
  • FIG. 3 there is illustrated a generalized flowchart of scanning a machine-readable image printed on a medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter.
  • the printed medium (such as, e.g., a lenticular print) is capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein.
  • the term “scanning” or “scanning process” is known in the art of optical machine-readable images, and should be expansively construed to cover a process (or part thereof) of detecting and analyzing, such as decoding or reading, a machine-readable image (including, e.g., a visual code and/or a graphic having a visual feature), and optionally also providing an indication in response to a correct decoding or reading of such an image.
  • the scanning process of the manifold machine-readable image can include the whole process of sequentially detecting and analyzing each of the views and determining whether the scanning is successful. It is to be noted that in some cases, the term scanning can also refer to scanning only one view of the machine-readable image, i.e., detecting and analyzing that view. Accordingly, the interpretation of these terms should not be limited to the definitions above, and they should be given their broadest reasonable interpretation.
  • the visual element embedded therein can be detected (310) (e.g., by the detection module 204 illustrated in Fig. 2). Specifically, starting from a first view and sequentially for each view, a user can aim the scanning device at the view from a certain viewpoint angle, and an image of the view including the visual element can be captured (e.g., by the image acquisition module 203 illustrated in Fig. 2). In order to detect the visual element embedded therein, the captured image can be processed by the detection module 204 and the visual element can be extracted therefrom.
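For the common case in which the visual element in the current view is a QR-like two-dimensional code, this detection step can be sketched with OpenCV's QRCodeDetector as below; treating the element as a QR code is an assumption made for this example, and other element types (e.g. graphics with visual features) would need their own detectors.

```python
import cv2

def detect_code_in_view(frame_bgr):
    """Try to detect and decode a two-dimensional code in the captured frame.
    Returns the decoded payload (e.g. a UID plus detection instruction) or None."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame_bgr)
    if points is None or not data:
        return None            # no visual code found in this view
    return data
```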
  • the visual element can be any of the following: i) a visual code having data encoded therein, and ii) a graphic including a visual feature.
  • at least one of the plurality of views is embedded with a visual element which is a visual code having data encoded therein.
  • the visual code can be a two-dimensional code having an input image embedded therein, as described above with reference to Figs. 9A-9D.
  • certain embodiments of the following description are provided with respect to the visual code being a two-dimensional code. Embodiments are, likewise, applicable to other kinds of visual codes.
  • the detected visual element can be analyzed (320) (e.g., by the analysis module 206 illustrated in Fig. 2) to obtain information associated therewith.
  • the detecting process and analyzing process is performed sequentially for each of the plurality of views, as illustrated in Fig. 3, giving rise to a plurality of visual elements and respective associated information.
  • the analyzing process for all the detected visual elements can be performed together, after all the visual elements are detected sequentially from all the views. It is to be noted that the order of executing the detecting and analyzing processes described herein should not be construed to limit the present disclosure in any way. Other suitable orders can be implemented in lieu of the above.
  • the analysis of previous detected visual elements and the detection of a current visual element can be performed simultaneously, e.g., in a system with multiple processors.
  • the analysis of different visual elements can be performed sequentially or simultaneously.
  • the information associated with each visual element can comprise (but not limited to) one or more of the following: a detection instruction for detecting a next visual element, an identification indicator of the present visual element, information of a product with which the medium is associated, and a URL.
  • the identification indicator can be, for example, a Unique Identifier or Unique Identification Number (UID) for identifying each visual element and for verifying a matching relationship with other visual elements.
  • the associated information can comprise additional information besides the above, which could possibly depend on the type of visual elements and the usage thereof.
  • the detection instruction can include information of a designated next visual element, so as to provide an indication as to which next visual element should be searched for.
  • the term "next visual element” refers to the visual element in a next view that the scanning device is moving to capture and detect after finishing processing the present visual element detected in the present view.
  • the information of the next visual element can be a descriptor representing the visual feature in the graphic.
  • the information of the next visual element can be a pair ID that indicates the UID of the next visual element.
  • the detection instruction can also include an indication of a relative position between the scanning device and the printed medium having the machine-readable image printed thereon (or the product with which the printed medium is associated) for detecting the next visual element.
  • Such indication of a relative position can be, by way of example, a direction indicator that indicates in which direction the scanning device or the printed medium should be moved relative to each other in order to capture the next visual element.
  • the indication of a relative position can also be a shaking indicator that indicates that the scanning device or the printed medium should be shaken in order to capture the next visual element.
  • such indication of a relative position between the scanning device and the printed medium can be provided to the user visually, e.g., as an arrow on a display of the scanning device to facilitate the scanning process.
  • such indication can also be presented on the printed machine-readable image instead of or in addition to the indication that is provided on a display of the scanning device.
  • such indication can also be provided to the user through audio (e.g., a speaker), or vibration, whose functionalities can be assembled as part of the scanning device (e.g., a mobile phone).
  • the scanning device can be equipped with one or more sensors (such as, e.g., an accelerometer, compass, gesture sensor, and a gyroscope, etc) which can provide additional indication regarding the relative position or movement of the scanning device for detecting a next visual element.
  • Such indication can include, for example: providing updated direction instructions to the user, estimating updated visual element position and indicating if a real movement of the scanning device is happening.
  • a verification or a determination can be made (330) (e.g., by the verification module 208 illustrated in Fig. 2) as to whether the scanning process of the machine-readable image is successful, at least based on a matching relationship between the information associated with the detected visual elements.
  • the determination is made at least based on the matching relationship between the information of a designated next visual element included in the information associated with the present visual element and the identification indicator of a next visual element included in the information associated with the next visual element, as will be described in further detail with respect to Figs. 4-8.
  • the determination can also be based on the number of visual elements that is defined as sufficient for determining the scanning process as successful. In one embodiment, it can be determined that the scanning process should be deemed successful if N visual elements (N>2) are successfully detected and analyzed, in addition to the matching relationship as described above.
  • the number N can be a predefined fixed number, or alternatively, it can be included or encoded in the information associated with each visual element. For instance, if the visual element is a visual code such as a two-dimensional code, the number N can be part of the encoded data in the two-dimensional code.
  • the scanning process can also be considered as successful if a certain percentage of the N visual elements (e.g., 85%) are successfully detected and analyzed.
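The sufficiency condition described above can be captured in a few lines; making N and the percentage explicit parameters is an illustrative choice, since either may instead be fixed or encoded in the visual elements themselves.

```python
import math

def enough_elements(matched_count, required_n, min_fraction=1.0):
    """True once the number of successfully detected and analyzed elements reaches
    the required share of N; min_fraction=1.0 requires all N elements, while
    min_fraction=0.85 reproduces the 85% example above."""
    return matched_count >= math.ceil(min_fraction * required_n)
```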
  • If the scanning process is determined as not successful, e.g., either the information associated with the visual elements does not match, or the number of detected visual elements is not adequate (e.g., smaller than the predetermined number N), the scanning process will continue and the scanning device will be instructed to move to other views for a further detection and analyzing process. In some cases, this may include detecting and analyzing visual elements that were already detected and analyzed.
  • a determination process as whether the scanning process of the machine -readable image is complete based on the number of visual elements that is defined as sufficient. If the scanning process is determined as not complete, i.e., the number of detected visual elements is smaller than the predetermined number N, the scanning process will continue and the scanning device will be instructed to move to other views for further detecting and analyzing process, as described above.
  • the number of visual elements that is defined as sufficient is not always necessary for determining whether the scanning process is successful.
  • the scanning process can also be determined as successful if a next visual element to be detected is a repeated visual element that has already been detected previously.
  • the number of visual elements included in the machine-readable image may be larger than the number of visual elements that is defined as sufficient for determining whether the scanning process is successful.
  • the machine-readable image may have in total five visual elements embedded in five different views thereof, but only three of them are required to be scanned in order to determine the scanning process is successful. In such a case, the scanning process as described above does not have to sequentially traverse each of the five views but only needs to scan three of them in order to reach the determination of a successful scanning process.
  • the number of the total visual elements embedded in the machine-readable image can be a predefined fixed number, or alternatively, it can be included or encoded in the information associated with each visual element.
  • Reference is now made to Fig. 4, illustrating a generalized flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having at least a first view and a second view, in accordance with certain embodiments of the presently disclosed subject matter.
  • the plurality of views of the printed machine-readable image can include at least a first view embedded with a first visual element and a second view embedded with a second visual element.
  • the scanning process in this case can include the following steps, as illustrated in Fig. 4:
  • the first visual element embedded in the first view of the machine-readable image can be detected (410) and the first visual element can be analyzed (420) to obtain information associated therewith.
  • the second visual element embedded in the second view of the machine- readable image can be detected (430) and the second visual element can be analyzed (440) to obtain information associated therewith.
  • the determination is made (450) as to whether the scanning of the machine-readable image is successful at least based on a matching relationship between the information associated with the first visual element and the second visual element.
  • the first visual element in the first view can be a first visual code having first data encoded therein
  • the second visual element in the second view can be a graphic including a visual feature, as will be described below with reference to Fig. 5A.
  • Referring to FIG. 5A, there is illustrated a generalized flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having at least a first view embedded with a first visual code having first data encoded therein, and a second view embedded with a graphic including a visual feature, in accordance with certain embodiments of the presently disclosed subject matter.
  • the scanning process in this case is similarly performed as described above with reference to Fig. 4, and is specified to include the following steps.
  • the first visual code embedded in the first view of the machine-readable image can be detected (510), as similarly described above with respect to block 410.
  • the analyzing of the first visual element as described in block 420 can be specified as shown in block 520, including decoding the first visual code to obtain the first data encoded therein.
  • the first data is the information associated with the first visual element, as described above in block 420.
  • the first data can include a detection instruction for detecting the second visual element which is a graphic including a visual feature.
  • the detection instruction can include information of a designated next visual element that should be searched for in the following scanning process, which in this case is a descriptor or image descriptor representing a designated visual feature.
  • The term descriptor or image descriptor is known in the art of computer vision, and the following definition is provided as a non-limiting example only, for convenience purposes.
  • An image descriptor relates to visual features of the contents of an image. It may describe characteristics such as shapes, colors and textures, among other, more complicated properties of the image.
  • the descriptor may relate to a plurality of parts of the image or to the entire image. It should be appreciated that a certain image descriptor can be composed from several image descriptors. It would be appreciated that when referring to an image descriptor throughout the description and claims, the image descriptor can be represented and saved in any known appropriate format.
  • an image descriptor can be represented and stored in a raster graphics format (including GIF, JPG and PNG formats).
  • Another example representation could be a vector representation.
  • Another example representation could be an array of integers, floats or vectors.
  • Another representation could be a byte or bit stream.
  • the detection instruction can further include an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual element, e.g., a direction indicator, as described above with reference to block 320. It is to be noted that in some cases, the detection instruction, such as, e.g., a descriptor representing a designated visual feature, can also be stored in a database instead of being included in the encoded data of the visual code.
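Purely as an illustration of how such a detection instruction could travel inside the encoded data of the first visual code, the following sketch serializes a hypothetical payload as JSON. All field names (`detection_instruction`, `designated_descriptor`, `direction`, `designated_code_id`) are assumptions; the patent does not prescribe a format, and, as noted above, the descriptor may equally be stored in a database and referenced by an identifier.

```python
# Hypothetical layout of the first data decoded from the first visual code.
import json

first_data = {
    "payload": {"url": "https://example.com/product/123"},  # illustrative product link
    "detection_instruction": {
        "designated_descriptor": [0.12, 0.87, 0.44, 0.09],  # toy descriptor vector
        "direction": "right",                                # move device/medium to the right
        "designated_code_id": "CODE-2",                      # id of a further visual code, if any
    },
}

encoded = json.dumps(first_data)   # this string would be what the 2D code encodes
instruction = json.loads(encoded)["detection_instruction"]
print(instruction["direction"], instruction["designated_code_id"])
```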
  • a visual feature included in a graphic embedded in the second view can be detected (530) in accordance with the detection instruction.
  • the scanning device or the printed machine-readable image can be moved towards a facilitating direction in accordance with the direction indicator included in the detection instruction in order to search for the designated next visual element in a second view that resides to that direction of the first view based on the descriptor representing a designated visual feature.
  • One or more frames or images of the second view are captured and the visual feature included therein can be detected.
  • the analyzing of the second visual element as described in block 440 can be specified as shown in block 540, including calculating a descriptor representing the detected visual feature in the graphic.
  • the descriptor is the information associated with the second visual element as described above in block 440.
  • more than one descriptor can be calculated from a certain graphic, of which one descriptor represents the detected visual feature.
  • the scanning process can be determined (550) as successful if the descriptor representing the visual feature of the graphic matches the descriptor representing the designated visual feature.
  • the scanning process can be determined (550) as successful if one of the descriptors calculated from the graphic matches the descriptor representing the designated visual feature.
  • a descriptor is calculated for each of the visual features, and a search is performed to see which descriptor of the graphic matches the descriptor representing the designated visual feature.
  • one or more frames or images of the second view or other views can be captured and visual features included therein can be detected for further analysis and matching verification, in a similar manner as described above.
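A possible concrete reading of the descriptor calculation and matching above, assuming off-the-shelf OpenCV ORB descriptors stand in for the unspecified image descriptor, is sketched below; the matching threshold and the synthetic test graphic are illustrative only, and the designated descriptor would normally come from the decoded first data or a database rather than a second image.

```python
# Minimal sketch: ORB descriptors as a stand-in for the patent's image descriptor.
import cv2
import numpy as np

def orb_descriptors(image):
    orb = cv2.ORB_create(nfeatures=200)
    _, descriptors = orb.detectAndCompute(image, None)
    return descriptors  # None if no keypoints were found

def descriptors_match(query, designated, max_mean_distance=40.0):
    if query is None or designated is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(query, designated)
    if not matches:
        return False
    mean_distance = float(np.mean([m.distance for m in matches]))
    return mean_distance <= max_mean_distance  # threshold is an assumption

# Illustrative graphic with some corner structure (rectangle plus diagonal)
graphic = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(graphic, (40, 40), (160, 160), 255, thickness=6)
cv2.line(graphic, (40, 40), (160, 160), 255, thickness=4)

# Matching a graphic's descriptors against themselves should report a match.
print(descriptors_match(orb_descriptors(graphic), orb_descriptors(graphic)))
```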
  • the machine-readable image can further include a third view embedded with a third visual element, the third visual element being a second visual code having second data encoded therein, as will be described below with reference to Fig. 5B.
  • Referring to FIG. 5B, there is illustrated a generalized flowchart of a continuing scanning process of a machine-readable image printed on a medium, the machine-readable image having a third view embedded with a second visual code having second data encoded therein, in accordance with certain embodiments of the presently disclosed subject matter.
  • the detection instruction included in the first data can comprise: a) a descriptor representing a designated visual feature for searching for the second visual element which is the graphic having a visual feature; and b) an identification indicator for a designated visual code for searching for the third visual element which is the second visual code.
  • the detection instruction can further comprise: c) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual element and the third visual element, e.g., a direction indicator.
  • the detection instruction to detect the third visual element can be included in the information associated with the second visual element instead of the first visual element.
  • the descriptor calculated for the visual feature in the graphic of the second view can be associated with a detection instruction (e.g., the descriptor can serve as a pointer to point to the detection instruction stored in a database), including: a) an identification indicator for a designated visual code, and possibly also b) an indication of a relative position between the scanning device and the machine-readable image for detecting the third visual element, e.g., a direction indicator.
  • the second visual code embedded in the third view of the machine-readable image can be detected (560) in accordance with the above detection instruction.
  • the scanning device or the printed machine-readable image can be moved towards a direction in accordance with a direction indicator included in the detection instruction in order to search for the designated next visual element in a third view that resides to that direction of the second view based on the identification indicator for a designated visual code.
  • One or more frames or images of the third view are captured and the second visual code included therein can be detected.
  • the second visual code can be decoded (570) to obtain the second data encoded therein including an identification indicator of the second visual code.
  • the second data can further include more information as described above, such as a detection instruction, product information, a URL, etc.
  • the scanning of the machine-readable image can be determined (580) as successful if the descriptor representing the visual feature of the graphic matches the descriptor representing the designated visual feature, and if the identification indicator of the second visual code matches the identification indicator for a designated visual code in the detection instruction.
  • the determination process (580) can be implemented in one stage as described above, or alternatively it can be implemented in two separate stages.
  • the verification of the matching relationship between the descriptor representing the visual feature of the graphic and the descriptor representing the designated visual feature can be performed after block 540, e.g., as described in block 550, and the verification of the matching relationship between the identification indicator of the second visual code and the identification indicator for a designated visual code in the detection instruction can be performed in a later stage, such as after block 570, as described in block 580.
  • one or more frames or images of a present view (e.g., the second view or the third view) can be captured, and visual elements included therein can be detected for further analysis and matching verification, in a similar manner as described above.
  • the order of the scanning process of the three views described above is illustrated for exemplified purposes only and should not be construed to limit the present disclosure in any way.
  • the user can start the scanning process from any view of the plurality of views.
  • the second visual code can be first detected and the detection instruction encoded therein can instruct the scanning device to move to capture and detect either the first visual code or the graphic with a visual feature, depending on the specific direction indicator included therein.
  • the determination of whether the scanning of the machine-readable image is successful is irrespective of which visual element is first detected.
  • the first visual element in the first view can be a first visual code having first data encoded therein
  • the second visual element in the second view can be a second visual code having second data encoded therein, as will be described below with reference to Fig.6.
  • Referring to FIG. 6, there is illustrated a generalized flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having a first view embedded with a first visual code having first data encoded therein, and a second view embedded with a second visual code having second data encoded therein, in accordance with certain embodiments of the presently disclosed subject matter.
  • the scanning process in this case starts with blocks 510 and 520 similarly as described above with reference to Fig. 5A.
  • the first visual code embedded in the first view of the machine-readable image can be detected (510), and decoded (520) to obtain the first data encoded therein.
  • the first data can include a detection instruction for detecting the second visual code.
  • the detection instruction can include: a) an identification indicator for a designated second visual code, and possibly also b) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual code, e.g., a direction indicator.
  • the second visual code can be detected (630) in accordance with the above detection instruction.
  • the scanning device or the printed machine-readable image can be moved towards a facilitating direction in accordance with a direction indicator included in the detection instruction in order to search for the designated second visual code in a second view that resides to that direction of the first view, based on the identification indicator for the designated second visual code.
  • One or more frames or images of the second view are captured and the visual code included therein can be detected.
  • the second visual code can be decoded (640) to obtain the second data encoded therein.
  • the second data is the information associated with the second visual code and includes an identification indicator of the second visual code.
  • the scanning of the machine-readable image can be determined (650) as successful if the identification indicator of the second visual code matches the identification indicator for a designated second visual code included in the detection instruction.
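A minimal sketch of this determination, assuming the decoded data of both codes is available as dictionaries with hypothetical field names (`detection_instruction`, `designated_code_id`, `id`):

```python
# Minimal sketch of the Fig. 6 flow: the first code's data names the
# identification indicator of the designated second code; the scan is
# deemed successful when the decoded second code carries that indicator.
def verify_code_chain(first_data: dict, second_data: dict) -> bool:
    designated = first_data.get("detection_instruction", {}).get("designated_code_id")
    return designated is not None and designated == second_data.get("id")

first_data = {"detection_instruction": {"designated_code_id": "CODE-2", "direction": "up"}}
second_data = {"id": "CODE-2", "payload": {"url": "https://example.com"}}
print(verify_code_chain(first_data, second_data))  # True
```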
  • the encoded data embedded in a visual code such as a two-dimensional code can be encrypted for security purposes.
  • A decryption key can be stored in the scanning device, such as, e.g., in a scanning application or software that runs in a mobile device; alternatively, the decryption key can be stored in a database located on a remote server. Different keys can be used for encrypting different encoded data.
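As one possible (non-prescribed) realization of such encryption, the sketch below uses the third-party Python `cryptography` package's Fernet scheme; key generation and storage are simplified for illustration, whereas in practice the key would be provisioned to the scanning application or kept on a remote server as described above.

```python
# Minimal sketch: encrypting the encoded data of a visual code (illustrative only).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: provisioned to the app or a remote server
cipher = Fernet(key)

plaintext = b'{"id": "CODE-1", "designated_next_id": "CODE-2"}'
ciphertext = cipher.encrypt(plaintext)   # this token would be what the 2D code encodes

# At scan time, the application (or a remote server) decrypts the payload.
recovered = Fernet(key).decrypt(ciphertext)
print(recovered == plaintext)  # True
```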
  • Reference is now made to FIG. 7, illustrating a general flowchart of a scanning process of a machine-readable image printed on a printed medium by a scanning device in accordance with certain embodiments of the presently disclosed subject matter.
  • the printed medium (such as, e.g., a lenticular print) is capable of presenting a plurality of views of the machine-readable image when being observed from different viewpoints, each of the views embedded with a respective graphic including a visual feature.
  • the visual feature included in the graphic embedded in the view can be detected (710) (e.g., by the detection module 204 illustrated in Fig. 2).
  • a descriptor can be calculated based on the detected visual feature (720) (e.g., by the analysis module 206 illustrated in Fig. 2).
  • the detecting process and calculating process are performed sequentially for each of the plurality of views, as illustrated in Fig. 7, giving rise to a plurality of descriptors of respective visual features, one for each view.
  • the calculating process for all the detected visual features can be performed together, after all the visual features are detected sequentially from all the views. It is to be noted that the order of executing the detecting and calculating processes described herein should not be construed to limit the present disclosure in any way. Other suitable orders can be implemented in lieu of the above.
  • a verification or a determination can be made (730) (e.g., by the verification module 208 illustrated in Fig. 2) as to whether the scanning process of the machine-readable image is successful based on the plurality of descriptors or information associated therewith, as will be described in detail with reference to Fig. 8 below.
  • Referring to FIG. 8, there is illustrated a general flowchart of a scanning process of a machine-readable image printed on a medium, the machine-readable image having at least a first view embedded with a first graphic including a first visual feature, and a second view embedded with a second graphic including a second visual feature, in accordance with certain embodiments of the presently disclosed subject matter.
  • the first visual feature included in the first graphic embedded in the first view can be detected (810), and a first descriptor can be calculated (820) based on the detected first visual feature.
  • the first descriptor can be associated with a detection instruction including: a) a descriptor of a designated visual feature, and possibly also b) an indication of a relative position between the scanning device and the machine-readable image for detecting the second visual feature, e.g., a direction indicator.
  • the first descriptor can serve as a pointer that points to the detection instruction stored in a database.
  • the second visual feature included in the second graphic in the second view can be detected (830) in accordance with the above detection instruction, and a second descriptor can be calculated (840) based on the detected second visual feature. It should be appreciated that in some cases the detection (830) and the calculation (840) of the descriptor can be done together as one united process.
  • The scanning process can be determined (850) as successful if the second descriptor of the second visual feature matches the descriptor of the designated visual feature.
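To make the pointer role of the first descriptor concrete, the following sketch looks up a detection instruction in an in-memory "database" keyed by stored descriptors, using a simple Euclidean nearest-match; the data layout, tolerance and field names are assumptions rather than the disclosed implementation.

```python
# Minimal sketch: using the first descriptor as a pointer to a detection
# instruction (Fig. 8); the "database" is an in-memory dict for illustration.
import numpy as np

instruction_db = {
    # key: stored descriptor of a known first view; value: detection instruction
    (0.12, 0.87, 0.44, 0.09): {"designated_descriptor": (0.91, 0.05, 0.33, 0.72),
                               "direction": "left"},
}

def lookup_instruction(descriptor, db, tolerance=0.1):
    d = np.asarray(descriptor, dtype=float)
    for stored, instruction in db.items():
        if np.linalg.norm(d - np.asarray(stored)) <= tolerance:
            return instruction
    return None

first_descriptor = (0.11, 0.88, 0.45, 0.08)   # calculated from the first graphic
instr = lookup_instruction(first_descriptor, instruction_db)
print(instr["direction"] if instr else "no matching instruction")  # "left"
```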
  • The scanning processes described above scan a machine-readable image with two or three views; these scanning processes are illustrated for exemplary purposes only and should by no means be construed to limit the present disclosure in any way.
  • a scanning process of a machine-readable image having more than three views, be it views with visual codes or graphics with visual features, can be implemented as a continuous process in a similar manner as described with reference to Figs. 3-8.
  • the scanning processes of a manifold machine-readable image as described above can be used for product authentication and anti-counterfeit purposes.
  • In contrast, when a single standard visual code, such as a two-dimensional code, is used alone, a standard printed two-dimensional code can be very easily photocopied and duplicated. Such copies can be printed out and positioned on a fake product. The fake product may still be deemed genuine due to the copied code, which may still be recognized and decoded by a scanner.
  • Tamper resistance and/or tamper evidence technology can be used in the process of generating or printing the machine-readable image on the medium or when associating the medium with the medium carrier.
  • the medium carrier can be a product package, as aforementioned.
  • tracking and image localization technologies can be used in the scanning process in order to ensure that the plurality of different detected views was actually detected from one machine-readable image.
  • the processing unit can be configured to estimate the relative positions of different detected visual elements and then, during the determination stage, further verify whether the different visual elements are detected from one region, e.g., the region where the manifold machine-readable image is located. If it is determined that different visual elements are not detected from one region, the scanning process is not considered to be successful. By doing so, an attempt to photocopy a plurality of views and print them separately on a paper will probably be verified as a fake scan.
  • the scanning process can further include providing certain indication and information to the user in response to a determination that the scanning of the machine-readable image is successful.
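The single-region verification could, for example, reduce to checking that the estimated positions of all detected visual elements fall within an area no larger than one printed image, as in the sketch below; the position source (tracking/localization) and the size threshold are assumptions made for illustration.

```python
# Minimal sketch: verify that detected visual elements come from one region.
import numpy as np

def detected_from_one_region(element_positions, max_extent=1.0):
    """element_positions: (N, 2) estimated centers of detected elements;
    max_extent: largest span (same units) a single printed image could occupy."""
    pts = np.asarray(element_positions, dtype=float)
    extent = pts.max(axis=0) - pts.min(axis=0)
    return bool(np.all(extent <= max_extent))

genuine = [(0.10, 0.12), (0.15, 0.11), (0.12, 0.14)]       # clustered -> one image
photocopies = [(0.10, 0.12), (2.30, 0.11), (4.05, 3.90)]   # spread out -> suspicious
print(detected_from_one_region(genuine), detected_from_one_region(photocopies))  # True False
```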
  • an indication can be provided to the user that the product with which the printed machine-readable image is associated is genuine.
  • product information or a pointer that points to information located in a remote server or database, such as a URL that links to the product website, can also be provided.
  • In one example, a single visual code, such as a standard QR code, can be used for payment (e.g., posted on a bus), and passengers can use a wallet application in their mobile phone to scan the QR code and pay for the ride.
  • the wallet application enables people to charge their accounts with credits, and the credits can be reduced accordingly in response to a scan of a QR code.
  • Such an automatic payment system is both convenient and efficient; however, such a system presents certain payment-verification problems that are currently unsolved. For instance, in order to avoid paying for the ride, people may take a photo of the QR code and scan it only when the inspector comes onto the bus. This is possible because some QR code scanners or readers allow scanning from a photo stored in the phone. In some other cases, people can even print out a photocopy of the QR code on paper and scan the code from the paper if needed.
  • a printed machine-readable image that has two or more views, each containing or embedding a visual element (such as a visual code, e.g., a QR code, or a graphic with a visual feature), can be used instead of a single standard QR code, as described above.
  • a passenger needs to move his phone in a certain direction and scan the two or more views sequentially in order to achieve a successful scan.
  • a simple photocopy of such a machine-readable image may only catch one of the views and thus cannot lead to a successful scan.
  • the problem of avoiding payment by photocopying the machine-readable image can thus be solved.
  • the scanning device can be further configured to recognize if the scanning is done from a screen (e.g., a digital screen), based on, e.g., noise calculation. Due to the refresh property of a digital screen, noise at certain frequencies is generated.
  • the scanning device can be adapted to detect and calculate the amount of such noise to indicate whether the scanning is done from a screen. If it is determined that the scanning is done from a screen, the machine-readable image can be identified as a fake one.
  • the printed machine-readable image having multiple views as disclosed herein can be used not only for product authentication purposes, but also to authenticate or verify a process, such as the payment process exemplified above.
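One plausible, though not prescribed, way to quantify such refresh-induced noise is to look for strong periodic components in the captured frame, e.g., narrow peaks in the spectrum of row-averaged intensities, as in the sketch below; the `looks_like_screen` name, the threshold and the synthetic test frames are illustrative assumptions only.

```python
# Minimal sketch: flag a capture as coming from a digital screen by detecting
# periodic banding (narrow spectral peaks) in row-averaged intensities.
import numpy as np

def looks_like_screen(gray_frame, peak_ratio_threshold=8.0):
    rows = np.asarray(gray_frame, dtype=float).mean(axis=1)   # average each row
    spectrum = np.abs(np.fft.rfft(rows - rows.mean()))
    if spectrum.size < 3:
        return False
    baseline = np.median(spectrum[1:]) + 1e-9
    return float(spectrum[1:].max() / baseline) > peak_ratio_threshold

# Synthetic example: paper-like noise vs. a frame with added horizontal banding
rng = np.random.default_rng(0)
paper = rng.normal(128, 2, size=(240, 320))
banding = paper + 10 * np.sin(np.arange(240) * 2 * np.pi / 8)[:, None]
print(looks_like_screen(paper), looks_like_screen(banding))  # typically: False True
```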
  • Reference is now made to Fig. 10A, illustrating the printed medium as aforementioned in the form of a card having a machine-readable image embedded therein, in accordance with certain embodiments of the presently disclosed subject matter.
  • the card 1002 can include a printed medium 1004 as shown in Fig. 10A.
  • the printed medium 1004 as described above with reference to Figs. 1A and 1B can include an array of lenses that are capable of presenting a plurality of views of a machine-readable image 1006 when being observed from different viewpoints.
  • the machine-readable image 1006 can be embedded to the array of lenses. In some embodiments, it can be embedded to at least one side of the array of lenses. In some cases, the machine-readable image 1006 can be printed directly on one side of the array of lenses.
  • the one side can be, e.g., the back side of the lenses, which can be a relatively smoother side as compared to the wavy side of the lenses.
  • the printed medium can further comprise a substrate (e.g., a synthetic paper, etc.) attached to one side of the array of lenses.
  • the machine-readable image 1006 can be printed on the substrate and then attached or laminated to the one side of the lenses.
  • the machine-readable image 1006 can be a manifold machine- readable image that is composed of a plurality of views.
  • Each of the plurality of views of the machine-readable image 1006 can have a respective visual element embedded therein.
  • the visual element can be selected from the following: i) a visual code having data encoded therein, as described above in detail with reference to Figs. 9A-9D, and ii) a graphic including a visual feature.
  • at least one of the plurality of views can be embedded with a visual element which is a visual code having data encoded therein. For instance, as shown in Fig. 10A, one of the visual elements embedded therein is a two-dimensional code having a graphic embedded therein.
  • the card can further comprise an area 1007 to provide additional information that is related to an entity such as, e.g., a product, a brand or a company.
  • the additional information can include (but is not limited to) one or more of the following: product information, brand information, a company logo, contact information, and any other related information, etc.
  • the additional information can be provided by, e.g., printing such information on the area 1007 of the card 1002.
  • the printed medium 1004 used herein can include any suitable medium that enables such a multi-view display of the machine-readable image, such as, for example, a lenticular print.
  • the card 1002 can be implemented in different ways. For instance, in some cases the card 1002 comprises only the area of the printed medium 1004. The size of the card 1002 can be the same as or slightly bigger than the area of 1004. In some other cases, the card 1002 can comprise the printed medium 1004 and a base layer or a base substrate that the printed medium can be embedded or attached to.
  • the base layer or base substrate can be made of, by way of example, paper, plastic, or any other material that is suitable for making a card. This is especially useful when the card 1002 includes an extra area for providing additional information related to a product as described above.
  • One example of implementing such a case can be providing a base layer which has the same size as the size of the card 1002, and embedding or attaching the printed medium 1004 to the base layer.
  • the size of the card 1002 (i.e., the size of the base layer) can be noticeably bigger than the area of the printed medium 1004, such that there can be extra area on the card for providing the additional information (such as the area 1007 exemplified in Fig. 10A).
  • One benefit of such implementation is cost saving, since the material for making the printed medium (e.g., lenticular print) is normally more expensive than the material for making the base layer; thus it may be more economical for the manufacturer to have only the machine-readable image printed on the printed medium and have the rest of the information printed on the base layer.
  • the whole card 1002, including the area for providing additional information, can be made of the same material as the printed medium without providing a base layer.
  • the various card implementations described above are illustrated for exemplary purposes only and should not be construed to limit the present disclosure in any way. Any other suitable ways or materials of implementing the card with respect to the printed medium, as can be appreciated by a person skilled in the art, may be implemented in addition to or in lieu of the above.
  • the card as described above can be a standalone card and unattached to a product or the accessories thereof. The card can be carried, shipped, delivered or used alone without being attached to any product.
  • the card can serve as, e.g., a club card, loyalty card, a coupon card or a card for providing information related to a certain entity, such as, e.g., a brand, and/or a product, and/or a company.
  • the card can be packed together with a product, e.g., within a packaging box of the product, in order to provide product related information to a customer.
  • once the customer opens the packaging box and takes out the card, he can scan the card to obtain product information.
  • the card can be correlated or associated with a product. Such correlation or association between the card and the product can be implemented in different ways.
  • the card can be attached to the surface of the product.
  • the card can be connected to the product by different means, such as, e.g., a stripe, as will be described below with respect to Figs. 10B and 10C.
  • the products that the card can be correlated with should be expansively construed to include any kind of article or substance produced during a manufacturing process, including, but not limited to, e.g., all merchandise and goods that are manufactured and traded in the market.
  • Referring to FIG. 10B, there is illustrated a product having a card attached thereto in accordance with certain embodiments of the presently disclosed subject matter.
  • There is shown a product, or in other words a product body 1012, and a card 1002 which is attached to the product body 1012.
  • the card 1002 can be attached to the surface of the product body or accessories thereof.
  • the card 1002 can be either attached directly to the surface of the product body, or alternatively it can be attached to an intermediate layer and then attached to the surface.
  • the accessories of a product can include any subordinate or supplementary parts or items related to a product, including one or more of the following: a packaging box, a product label, a product poster, a product advertisement, a sticker of a product, etc.
  • As described above with reference to Fig. 10A, the card 1002 can include a printed medium 1004 which includes an array of lenses that are capable of presenting a plurality of views of a machine-readable image 1006 when being observed from different viewpoints.
  • the machine-readable image 1006 can be embedded to one side of the array of lenses.
  • Each of the plurality of views of the machine-readable image 1006 can have a respective visual element embedded therein.
  • at least one of the plurality of views can be embedded with a visual element which is a visual code having data encoded therein.
  • Referring to FIG. 10C, there is illustrated a product having a card connected thereto in accordance with certain embodiments of the presently disclosed subject matter.
  • the card 1002 can include a printed medium 1004 which includes an array of lenses that are capable of presenting a plurality of views of a machine-readable image 1006 when being observed from different viewpoints.
  • the machine-readable image 1006 can be embedded to one side of the array of lenses.
  • Each of the plurality of views of the machine-readable image 1006 can have a respective visual element embedded therein.
  • at least one of the plurality of views can be embedded with a visual element which is a visual code having data encoded therein.
  • The system can be implemented, at least partly, as a suitably programmed computer.
  • the presently disclosed subject matter contemplates a computer program being readable by a computer for executing the disclosed method.
  • the presently disclosed subject matter further contemplates a machine-readable memory tangibly embodying a program of instructions executable by the machine for executing the disclosed method.

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Theoretical Computer Science (AREA)
  • Finance (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)
  • Inspection Of Paper Currency And Valuable Securities (AREA)

Abstract

The invention relates to a printed medium on which a machine-readable image is printed, and to a computerized method and system for scanning the machine-readable image by a scanning device, the medium being capable of presenting a plurality of views of the machine-readable image when observed from different viewpoints, each of the plurality of views having a respective visual element embedded therein, at least one of the plurality of views being embedded with a visual element which is a visual code having data encoded therein, the method comprising: i) sequentially, for each of the plurality of views, detecting the visual element embedded therein, and analyzing the visual element to obtain information associated therewith; and ii) determining whether the scanning process is successful at least based on a matching relationship between the information associated with the detected visual elements.
PCT/IL2016/050274 2015-04-02 2016-03-13 Support imprimé sur lequel est imprimée une image lisible par machine, et système et procédé de balayage d'image lisible par machine WO2016157168A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201562142044P 2015-04-02 2015-04-02
US62/142,044 2015-04-02

Publications (2)

Publication Number Publication Date
WO2016157168A2 true WO2016157168A2 (fr) 2016-10-06
WO2016157168A3 WO2016157168A3 (fr) 2016-11-17

Family

ID=55394988

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2016/050274 WO2016157168A2 (fr) 2015-04-02 2016-03-13 Support imprimé sur lequel est imprimée une image lisible par machine, et système et procédé de balayage d'image lisible par machine

Country Status (3)

Country Link
CN (2) CN106056183B (fr)
TW (1) TWM519760U (fr)
WO (1) WO2016157168A2 (fr)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016123136A1 (de) * 2016-11-30 2018-05-30 Bundesdruckerei Gmbh Verfahren zum Herstellen und zum Prüfen eines Sicherheitsdokuments und Sicherheitsdokument
CN110796221A (zh) * 2019-10-18 2020-02-14 周晓明 一种防伪标签的生成方法、验证方法及系统和防伪标签
FR3086427A1 (fr) * 2018-09-21 2020-03-27 Idemia France Code graphique a haute densite pour stocker des donnees
CN112668954A (zh) * 2020-09-03 2021-04-16 浙江万里学院 基于移动终端的物流寄收件信息的获取方法
WO2022104452A1 (fr) * 2020-11-09 2022-05-27 Pleora Technologies Inc. Système et procédé de déploiement de la fonctionnalité d'intelligence artificielle et système et procédé l'utilisant

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106779688A (zh) * 2016-12-22 2017-05-31 家乐宝电子商务有限公司 一种二维码扫码充值方法
US10768904B2 (en) * 2018-10-26 2020-09-08 Fuji Xerox Co., Ltd. System and method for a computational notebook interface

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6206288B1 (en) * 1994-11-21 2001-03-27 Symbol Technologies, Inc. Bar code scanner positioning
US7537170B2 (en) * 2001-08-31 2009-05-26 Digimarc Corporation Machine-readable security features for printed objects
US7264169B2 (en) * 2004-08-02 2007-09-04 Idx, Inc. Coaligned bar codes and validation means
US7575168B2 (en) * 2004-10-01 2009-08-18 Nokia Corporation Methods, devices and computer program products for generating, displaying and capturing a series of images of visually encoded data
US8668137B2 (en) * 2009-07-02 2014-03-11 Barcode Graphics Inc. Barcode systems having multiple viewing angles
US8342406B2 (en) * 2010-09-20 2013-01-01 Research In Motion Limited System and method for data transfer through animated barcodes
CN102157106A (zh) * 2011-05-10 2011-08-17 云南荷乐宾防伪技术有限公司 一种二维码和光学可变图像结合的复合多功能防伪标识
CN103729673B (zh) * 2014-01-28 2017-01-04 苏州大学 一种三维码及其制作方法

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102016123136A1 (de) * 2016-11-30 2018-05-30 Bundesdruckerei Gmbh Verfahren zum Herstellen und zum Prüfen eines Sicherheitsdokuments und Sicherheitsdokument
WO2018099521A1 (fr) 2016-11-30 2018-06-07 Bundesdruckerei Gmbh Procédé de fabrication et de vérification d'un document de sécurité et document de sécurité
FR3086427A1 (fr) * 2018-09-21 2020-03-27 Idemia France Code graphique a haute densite pour stocker des donnees
CN110796221A (zh) * 2019-10-18 2020-02-14 周晓明 一种防伪标签的生成方法、验证方法及系统和防伪标签
CN110796221B (zh) * 2019-10-18 2022-09-02 周晓明 一种防伪标签的生成方法、验证方法及系统和防伪标签
CN112668954A (zh) * 2020-09-03 2021-04-16 浙江万里学院 基于移动终端的物流寄收件信息的获取方法
CN112668954B (zh) * 2020-09-03 2023-09-26 浙江万里学院 基于移动终端的物流寄收件信息的获取方法
WO2022104452A1 (fr) * 2020-11-09 2022-05-27 Pleora Technologies Inc. Système et procédé de déploiement de la fonctionnalité d'intelligence artificielle et système et procédé l'utilisant

Also Published As

Publication number Publication date
TWM519760U (zh) 2016-04-01
WO2016157168A3 (fr) 2016-11-17
CN106056183B (zh) 2018-12-11
CN205068462U (zh) 2016-03-02
CN106056183A (zh) 2016-10-26

Similar Documents

Publication Publication Date Title
WO2016157168A2 (fr) Support imprimé sur lequel est imprimée une image lisible par machine, et système et procédé de balayage d'image lisible par machine
US12026860B2 (en) System and method for detecting the authenticity of products
US10963657B2 (en) Methods and arrangements for identifying objects
US11625551B2 (en) Methods and arrangements for identifying objects
US11763113B2 (en) Methods and arrangements for identifying objects
US11048936B2 (en) IC card for authentication and a method for authenticating the IC card
US20210217129A1 (en) Detection of encoded signals and icons
US9195819B2 (en) Methods and systems for verifying ownership of a physical work or facilitating access to an electronic resource associated with a physical work
US10803272B1 (en) Detection of encoded signals and icons
US11257198B1 (en) Detection of encoded signals and icons
CN106408063B (zh) 打印介质及其生成方法和扫描方法和标签
US11580733B2 (en) Augmented reality content selection and display based on printed objects having security features
US12061442B2 (en) Method for determining authenticity using images that exhibit parallax
EA025922B1 (ru) Способ автоматической аутентификации защищенного документа
KR101379420B1 (ko) 정품 인증용 라벨, 그 라벨의 인증코드 생성 방법, 그 라벨의 인증 방법 및 시스템, 그 라벨을 인증하기 위한 휴대용 단말기, 및 그 라벨의 인증을 위한 컴퓨터 가독성 기록매체
KR20150048334A (ko) 정품 인증용 라벨 및 그 라벨을 이용한 인증방법
WO2016199126A1 (fr) Système et procédé de reconnaissance d'un état d'un produit

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16771526

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16771526

Country of ref document: EP

Kind code of ref document: A2