LU501374B1 - Method for reading a tag based on cholesteric spherical reflectors - Google Patents


Info

Publication number
LU501374B1
Authority
LU
Luxembourg
Prior art keywords
pictures
flash
picture
tag
csr
Prior art date
Application number
LU501374A
Other languages
French (fr)
Inventor
Jan LAGERWALL
Hakam Agha
Original Assignee
Univ Luxembourg
Priority date
Filing date
Publication date
Application filed by Univ Luxembourg filed Critical Univ Luxembourg
Priority to LU501374A priority Critical patent/LU501374B1/en
Priority to PCT/EP2023/051776 priority patent/WO2023148060A1/en
Application granted granted Critical
Publication of LU501374B1 publication Critical patent/LU501374B1/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 Scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10712 Fixed beam scanning
    • G06K7/10722 Photodetector array or CCD scanning
    • G06K7/10732 Light sources
    • G06K7/10762 Relative movement
    • G06K7/14 Sensing using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1439 Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1447 Extracting optical codes from image or text carrying said optical code
    • G06K7/146 Methods for optical code recognition including quality enhancement steps
    • G06K7/1465 Quality enhancement using several successive scans of the optical code
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/80 Recognising image objects characterised by unique random patterns
    • G06V20/95 Pattern authentication; Markers therefor; Forgery detection

Abstract

In a first aspect, the invention proposes a method for reading a machine-readable tag based on Cholesteric Spherical Reflectors (CSRs) using a smart mobile device (10) equipped with a camera (12) and a flash (14). The smart mobile device is used to take a first picture of the tag with the flash turned off and a second picture of the tag with the flash turned on. The first and second pictures are then processed to bring out retroreflections of the CSRs (16). Further aspects of the invention relate to a computer-implemented method for reading a CSR-based machine-readable tag and a corresponding application program (app).

Description

DESCRIPTION
METHOD FOR READING A TAG BASED ON CHOLESTERIC SPHERICAL
REFLECTORS
Background of the Invention
[0001] The invention generally relates to the detection and reading of machine-readable tags comprising cholesteric spherical reflectors (CSRs).
[0002] As counterfeit, grey market and other substandard products are rapidly growing problems across supply chains and markets, there is increasing interest in serialisation and identification solutions for traceability and confirmation of authenticity, both to reduce the health and environmental risks caused by fraudulent products and for brand protection. As smart mobile devices (like smartphones and tablets) are now so ubiquitous across the world that any potential customer can be assumed to possess one, many solutions rely on the smartphone and its embedded technologies for carrying out the authentication/identification. This limits the complexity of the tagging technology used to give the product to be authenticated or traced its ‘fingerprint’. Many solutions rely on QR-codes or similar graphical codes, which are highly visible and aesthetically disturbing: because they are read by standard camera technologies, they offer no means of detection beyond a strong contrast between the code elements and the background, most often black on white. The visible footprint of these codes reduces their usefulness and general applicability, and they are thus typically printed at a small scale on regions of product packaging that are not on immediate display. This limits their use in, e.g., logistics applications. Moreover, since this type of standard graphical code is easy to replicate, the reliability of such solutions can be questioned, and criminal actors are known to engage in duplication or transfer of valid QR-codes onto fraudulent products.
[0003] In contrast to QR-codes and similar labels, a Physical Unclonable Function, PUF (see, e.g., Arppe, R. & Sørensen, T. J., Physical unclonable functions generated through chemical methods for anti-counterfeiting, Nature Reviews Chemistry 1 (2017)), cannot be duplicated, and a tag that functions as a PUF is thus much more secure and reliable. Over the last years, the inventors have developed a new PUF-tag based on Cholesteric Spherical Reflectors (CSRs), which are spheres of polymerized cholesteric liquid crystal exhibiting spherically symmetric selective (Bragg) reflection.
For further information, the interested reader is referred to (a) Schwartz, M. et al., Linking Physical Objects to Their Digital Twins via Fiducial Markers Designed for Invisibility to Humans, Multifunctional Materials 4, 022002 (2021); (b) Geng, Y., Kizhakidathazhath, R. & Lagerwall, J. P. F., Encoding Hidden Information onto Surfaces Using Polymerized Cholesteric Spherical Reflectors, Adv. Funct. Mater. 31, 2100399 (2021); (c) Schwartz, M. et al., Cholesteric Liquid Crystal Shells as Enabling Material for Information-Rich Design and Architecture, Adv. Mater. 30, 1707382 (2018); and (d) Geng, Y. et al., High-fidelity spherical cholesteric liquid crystal Bragg reflectors generating unclonable patterns for secure authentication, Sci. Rep. 6, 26840 (2016). Illuminated with white light, CSRs reflect only a narrow wavelength band and only with one circular polarisation, which can be either right- or left-handed. Because of their spherical shape, they function as retroreflectors, i.e., they reflect light back to any light source that illuminates them, with the selectivity in wavelength and polarisation given by the cholesteric liquid crystal structure. Their PUF characteristics arise because their optical appearance changes dynamically with illumination and viewing conditions, and because it depends sensitively on the exact arrangement of CSRs and on the particular characteristics of each CSR in a sample.
[0004] A new version of CSRs is the “d-CSR”, where the “d” stands for “dye-doped”.
D-CSRs may be specifically designed for the serialization and authentication of mass-produced products, where the CSR-based tag can be printed at any size on the front of the product packaging, so that it can easily be used for logistics and supply-chain tracing. This is possible because the dye mixed into the CSRs gives them the function of a pigment, as in standard printed matter, and they can thus be printed (or deposited using other technologies) into any desired pattern on top of the packaging, without interfering with the design apparent to the human eye. By combining CSRs with dyes giving them the appearance of the (primary) colours of a colour model (e.g., red (R), green (G), blue (B) and black (K)), effectively any colour can be produced. An additive colour generation principle as in display technology and/or a subtractive colour model could be used. This has the advantage that the CSRs can be camouflaged on their target substrate by choosing an appropriate fraction of CSRs of the different (primary) colours to generate the correct background colour in each area. Thereby, a serialization code such as a QR-code or other machine-readable code can be hidden from plain sight, allowing it to be printed across the entire front of the packaging, if desired, without (significantly) impairing the design.
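The camouflage step, choosing fractions of differently dyed CSRs to reproduce a background colour, can be sketched with a toy additive colour model. This is an illustration only, not the patent's method: the function name is hypothetical, least squares is one possible way to fit the fractions, and real pigment mixing is considerably more complex.

```python
import numpy as np

def primary_fractions(target_rgb):
    """Estimate fractions of R-, G-, B- and K-dyed CSRs whose additive mix
    approximates a target background colour (toy sketch).

    Solves target = f_R*R + f_G*G + f_B*B by least squares, with black (K)
    filling the remainder of the deposited fraction.
    """
    # Columns are the RGB values of the pure red, green and blue pigments.
    primaries = np.array([[255.0, 0.0, 0.0],
                          [0.0, 255.0, 0.0],
                          [0.0, 0.0, 255.0]]).T
    f, *_ = np.linalg.lstsq(primaries, np.asarray(target_rgb, dtype=float),
                            rcond=None)
    f = np.clip(f, 0.0, 1.0)
    f_k = max(0.0, 1.0 - float(f.sum()))
    return {"R": float(f[0]), "G": float(f[1]), "B": float(f[2]), "K": f_k}

# A dark teal background needs green, blue and some black.
fractions = primary_fractions([0, 64, 64])
print({k: round(v, 2) for k, v in fractions.items()})  # → {'R': 0.0, 'G': 0.25, 'B': 0.25, 'K': 0.5}
```

A subtractive colour model, as also mentioned above, would replace the linear additive mix with a different (non-linear) mixing rule.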
[0005] While the human eye should ideally not detect the CSRs, a machine that carries out the authentication should. As described hereinafter, the special optical properties of CSRs (including d-CSRs) allow them to be distinguished no matter what the background is, and the equipment required is low-cost. However, it is not part of standard smartphone technology, creating a significant acceptance threshold for CSR-based tagging. To reduce the acceptance threshold, it is highly desirable to find a way to detect a pattern generated by CSRs that does not require any additional hardware beyond what is in a standard smartphone.
Summary of the Invention
[0006] Aspects of the present invention are drawn to addressing the problem of making it easier to read CSR-based patterns, in particular CSR-based machine-readable tags.
[0007] In a first aspect, the present invention proposes a method for reading a CSR-based machine-readable tag using a smart mobile device equipped with a camera and a flash. The smart mobile device is used to take a first picture of the tag with the flash turned off and a second picture of the tag with the flash turned on. The first and second pictures are then processed to bring out retroreflections of the CSR.
[0008] As used herein, the expression “smart mobile device” designates an electronic device capable of connecting to other devices or networks (in particular the Internet) via different wireless protocols such as Bluetooth, Zigbee, NFC, Wi-Fi, LiFi, 5G, etc., and small enough to hold and operate in the hand. Examples of most preferred smart mobile devices include smartphones and tablets (tablet computers) with a touchscreen.
[0009] It shall be appreciated that the proposed method for reading CSR-tags makes use of the retroreflection of CSRs rather than of the circular polarisation of the reflection.
[0010] The method may comprise placing the smart mobile device in an off-normal position with respect to the tag to take the first and second pictures. The off-normal position is preferably at an angle from 30° to 60° from the surface normal across the tag. Such off-normal perspective is preferred because the specular reflection of light flashes emitted by the smart mobile device on the surface on which the tag is applied will in this case be directed away from the mobile device, whereas the retroreflections from the CSRs will be directed back to the smart mobile device.
[0011] The processing of the first and second pictures may comprise subtracting the first picture from the second picture.
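The subtraction step can be sketched in a few lines; signed intermediate arithmetic prevents wrap-around where the no-flash picture happens to be brighter. A minimal sketch with illustrative pixel values; the helper name `subtract_pictures` is not from the patent.

```python
import numpy as np

def subtract_pictures(flash_on, flash_off):
    """Subtract the no-flash picture from the flash picture.

    Hypothetical helper: both arguments are uint8 arrays of identical
    shape; negative differences clip to 0 instead of wrapping around.
    """
    diff = flash_on.astype(np.int16) - flash_off.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# A retroreflecting pixel gains intensity under flash; the background barely changes.
off = np.array([[10, 200]], dtype=np.uint8)   # [background, CSR] without flash
on  = np.array([[12, 255]], dtype=np.uint8)   # flash adds little to background, much to CSR
diff = subtract_pictures(on, off)             # background pixel ≈ 2, CSR pixel = 55
```

The retroreflecting regions thus stand out strongly in the difference image, while the background, lit mainly by the unchanged ambient illumination, largely cancels.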
[0012] When necessary, the processing of the first and second pictures may include image registration of the first and second pictures. Image registration (also called image alignment) may be feature-based or intensity-based; it involves spatially transforming one of the first and second pictures to align it with the other. Feature-based image registration identifies correspondences between features such as points, lines and contours. It will be appreciated that image registration may not be necessary if the smart mobile device is not moved (or not moved much) with respect to the CSR-based tag between the taking of the first and second pictures. The method may therefore include attempting to extract the machine-readable tag without carrying out image registration and making a further attempt with image registration only if the first attempt failed.
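A toy intensity-based registration can be written as a brute-force search over small integer translations, scoring each candidate by image correlation. This is a sketch of the idea only; a production app would more likely use feature-based matching (e.g. keypoint detection plus a fitted homography), and the function name here is illustrative.

```python
import numpy as np

def register_translation(moving, fixed, max_shift=3):
    """Estimate the integer (dy, dx) translation aligning `moving` to `fixed`.

    Brute-force intensity-based registration: try every shift within
    `max_shift` pixels and keep the one maximising the correlation score.
    """
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = float(np.sum(shifted * fixed))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

fixed = np.zeros((8, 8)); fixed[2:4, 2:4] = 1.0
moving = np.roll(np.roll(fixed, 1, axis=0), 2, axis=1)  # copy shifted by (1, 2)
print(register_translation(moving, fixed))  # → (-1, -2)
```

The recovered shift is the inverse of the displacement, so applying it to the second picture realigns it with the first before subtraction.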
[0013] The first and second pictures may be taken by recording a video and using a first and a second frame of the video as the first and the second picture, respectively.
The video may include a slow-motion video, i.e., a video that is recorded at a higher frame rate than the playback frame rate. During the recording of the video, the flash of the smart mobile device is activated at least once to generate the second picture(s).
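Selecting which video frames were taken with the flash on can be done from the frames themselves, for instance by mean brightness. A minimal sketch; the helper name and the brightness ratio are illustrative assumptions, not from the patent.

```python
import numpy as np

def split_flash_frames(frames, ratio=1.5):
    """Split video frames into (no-flash, flash) lists by mean brightness.

    Hypothetical helper: a frame counts as 'flash on' when its mean
    intensity exceeds `ratio` times the darkest frame's mean.
    """
    means = [float(np.mean(f)) for f in frames]
    baseline = min(means)
    no_flash = [f for f, m in zip(frames, means) if m <= ratio * baseline]
    flash = [f for f, m in zip(frames, means) if m > ratio * baseline]
    return no_flash, flash

dark = np.full((4, 4), 20, dtype=np.uint8)     # ambient light only
bright = np.full((4, 4), 90, dtype=np.uint8)   # flash fired during this frame
off_frames, on_frames = split_flash_frames([dark, bright, dark])
print(len(off_frames), len(on_frames))  # → 2 1
```

This avoids having to synchronise frame timestamps with the exact flash timing, which may not be exposed by every camera API.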
[0014] The CSR-based machine-readable tag could comprise a one-dimensional barcode or a two-dimensional barcode, e.g., a QR code, an Aztec Code, a Data Matrix code, a PDF417 code, a DotCode, an EZcode, a ShotCode, a MaxiCode, etc.
[0015] The method may comprise extracting the machine-readable tag from the retroreflections of the CSR.
[0016] A further aspect of the invention relates to a computer-implemented method for reading a CSR-based machine-readable tag with a smart mobile device including a camera and a flash. The method comprises controlling the camera and the flash to take a first picture of the tag with the flash turned off and a second picture of the tag with the flash turned on, and processing the first and second pictures to bring out retroreflections of the CSR.
[0017] The computer-implemented method may include providing visual and/or audio guidance to a user to hold the smart mobile device in an off-normal position with respect to the tag. Additionally, tactile feedback could be given to the user. Optionally, the method may include verifying whether the smart mobile device has reached an off-normal position and continuing to provide the guidance until an off-normal position has been reached. The guidance given to the user may take into account further parameters such as the distance between the smart mobile device and the tag, the ambient illumination, the size of the tag relative to the size of the first and second pictures, etc. The guidance given to the user may be based on readings of further sensors available on the smart mobile device, e.g., an accelerometer, a compass, an inclination sensor, etc.
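One way such guidance could use the accelerometer is sketched below, under the simplifying assumption that the tag lies on a horizontal surface, so that the surface normal is vertical and the viewing angle equals the angle between the device's camera axis and gravity. Function names and guidance strings are illustrative, not from the patent.

```python
import math

def viewing_angle_deg(ax, ay, az):
    """Angle between the device z-axis (camera axis) and gravity, in degrees.

    (ax, ay, az) is the accelerometer's gravity reading in device
    coordinates; with the tag on a horizontal surface this angle equals
    the viewing angle from the surface normal.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(abs(az) / g))

def guidance(angle):
    """Hypothetical user prompt steering toward the 30°-60° window."""
    if angle < 30.0:
        return "tilt the phone more"
    if angle > 60.0:
        return "tilt the phone less"
    return "hold steady"

angle = viewing_angle_deg(0.0, 6.93, 6.93)  # phone tilted 45° from vertical
print(round(angle), guidance(angle))        # → 45 hold steady
```

Distance and tag-size checks would similarly feed into the same guidance loop, using the camera preview rather than the motion sensors.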
[0018] Optionally, the computer-implemented method may further include controlling camera settings, such as aperture, focal length, exposure time, etc. If the smart mobile device includes more than one camera (each comprising an optical system, an image sensor and dedicated software to control both), the computer-implemented method may control one or more of the cameras individually or jointly, depending on the possibilities offered by the API(s) (application programming interface(s)) of the camera(s).
[0019] The off-normal position may be from 30° to 60° from the normal; an angle of 45° may be preferred.
[0020] The processing of the first and second pictures may be carried out entirely onboard the smart mobile device. Alternatively, the processing of the first and second pictures could be outsourced, at least in part, to a remote location (e.g., a data centre).
The processing of the first and second pictures could comprise subtracting the first picture from the second picture.
[0021] As set out above, processing of the first and second pictures may include image registration of the first and second pictures.
[0022] The method may comprise controlling the camera and the flash to record a video during which the first picture is taken as a first frame of the video while the flash is turned off and the second picture is taken as a second frame of the video while the flash is turned on.
[0023] During the recording of the video, the flash may be controlled to emit a series of light flashes whereby a series of first pictures is taken when the light flashes are emitted and a series of second pictures is taken between the light flashes. In this case, processing the first and second pictures to bring out retroreflections of the CSR may include processing the series of first pictures and the series of second pictures.
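Processing such a burst can amount to averaging the per-pair differences, which suppresses noise that a single subtraction would leave behind. A minimal sketch with illustrative values; the helper name is not from the patent, and real processing would also need the registration step discussed above.

```python
import numpy as np

def average_difference(flash_frames, noflash_frames):
    """Average the per-pair (flash - no-flash) differences over a burst.

    Sketch: frames are uint8 arrays of the same shape, paired in order;
    signed intermediates avoid wrap-around, and the mean is clipped back
    to the uint8 range.
    """
    diffs = [f.astype(np.int16) - n.astype(np.int16)
             for f, n in zip(flash_frames, noflash_frames)]
    mean = np.mean(diffs, axis=0)
    return np.clip(mean, 0, 255).astype(np.uint8)

# Two flash/no-flash pairs of a single CSR pixel, with slight ambient drift.
on = [np.array([[60]], dtype=np.uint8), np.array([[64]], dtype=np.uint8)]
off = [np.array([[10]], dtype=np.uint8), np.array([[14]], dtype=np.uint8)]
result = average_difference(on, off)  # each pair differs by 50
```

Other combination rules (e.g. a per-pixel median over the pairs) would serve the same purpose of stabilising the difference image.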
[0024] The video could include a slow-motion video. The camera could, for instance, be controlled to record the video at the maximum frame rate available on the camera (i.e., made available via the camera API).
[0025] The CSR-based machine-readable tag may comprise a two-dimensional barcode.
[0026] The processing of the first and second pictures may comprise extracting the machine-readable tag from the retroreflections of the CSR.
[0027] Yet a further aspect of the invention relates to an app (“application program”), comprising instructions which, when executed on a smart mobile device including a camera and a flash, cause the smart mobile device to carry out the computer-implemented method as set out above.
[0028] In the present document, the verb “to comprise” and the expression “to be comprised of” are used as open transitional phrases meaning “to include” or “to consist at least of”. Unless otherwise implied by context, the use of the singular word form is intended to encompass the plural, except when the cardinal number “one” is used: “one” herein means “exactly one”. Ordinal numbers (“first”, “second”, etc.) are used herein to differentiate between different instances of a generic object; no particular order, importance or hierarchy is intended to be implied by the use of these expressions. Furthermore, when plural instances of an object are referred to by ordinal numbers, this does not necessarily mean that no other instances of that object are present (unless this follows clearly from context). When this description refers to “an embodiment”, “one embodiment”, “embodiments”, etc., this means that the features of those embodiments can be used in the combination explicitly presented but also that the features can be combined across embodiments without departing from the invention, unless it follows from context that features cannot be combined.
Brief Description of the Drawings
[0029] By way of example, preferred, non-limiting embodiments of the invention will now be described in detail with reference to the accompanying drawings, in which:
Fig. 1: is a schematic illustration of a substrate carrying a CSR-based machine-readable tag under ambient illumination;
Fig. 2: is a schematic illustration of the substrate carrying a CSR-based machine-readable tag of Fig. 1 when illuminated by the flash of a mobile device;
Fig. 3: is a schematic illustration of the substrate carrying a CSR-based machine-readable tag of Fig. 1 when illuminated by the flash of a mobile device and under ambient illumination;
Fig. 4: shows (a.) a picture of a substrate with randomly deposited d-CSRs taken without flash, (b.) a picture of the same substrate taken with flash, (c.) the picture obtained by subtraction of the picture of Fig. 4a from the picture of Fig. 4b, and (d.) the picture obtained by conversion of the picture of Fig. 4c into a monochrome image.
Detailed Description of Preferred Embodiments
[0030] According to a preferred embodiment, the proposed solution for detecting and reading CSR-tags makes use of the retroreflection of CSRs rather than of the circular polarisation of the reflection. By illuminating and viewing a substrate with a CSR-based machine-readable tag, such as, e.g., a QR-code (or a similar code), along a direction sufficiently different from the substrate normal (e.g., about 30-60° from the normal), the solution distinguishes the retroreflection, which always travels back along the direction of illumination, from the specular reflection of ordinary surfaces, which travels away from the light source. In a typical embodiment, a user films a substrate with a machine-readable code made using CSRs, illuminated by ambient light, with a standard smartphone (or other smart mobile device) running a dedicated software (app). The user holds the phone as still as possible, such that the camera images the entire area containing the code, and the app guides the user to orient the camera viewing direction at an angle of about 30-60° (e.g. 45°) to the surface normal. The app then starts a brief maximum-speed video recording, during which it ignites the LED torch (the flash) so as to generate a few sequential light flashes. The light from the flashes will be reflected forward by the background surface (specular reflection) and not back to the camera, due to the inclined imaging angle. In contrast, the light reflected by the CSRs is, by virtue of them being retroreflectors, reflected back to the camera. Although scattering from the substrate means that some LED light also reaches the camera, the CSR reflections will be more intense. Moreover, d-CSRs also reflect with a red-shifted colour compared to their appearance under ambient light, since the retroreflection appears at a longer wavelength than the mixed-colour reflection under ambient illumination.
[0031] The app then subtracts from each frame with flash the frame immediately before and/or after it in the video. With a high enough frame rate, the displacement between the frames can be sufficiently small that the background appearance is much reduced: because the ambient illumination is always present, the background appears similar with and without flash. The CSRs, on the other hand, appear with increased contrast, since their strong retroreflection is visible only in a frame with flash. By processing the sequence of subtractions, a final image is produced in which the machine-readable code pattern is clearly detectable. This image is then used for the extraction of the message encoded in the machine-readable tag. The image processing may further include any processing required for identifying the unique PUF character and consequent authentication.
[0032] The principle described above is shown schematically in Figs. 1-3. A substrate 18 carries a coating containing CSRs 16 arranged in a pattern forming a machine-readable tag. Fig. 1 shows how the diffuse ambient light (rays illustrated by dotted lines) makes the entire substrate 18, with and without CSRs 16, equally visible to the camera 12 of a mobile phone 10 filming along an angle away from the substrate normal. Fig. 2 shows the response when no ambient light is present, and the only illumination is the light from the mobile phone torch 14 (rays illustrated by dashed lines with large gaps). The specular reflection from the background surface reflects the rays (dotted lines) away from the mobile phone 10, in contrast to the CSRs 16, which retroreflect (rays illustrated by dashed lines with small gaps) the rays back to the phone 10 and its camera 12. In Fig. 3, both ambient and torch illumination are present. By subtracting an image taken under the condition of Fig. 1, without torch light, from an image taken under the condition of Fig. 3, with torch light, the background specular reflection is removed and only the retroreflection is retained. Since only the CSRs 16 provide retroreflection, this reveals the locations of CSRs 16 and thus produces an image corresponding to the encoded machine-readable code, e.g., a QR-code, even if this is undetectable to the human eye.
[0033] Figure 4 illustrates the principle on the basis of a concrete example. D-CSRs were randomly deposited on a surface. The CSRs were doped with an (orange) dye that was not tuned to the CSR reflection colour (green). A standard smartphone (iPhone XS) without any dedicated software was used for the demonstration. Figure 4a shows a picture taken under the conditions of Fig. 1, with only ambient light. The background has the same orange colour as the d-CSRs, and therefore one cannot distinguish where the d-CSRs are. Figure 4b shows a picture taken under the conditions of Fig. 3, and one may note that parts of the image appear brighter, with a different apparent colour than the background orange. This is where the d-CSRs are, the different appearance being due to the green retroreflection of the torch light from the mobile phone. The picture of Fig. 4a is subtracted from the picture of Fig. 4b in a standard graphics software on a computer, yielding the image in Fig. 4c. The d-CSRs here appear with their green retroreflection colour, but they still appear dark on the black background. To emphasise their locations, the image is converted to monochrome (Fig. 4d), where any colour present above a threshold strength is pictured in white; now the randomly distributed d-CSRs are easy to detect.
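The monochrome-conversion step of Fig. 4d can be sketched as a per-pixel threshold on the strongest colour channel of the subtraction image. The function name and threshold value are illustrative assumptions; only the thresholding idea comes from the description above.

```python
import numpy as np

def to_monochrome(diff_rgb, threshold=40):
    """Binarise a subtraction image: a pixel whose strongest colour channel
    exceeds `threshold` becomes white (255), everything else black (0).

    Mirrors the Fig. 4c → Fig. 4d step; the threshold is illustrative.
    """
    strength = diff_rgb.max(axis=-1)
    return np.where(strength > threshold, 255, 0).astype(np.uint8)

# One dark background pixel and one green retroreflection pixel.
diff = np.array([[[5, 8, 3], [10, 180, 12]]], dtype=np.uint8)
mono = to_monochrome(diff)  # background → 0, CSR pixel → 255
```

The resulting binary image is what a standard barcode decoder would then be given to extract the encoded message.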
[0034] The above-discussed example demonstrates the proof of the principle underlying the invention, which concerns the ability to detect CSRs and the pattern they produce using a regular smartphone without any additional hardware. In a real serialisation application, the d-CSRs will be laid out in the pattern of a machine-readable tag (e.g., a QR-code or the like), allowing the smartphone to decode the information encoded onto the surface, invisible to the naked human eye.
[0035] A key advantage of the invention is that it allows a smart mobile device, such as a standard smartphone, to distinguish CSRs hidden in a surface, such that they are undetectable to the human eye, and remove the background, such that any pattern produced by the CSRs becomes apparent to the mobile device software, which can then process it to decode the information contained in the pattern, e.g., a QR-code.
This allows CSRs to generate serialisation codes that can be made very large and cover the most visible face of product packaging, since they are not visible to the human eye. Nevertheless, a smartphone can read the code without any additional hardware, enabling authentication and supply-chain tracing functions of great value for confirming the authenticity of a product and/or tracing an item through a supply chain.
[0036] It may be noted that the details of the acquired image may not be as rich and sharp as when a dedicated device with opposite circular polarisers is used to detect the CSRs. While only the simplest of the PUF characteristics of CSR tags, primarily related to the physical location of each type of CSR and its reflection colour, can be tested with the presented methods, it is possible to carry out a more complete PUF analysis as a supplement. For such supplementary analysis, dedicated equipment that uses circular polarisers and that can illuminate the sample from different directions may be useful.
[0037] While specific embodiments have been described herein in detail, those skilled in the art will appreciate that various modifications and alternatives to those details could be developed in light of the overall teachings of the disclosure. Accordingly, the particular arrangements disclosed are meant to be illustrative only and not limiting as to the scope of the invention, which is to be given the full breadth of the appended claims and any and all equivalents thereof.

Claims (20)

Claims:
1. A method for reading a CSR-based machine-readable tag, comprising: providing a smart mobile device with a camera and a flash; using the smart mobile device to take a first picture of the tag with the flash turned off; using the smart mobile device to take a second picture of the tag with the flash turned on; and processing the first and second pictures to bring out retroreflections of the CSR.
2. The method as claimed in claim 1, comprising placing the smart mobile device in an off-normal position with respect to the tag to take the first and second pictures, the off-normal position preferably being from 30° to 60° from the normal.
3. The method as claimed in claim 1 or 2, wherein the processing of the first and second pictures comprises subtracting the first picture from the second picture.
4. The method as claimed in any one of claims 1 to 3, wherein the processing of the first and second pictures includes image registration of the first and second pictures.
5. The method as claimed in any one of claims 1 to 4, wherein the first and second pictures are taken by recording a video and wherein the first and second pictures are first and second frames of the video.
6. The method as claimed in claim 5, wherein the video includes a slow-motion video.
7. The method as claimed in any one of claims 1 to 6, wherein the CSR-based machine-readable tag comprises a two-dimensional barcode.
8. The method as claimed in any one of claims 1 to 7, comprising extracting the machine-readable tag from the retroreflections of the CSR.
9. A computer-implemented method for reading a CSR-based machine-readable tag using a smart mobile device including a camera and a flash, comprising: controlling the camera and the flash to take a first picture of the tag with the flash turned off and a second picture of the tag with the flash turned on; processing the first and second pictures to bring out retroreflections of the CSR.
10. The method as claimed in claim 9, comprising providing visual and/or audio guidance to a user to hold the smart mobile device in an off-normal position with respect to the tag.
11. The method as claimed in claim 10, wherein the off-normal position is from 30° to 60° from the normal.
12. The method as claimed in any one of claims 9 to 11, wherein the processing of the first and second pictures comprises subtracting the first picture from the second picture.
13. The method as claimed in any one of claims 9 to 12, wherein the processing of the first and second pictures includes image registration of the first and second pictures.
14. The method as claimed in any one of claims 9 to 13, comprising controlling the camera and the flash to record a video during which the first picture is taken as a first frame of the video while the flash is turned off and the second picture is taken as a second frame of the video while the flash is turned on.
15. The method as claimed in claim 14, wherein, during the recording of the video, the flash is controlled to emit a series of light flashes and a series of first pictures is taken when the light flashes are emitted and a series of second pictures is taken between the light flashes and wherein processing the first and second pictures to bring out retroreflections of the CSR includes processing the series of first pictures and the series of second pictures.
16. The method as claimed in claim 14 or 15, wherein the video includes a slow-motion video.
17. The method as claimed in claim 16, wherein the camera is controlled to record the video at the maximum frame rate available on the camera.
18. The method as claimed in any one of claims 9 to 17, wherein the CSR-based machine-readable tag comprises a two-dimensional barcode.
19. The method as claimed in any one of claims 9 to 18, wherein the processing of the first and second pictures comprises extracting the machine-readable tag from the retroreflections of the CSR.
20. An app, comprising instructions, which when executed on a smart mobile device including a camera and a flash, causes the smart mobile device to carry out the method as claimed in any one of claims 9 to 19.
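The core processing recited in claims 12 and 13 (subtracting the flash-off picture from the flash-on picture so that ambient light cancels and only the CSR retroreflections excited by the flash remain) can be illustrated with a minimal sketch. This is not the patented implementation: the function name, the threshold value, and the toy 4x4 frames are illustrative assumptions, and the image-registration step of claim 13 is assumed to have been done already (in practice a feature- or correlation-based alignment would precede the subtraction).

```python
import numpy as np

def bring_out_retroreflections(pic_off, pic_on, threshold=30):
    """Subtract the flash-off picture from the flash-on picture.

    Ambient illumination appears in both frames and cancels in the
    difference; the CSR retroreflections, which light up only under
    the flash, survive. Assumes both frames are already registered
    (aligned pixel-for-pixel), as in claim 13.
    """
    # Widen the dtype before subtracting to avoid uint8 wrap-around.
    diff = pic_on.astype(np.int16) - pic_off.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    # Keep only strong responses, which correspond to retroreflections.
    return np.where(diff >= threshold, diff, 0)

# Toy example: a flat ambient scene plus one pixel that retroreflects
# the flash (an illustrative stand-in for a CSR in the tag).
ambient = np.full((4, 4), 100, dtype=np.uint8)   # first picture, flash off
flash_on = ambient.copy()                        # second picture, flash on
flash_on[1, 2] = 250                             # retroreflection under flash
mask = bring_out_retroreflections(ambient, flash_on)
print(mask[1, 2], mask[0, 0])                    # spot survives, background cancels
```

For the video variant of claim 15, the same subtraction would be applied to each flash-on/flash-off frame pair and the resulting masks averaged, which suppresses noise that a single pair would leave behind.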
LU501374A 2022-02-01 2022-02-01 Method for reading a tag based on cholesteric spherical reflectors LU501374B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
LU501374A LU501374B1 (en) 2022-02-01 2022-02-01 Method for reading a tag based on cholesteric spherical reflectors
PCT/EP2023/051776 WO2023148060A1 (en) 2022-02-01 2023-01-25 Method for reading a tag based on retroreflectors, e.g., cholesteric spherical reflectors.

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
LU501374A LU501374B1 (en) 2022-02-01 2022-02-01 Method for reading a tag based on cholesteric spherical reflectors

Publications (1)

Publication Number Publication Date
LU501374B1 true LU501374B1 (en) 2023-08-02

Family

ID=80461854

Family Applications (1)

Application Number Title Priority Date Filing Date
LU501374A LU501374B1 (en) 2022-02-01 2022-02-01 Method for reading a tag based on cholesteric spherical reflectors

Country Status (2)

Country Link
LU (1) LU501374B1 (en)
WO (1) WO2023148060A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100200649A1 (en) * 2007-04-24 2010-08-12 Andrea Callegari Method of marking a document or item; method and device for identifying the marked document or item; use of circular polarizing particles
EP3504665A1 (en) * 2016-08-23 2019-07-03 V. L. Engineering, Inc. Reading invisible barcodes and other invisible insignia using physically unmodified smartphone
US20200311365A1 (en) * 2019-03-29 2020-10-01 At&T Intellectual Property I, L.P. Apparatus and method for identifying and authenticating an object

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ARENAS, MONICA, ET AL: "Cholesteric Spherical Reflectors as Physical Unclonable Identifiers in Anti-counterfeiting", CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, ACMPUB27, NEW YORK, NY, USA, 17 August 2021 (2021-08-17), pages 1 - 11, XP058807583, ISBN: 978-1-4503-9681-3, DOI: 10.1145/3465481.3465766 *
ARPPE, R.; SORENSEN, T. J.: "Physical unclonable functions generated through chemical methods for anti-counterfeiting", NATURE REVIEWS CHEMISTRY, vol. 1, 2017
GENG, Y. ET AL.: "High-fidelity spherical cholesteric liquid crystal Bragg reflectors generating unclonable patterns for secure authentication", SCI. REP., vol. 6, 2016, pages 26840, XP002799044, DOI: 10.1038/srep26840
GENG, Y.; KIZHAKIDATHAZHATH, R.; LAGERWALL, J. P. F.: "Encoding Hidden Information onto Surfaces Using Polymerized Cholesteric Spherical Reflectors", ADV. FUNCT. MATER., vol. 31, 2021, pages 2100399
SCHWARTZ, M. ET AL.: "Cholesteric Liquid Crystal Shells as Enabling Material for Information-Rich Design and Architecture", ADV. MATER., vol. 30, 2018, pages 1707382
SCHWARTZ, M. ET AL.: "Linking Physical Objects to Their Digital Twins via Fiducial Markers Designed for Invisibility to Humans", MULTIFUNCTIONAL MATERIALS, vol. 4, 2021, pages 022002

Also Published As

Publication number Publication date
WO2023148060A1 (en) 2023-08-10

Similar Documents

Publication Publication Date Title
US20220050983A1 (en) Systems and methods for Physical Control Verification and Authentication Event Scan Logging
US20120211555A1 (en) Machine-readable symbols
US10074046B2 (en) Machine-readable food packaging film
CN103339642A (en) Machine-readable symbols
US20090316950A1 (en) Object Authentication Using a Programmable Image Acquisition Device
CN205068462U (en) Embedding have machine readable image card, attach product that connects card and product of hitch clevis
JP2016019286A (en) Strengthening of bar code by secondary coding for forgery prevention
US9094595B2 (en) System for authenticating an object
US20210150690A1 (en) Method and system for optical product authentication
US9691208B2 (en) Mechanisms for authenticating the validity of an item
US9123190B2 (en) Method for authenticating an object
US10282648B2 (en) Machine readable visual codes encoding multiple messages
LU501374B1 (en) Method for reading a tag based on cholesteric spherical reflectors
CN108021965A (en) A kind of barcode scanning method of stereoscopic two-dimensional code and stereoscopic two-dimensional code
JP5784813B1 (en) Bar code display device, operation method and program of bar code display device
CA2728338A1 (en) Object authentication using a programmable image acquisition device
CN111316305A (en) System and method for authenticating a consumer product
KR20200060858A (en) RFID Tag Preventing Forgery and Falsification Comprising Photonic Crystal Materials and Method Using there of
US20230056232A1 (en) Optical authentication structure with augmented reality feature
KR102490443B1 (en) Method, system and computer-readable recording medium for processing micro data code based on image information
JP6493974B2 (en) Bar code display device, bar code server device, bar code reading device, operation method thereof, and program
CN109308430A (en) Color bar code is decoded
CN117935669A (en) Anti-counterfeit label and manufacturing and verifying methods and devices thereof
KR101688207B1 (en) Method for providing information
RU2205451C1 (en) Method for authenticating religious relics

Legal Events

Date Code Title Description
FG Patent granted

Effective date: 20230802