WO2012118912A2 - A method for encoding and simultaneously decoding images having multiple color components - Google Patents

Info

Publication number
WO2012118912A2
WO2012118912A2 (PCT/US2012/027175)
Authority
WO
WIPO (PCT)
Prior art keywords
image
elements
pattern
color
latent
Prior art date
Application number
PCT/US2012/027175
Other languages
English (en)
French (fr)
Other versions
WO2012118912A3 (en)
Inventor
Slobodan Cvetkovic
Thomas C. Alasia
Alfred J. Alasia
Cary Quinn
Original Assignee
Graphic Security Systems Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 13/270,738 (US 8,682,025 B2)
Application filed by Graphic Security Systems Corporation
Priority to EP12751950.2A (EP 2681692 A4)
Priority to CN201280021269.7A (CN 103782306 A)
Priority to SG2013065289A (SG 192997 A1)
Priority to AU2012223367A (AU 2012223367 B2)
Priority to MX2013009995A (MX 2013009995 A)
Priority to CA2828807A (CA 2828807 A1)
Publication of WO2012118912A2
Priority to ECSP13012903 (ECSP 13012903 A)
Publication of WO2012118912A3

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09C CIPHERING OR DECIPHERING APPARATUS FOR CRYPTOGRAPHIC OR OTHER PURPOSES INVOLVING THE NEED FOR SECRECY
    • G09C 5/00 Ciphering apparatus or methods not provided for in the preceding groups, e.g. involving the concealment or deformation of graphic data such as designs, written or printed messages
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00838 Preventing unauthorised reproduction
    • H04N 1/0084 Determining the necessity for prevention
    • H04N 1/00843 Determining the necessity for prevention based on recognising a copy prohibited original, e.g. a banknote
    • H04N 1/00846 Determining the necessity for prevention based on recognising a copy prohibited original, e.g. a banknote based on detection of a dedicated indication, e.g. marks or the like
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32203 Spatial or amplitude domain methods
    • H04N 1/32219 Spatial or amplitude domain methods involving changing the position of selected pixels, e.g. word shifting, or involving modulating the size of image components, e.g. of characters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32203 Spatial or amplitude domain methods
    • H04N 1/32229 Spatial or amplitude domain methods with selective or adaptive application of the additional information, e.g. in selected regions of the image
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32203 Spatial or amplitude domain methods
    • H04N 1/32256 Spatial or amplitude domain methods in halftone data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32288 Multiple embedding, e.g. cocktail embedding, or redundant embedding, e.g. repeating the additional information at a plurality of locations in the image
    • H04N 1/32304 Embedding different sets of additional information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32309 Methods relating to embedding, encoding, decoding, detection or retrieval operations in colour image data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/44 Secrecy systems
    • H04N 1/448 Rendering the image unintelligible, e.g. scrambling
    • H04N 1/4493 Subsequently rendering the image intelligible using a co-operating image, mask or the like

Definitions

  • the invention relates generally to the field of counterfeit protection, and more particularly to the field of electronic and printed document protection using encoded images.
  • One approach to the formation of a latent image is to optically encode the image so that, when applied to an object, the image can be viewed through the use of a corresponding decoding device.
  • Such images may be used on virtually any form of printed document including legal documents, identification cards and papers, labels, currency, and stamps. They may also be applied to goods or packaging for goods subject to counterfeiting.
  • Articles to which an encoded image is applied may be authenticated by decoding the encoded image and comparing the decoded image to an expected authentication image.
  • the authentication image may include information specific to the article being authenticated or information relating to a group of similar articles (e.g., products produced by a particular manufacturer or facility). Production and application of encoded images may be controlled so that they cannot easily be duplicated. Further, the encoded image may be configured so that tampering with the information on the document or label is readily apparent.
  • In some existing approaches, the hidden content is revealed as a monochrome image.
  • In other approaches, the hidden content is revealed as an image that has exactly the same colors present at each location of the printed artwork when viewed without the decoding device. While the hidden content may appear brighter or darker compared to the visible content, the image follows the colors present in the visible content.
  • the present disclosure provides a computer-implemented method and system for encoding a latent image into a visible image based on encoding parameters, the latent image having two or more color components that are simultaneously revealed upon placing a decoder over an encoded image.
  • the decoder includes decoding parameters that match the encoding parameters.
  • the method generates a first image associated with a first color component and a second image associated with a second color component, the first image having a first pattern of elements and the second image having a second pattern of elements, each pattern being manipulated based on the corresponding color component of the latent image. A first angle is assigned to the first image and a second angle is assigned to the second image.
  • the first image and second image are aligned by orienting the first pattern of elements and the second pattern of elements according to the first angle and the second angle, respectively.
  • the aligned first image and the aligned second image are superimposed to render the encoded image.
  • the present disclosure further provides a computer-implemented method and system for decoding a composite image having a latent image embedded therein.
  • the decoded latent image includes first and second color separations that are oriented at different angles within the composite image, the first and the second color separations being simultaneously revealed by placing a decoder over the composite image.
  • the method includes determining a first angle associated with the first color separation of the latent image and determining a second angle associated with the second color separation of the latent image.
  • a first color component is assigned to the first color separation based on the determined first angle and a second color component is assigned to the second color separation based on the determined second angle.
  • a decoder is provided to simultaneously display the first color component and the second color component of the latent image in order to present a color composite image.
  • Another aspect of the disclosure provides a multi-layer decoder for decoding a composite image having a latent image embedded therein.
  • the latent image is encoded into the composite image based on a plurality of encoding parameters and includes first and second color separations that are oriented at different angles within the composite image.
  • the first and the second color separations are simultaneously revealed by placing the multi-layer decoder over the composite image.
  • the multi-layer decoder includes a first layer having first elements oriented along a first angle associated with the first color separation of the latent image.
  • a second layer is affixed to the first layer; the second layer includes second elements that are oriented along a second angle associated with the second color separation of the latent image.
  • Yet another aspect of the disclosure provides a single-layer decoder for decoding a composite image having a latent image embedded therein.
  • the latent image is encoded into the composite image based on a plurality of encoding parameters and includes first and second color separations that are oriented at different angles within the composite image.
  • the first and the second color separations are simultaneously revealed by placing the single-layer decoder over the composite image.
  • the single-layer decoder includes first elements that are oriented along a first angle associated with the first color separation of the latent image.
  • Second elements are provided and are oriented along a second angle associated with the second color separation of the latent image.
  • the first elements and the second elements are positioned relative to each other so that the first elements and the second elements simultaneously reveal the first color component and the second color component of the latent image to present a color composite image.
  • Yet another aspect of the disclosure provides a computer-implemented method for encoding two latent images into a visible image based on encoding parameters, the latent images having different content that is associated with two or more color components to generate a rainbow effect that is revealed upon placing a decoder over an encoded image.
  • the decoder includes decoding parameters that match the encoding parameters.
  • the method includes generating a first image associated with a first color component, the first image having a first pattern of elements that are manipulated based on a corresponding color component provided in the latent image.
  • a second image associated with a second color component is generated, the second image having a second pattern of elements that are manipulated based on a corresponding color component provided in the second latent image, the second latent image including different content than the first latent image.
  • the first image and the second image are superimposed to render the encoded image, the encoded image being visually similar to the visible image when viewed with an unaided eye.
  • Figure 1 illustrates an example of encoding a latent image having two color components into a visible image according to an embodiment of the invention.
  • Figure 2 illustrates an overlay of halftone screens according to an embodiment of the invention.
  • Figure 3 illustrates a phase-shifted segment for a halftone image according to an embodiment of the invention.
  • Figures 4A-4C are a schematic representation of component images used to produce a composite image according to an embodiment of the invention.
  • Figures 5A-5B are a schematic representation of component image elements produced in a method of producing a composite image according to an embodiment of the invention.
  • Figure 6 is a schematic representation of component image elements produced in a method of producing a composite image according to an embodiment of the invention.
  • Figure 7 illustrates a composite image produced in a method according to an embodiment of the invention.
  • Figure 8 is a schematic representation of component images used to produce a composite image according to an embodiment of the invention.
  • Figure 9 is a flow diagram of a method of producing a composite image.
  • Figure 10 is an illustration of component images used to produce a composite image according to an embodiment of the invention.
  • Figure 11 is an illustration of a composite image formed from a visible image screened using the composite image of Figure 10 in accordance with a method according to an embodiment of the invention.
  • Figure 12 illustrates component images formed from a visible image and used to produce a composite image using a method according to an embodiment of the invention.
  • Figure 13 illustrates visible and latent component images used to produce a composite image using a method according to an embodiment of the invention.
  • Figure 14 is a schematic representation of the elements of a series of component images used to produce a composite image using a method according to an embodiment of the invention.
  • Figure 15 illustrates a visible image and two latent component images used to produce a composite image using a method according to an embodiment of the invention.
  • Figure 16 illustrates a side view, bottom view and top view for a decoder having two layers according to an embodiment of the invention.
  • Figures 17A-17C illustrate different configurations for a two-layer decoder according to an embodiment of the invention.
  • Figure 18 illustrates an example single-layer decoder that simultaneously decodes latent image color components according to an embodiment of the invention.
  • Figure 19 illustrates an example of a single-layer decoder that decodes frequency-sampled color components according to an embodiment of the invention.
  • Figure 20 illustrates a digital decoder that decodes and simultaneously displays latent images encoded with two or more color separations according to an embodiment of the invention.
  • Figure 21 illustrates a system for encoding and decoding images such that two or more color components of latent images encoded into a composite image are simultaneously displayed according to an embodiment of the invention.
  • Figure 22 illustrates lens element patterns that may be used to view images produced using a method of the invention.
  • the disclosure provides methods of encoding and decoding images having color information.
  • the image (hereinafter “composite image”) may include two or more latent images that are embedded into a visible image.
  • the composite image may include a latent image having two or more color separations embedded into the visible image.
  • the composite image is placed upon articles that are subject to alteration, falsification and counterfeiting.
  • a "latent image” refers to an image that is manipulated and hidden within the visible image.
  • the latent image cannot be discerned from the composite image by a human eye, without the aid of a latent image rendering device ("rendering device”) or a decoding device.
  • One or more latent images may be hidden in the visible image so that the latent image is difficult to discern without a rendering device.
  • the latent image may be visible, but not readable, because latent image content is systematically scrambled within the composite image or otherwise manipulated.
  • This disclosure provides techniques for encoding a latent image having two or more color components into a visible image.
  • This disclosure further provides techniques for encoding two or more latent images generated using different color components, into a visible image.
  • the disclosure further provides techniques for simultaneously decoding two or more color components associated with the latent images.
  • the disclosure provides digital techniques for decoding latent images having color separation information, such as differing orientation angles or other color separation information.
  • latent images may be encoded into visible images using optical cryptography and optical steganography.
  • optical cryptography describes a process by which a latent image is "scrambled” (i.e. made unreadable) until a matching optical decoder is placed over the composite image to descramble the hidden content.
  • the latent images may be encoded into the visible image at selected angles for each of the two or more color components.
  • the halftone latent images associated with the two or more color components may be encoded into the visible image at one or more selected frequencies.
  • the selected frequency may be the same for each of the two or more color components.
  • the selected frequency may be different for each of the two or more color components.
  • phase shifting techniques may be applied to embed the latent image into the visible image.
  • halftone segments or line gratings may be phase shifted to account for density patterns of the latent image at a particular location.
  • the rendering device may include elements that are configured to correspond to the encoding parameters.
  • halftone encoding parameters may include the selected encoding angles and encoding frequencies.
  • the rendering device may further provide color depths of the latent image by decoding the density patterns of the latent image color components. Thus, the latent image becomes visible when the rendering device is placed over the composite image.
  • techniques other than phase shifting may be provided to encode the latent images.
  • Figure 1 illustrates an example for encoding a latent image into a visible image using two color components.
  • Visible image 110 includes a first color component 112 and a second color component 114.
  • Latent image 116 includes a heart shaped item 117 generated using a first color component.
  • Latent image 118 includes a star shaped item 119 generated using a second color component.
  • the resulting composite image 120 includes the latent images 116 and the latent image 118 embedded into the visible image 110.
  • Decoded image 122 is revealed upon decoding the resulting composite image 120. As discussed below, decoded image 122 shows both the heart shaped item 117 from latent image 116 and the star shaped item 119 from latent image 118, displayed simultaneously in their corresponding color components.
  • the color latent image 116 and the color latent image 118 are encoded separately into the corresponding color component of the visible image 110.
  • Encoding for the first color component may be performed by shifting the first color component halftone image by half of the halftone frequency at sections where there is content inside the first color component of the latent image.
  • the first screen frequency may be 200 lines per inch and the first screen may be oriented at 75 degrees from a horizontal axis.
  • the encoding of the second color component may be performed by shifting the halftone image at sections where there is content inside the second color component of the latent image.
  • the second screen frequency may be 200 lines per inch and the second screen may be oriented at 15 degrees from the horizontal axis.
  • the color latent image may be decoded using a two-layer rendering device having a first layer that matches halftone parameters of the first screen and a second layer that matches halftone parameters of the second screen.
  • the first screen may correspond to cyan and the second screen may correspond to magenta.
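The two-screen scheme described above (for example, a first screen at 75 degrees and a second at 15 degrees, each phase-shifted by half a period wherever its latent color component has content) can be sketched with a minimal numpy model. This is an illustration, not the patent's implementation: the array size, frequency, and function names are assumptions, and a continuous sine grating stands in for a printed halftone screen.

```python
import numpy as np

def encode_component(latent_mask, size=200, freq=10, angle_deg=75.0):
    """Sketch of phase-shift encoding for one color component.

    latent_mask: boolean array, True where this latent color component
    has content. A line grating oriented at `angle_deg` is shifted by
    half a period (a pi phase shift) wherever the mask is set, which is
    invisible to the unaided eye but detectable with a matching decoder.
    """
    y, x = np.mgrid[0:size, 0:size]
    theta = np.deg2rad(angle_deg)
    # Coordinate along the grating's normal direction.
    u = x * np.cos(theta) + y * np.sin(theta)
    # Half-period phase shift where the latent component has content.
    phase = np.where(latent_mask, np.pi, 0.0)
    return 0.5 * (1 + np.sin(2 * np.pi * freq * u / size + phase))

# A latent square hidden in each component, at the two screen angles.
mask = np.zeros((200, 200), dtype=bool)
mask[80:120, 80:120] = True
cyan = encode_component(mask, angle_deg=75.0)     # first screen
magenta = encode_component(mask, angle_deg=15.0)  # second screen
```

Superimposing `cyan` and `magenta` would correspond to rendering the composite; each screen is only revealed by a decoder layer matching its own angle and frequency.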
  • Figure 2 illustrates an overlay of halftone screens 200.
  • the overlay 200 depicts an equal amount of manipulation of the halftone screens 202, 204 at all points having content of the color components of the latent images.
  • the decoded latent images will therefore have a uniform color intensity level at all positions. This reduces the depth of each color component to two levels.
  • a variation of this method may be performed to encode the color latent images with more tonal levels for each color component.
  • the quality of the decoded color latent image may be improved by preserving the color depth of the hidden latent image. In other words, rather than providing a limited number of phase shifts, such as a full phase shift or no phase shift at a given spot in the decoded latent image, the color component information may be preserved with finer granularity during the encoding process.
  • Figure 3 illustrates an example of phase shifting a segment 301 for a halftone image to preserve color components using three phase shifts during the encoding process.
  • phase shifts may be used to represent color density values between 0-100%. Shift areas are shown, including a partial shift area 310 for 25% color density and a full shift area 315 for 100% color density at sections where there is content inside a corresponding color component of the latent image.
  • the segment 301 is moved into a selected area 302 located adjacent to the segment 301.
  • a no shift area 305 is shown for a latent image section that does not include content.
  • an amount of segment shifting may be commensurate to a density value of the latent image at the given spot. For example, if the density value is 100%, a maximum shift of the encoding segment (usually half of the decoder period) may be applied; if the density value is 50%, the segment can be shifted by 50% of the maximum possible shift; if the density value is 25%, the segment can be shifted by 25% of the maximum possible shift; and so forth.
  • the shifting value may be any increment, such as 10%, 1%, 0.1%, 0.01%, or the like.
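The proportional-shift rule above reduces to simple arithmetic; the sketch below makes it concrete (the function name and period value are illustrative assumptions, not terms from the patent):

```python
def shift_amount(density_pct, period):
    """Shift commensurate with latent density at a given spot:
    100% density gives the maximum shift (half the decoder period),
    50% gives half of that, and so on down to no shift at 0%."""
    max_shift = period / 2.0  # maximum shift: half the decoder period
    return (density_pct / 100.0) * max_shift

period = 10.0  # decoder period, arbitrary units (assumed for illustration)
print(shift_amount(100, period))  # -> 5.0 (full shift)
print(shift_amount(50, period))   # -> 2.5
print(shift_amount(25, period))   # -> 1.25
```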
  • An optical decoder will show areas with different amounts of shifting as having different densities, thus giving a color depth to the decoded latent image.
  • the decoded latent image may appear with improved quality if this method is used with a monochrome latent image. This is because the decoded image may be shown with multiple brightness levels instead of as a binary image.
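The claim that differently shifted areas decode to different densities can be checked numerically: multiplying an encoded grating by a matching decoder screen and averaging (a crude stand-in for the eye's low-pass filtering) yields distinct mean densities for shifted and unshifted regions. The parameters below are assumed for illustration only.

```python
import numpy as np

size, freq, angle = 256, 16, 75.0  # illustrative parameters
y, x = np.mgrid[0:size, 0:size]
u = x * np.cos(np.deg2rad(angle)) + y * np.sin(np.deg2rad(angle))
arg = 2 * np.pi * freq * u / size

in_phase = 0.5 * (1 + np.sin(arg))  # region with no phase shift
shifted = 0.5 * (1 - np.sin(arg))   # region shifted by half a period
decoder = 0.5 * (1 + np.sin(arg))   # matching decoder screen

# Mean of the product approximates perceived density under the decoder:
# in-phase regions decode light, half-period-shifted regions decode dark.
light = (in_phase * decoder).mean()  # approx. 0.375
dark = (shifted * decoder).mean()    # approx. 0.125
```

Intermediate shift amounts produce mean densities between these extremes, which is what gives the decoded image its tonal depth.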
  • the above described concepts also apply to the scrambling examples described below.
  • optical steganography describes a process where a plain or cryptographically modified image is used to reform the visible image by applying different transformations. For example, the segment line grating associated with the color components may be shifted to match a pattern of the rendering elements provided on the rendering device. The latent image remains invisible to the unaided eye until decoded by placing a matched rendering device over the visible image.
  • Various techniques may be used to encode and decode latent images and visible images.
  • encoded latent images may be optically decoded using a software decoder or a rendering device, such as a physical lens.
  • the rendering device may include elements arranged in linear and non-linear patterns.
  • the latent images or latent image color components may be encoded and decoded using a segment frequency matching a segment frequency of the rendering device or software decoder. For example, the latent image segments are distorted in selected areas to hide the latent image when viewed with the unaided eye.
  • Encoded latent images may be produced in digital form as described in U.S. Patent Application 13/270,738, filed on October 11, 2011, or in U.S. Pat. No. 5,708,717, issued January 13, 1998, the contents of both of which are incorporated herein by reference in their entirety. Encoded latent images may be produced in analog form using specialized photographic equipment as disclosed in U.S. Pat. No. 3,937,565, the content of which is incorporated herein by reference in its entirety.
  • the encoded latent image is embedded within visible images, such as photographs, tint-like images, documents, or the like, to form composite images.
  • the composite image may be printed on, affixed or associated with articles, including documents, identification cards, passports, labels, products, product packaging, currency, stamps, holograms, or the like.
  • the encoded composite images may be produced using visible inks, invisible inks, special inks, toner, dye, pigment, varnish, a transmittent print medium, or the like.
  • the encoded composite images may be embossed, debossed, molded, laser etched, engraved, or the like, and affixed to articles.
  • the composite image serves to authenticate articles and promotes anti-counterfeiting efforts.
  • This disclosure describes various techniques for encoding multiple latent images or a latent image having two or more color components into a corresponding visible image.
  • Techniques described by the same assignee for encoding latent images into a visible image include (1) combining multiple components to create a composite image, such as described in U.S. Patent Application 13/270,738, filed on October 11, 2011, which is hereby incorporated by reference in its entirety; (2) using cryptographic and steganographic methods, such as described in U.S. Patent No. 7,796,753, issued September 14, 2010, U.S. Patent No. 7,466,876, issued December 16, 2008, U.S. Patent No. 6,859,534, issued February 22, 2005, and U.S. Patent No. 5,708,717, issued January 13, 1998, which are hereby incorporated by reference in their entirety; and (3) using steganographic methods, such as described in U.S. Provisional Application
  • an image to be encoded or scrambled is broken into image portions or component images that may include tonal complements of one another, for example.
  • the tonal component images may be balanced around a selected feature, such as a color shade or other feature.
  • the component images are sampled based on a selected parameter, such as frequency, and the sampled portions may be configured to provide a composite image, which appears to the unaided eye to be a single tone image, for example.
  • the single tone may be the selected color shade.
  • the samples may be arranged according to a parameter defined in a corresponding decoder or rendering device that may be used to view the encoded, scrambled or latent image.
  • image portions may be extracted from at least two different images.
  • the different images each may contribute image portions that are encoded or scrambled to render the composite image.
  • the image portions from at least two different images may be encoded or scrambled together to form a single composite image that can be decoded to reveal one or more hidden images.
  • one or more latent images can be "hidden" within a visible image by constructing a composite image as described herein.
  • the composite image may be transformed to a visible image using rendering technology such as halftone screens, stochastic methods, dithering methods, or the like.
  • one or more latent images may be hidden within the visible image by creating a composite image that is derived from samples of component images obtained from the visible image.
  • the composite image is created by obtaining a complementary inverse image portion for each corresponding image portion.
  • the image portion and the complementary inverse image portion may be patterned in pairs according to a parameter, such as frequency, and multiple pairs may be positioned adjacent to each other to render the composite image.
  • encoding is performed by overlaying the latent image onto the visible image to identify visible image content areas that correspond to latent image content areas. At these identified visible image content areas, the inverse image content and the corresponding image content are exchanged or swapped.
  • the encoded composite image is obtained by applying this technique to each of the image portion and the complementary inverse image portion pairs over the composite image. This technique enables images to be encoded without dividing the latent image into color separations.
  • this disclosure supports encoding images by modifying a single parameter of the composite image, such as a tone parameter.
  • the composite image may be rendered using halftone screens, for example. Since the latent image is encoded using a desired frequency for the image portion and the complementary inverse image portion pairs, the halftone screens may be printed at a halftone frequency that is larger than the desired frequency of those pairs. For example, the halftone frequency may be at least two times larger than the desired frequency of the pairs. Furthermore, the halftone screen angles may be selected to avoid Moiré effects, such as between the printing screen and the encoding element. One of ordinary skill in the art will appreciate that various larger multiples of the halftone frequency may be used without causing interference to the composite image.
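The frequency rule above can be sketched numerically. A minimal, hypothetical helper (the function name and the 75 lpi figure are illustrative, not from the disclosure):

```python
def halftone_frequency(pair_lpi, multiple=2):
    """Pick a halftone screen frequency (lines per inch) that is at least
    `multiple` times the frequency of the image-portion / inverse-portion
    pairs, per the "at least two times larger" guidance above."""
    if multiple < 2:
        raise ValueError("multiple should be at least 2")
    return pair_lpi * multiple

# e.g. encoding pairs at 75 lpi -> print the halftone at 150 lpi or more
screen_lpi = halftone_frequency(75)
```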
  • One example for generating the composite image includes digitally encoding a color latent image using an image portion and the complementary inverse image portion to generate an encoded tint image.
  • Another example for generating the composite image includes digitally encoding a darkened and brightened version of the color latent image.
  • An alternative technique for blending colors into the composite image includes transforming a color image into a color space that separates the image into intensity and color components, such as Lab, Yuv, or HSI color space. Other color spaces may be used.
  • the composite image may be printed using standard printing techniques, such as halftone screen printing, stochastic screen printing, and dither printing, among other printing techniques.
  • the halftone frequency may be set to a frequency that is larger than the frequency of the rendering device or decoder.
  • the halftone frequency may be at least two times larger than the frequency of the rendering device or decoder.
  • an encoded image is provided in the form of a composite image constructed from multiple component images.
  • This disclosure provides methods of using a multiple component approach for hiding information into the composite image.
  • the use of component images takes advantage of the fact that the human eye is unable to discern tiny details in an encoded image.
  • the encoded image is usually a printed or otherwise displayed image.
  • the human eye tends to merge together fine details of the printed or displayed image.
  • printers are designed to take advantage of this human tendency.
  • Printers produce multitudes of tiny dots or other structures on a printing medium, such as a substrate, paper, plastic, or the like.
  • the size of individual printed dots can be as small as thousandths of an inch, and such dots are not perceived by unaided human vision.
  • the human eye averages the dot patterns to create a color shade.
  • the dot size or the dot density, for example, will determine the perceived color shade. If the printed dot sizes are bigger, or if the dots are printed closer together, the eye will perceive a darker shade. If the printed dot sizes are smaller, or if the dots are printed further apart, the eye will perceive a lighter shade.
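The averaging effect above can be illustrated with a small sketch. The ink-coverage model below (tone as fraction of paper covered by ink) is a simplification introduced for illustration only:

```python
import math

def perceived_shade(dot_diameter, dots_per_inch):
    """Approximate the tone the eye averages from a uniform dot grid as the
    fraction of paper covered by ink (0 = white, 1 = solid)."""
    cell_side = 1.0 / dots_per_inch              # square "owned" by one dot
    dot_area = math.pi * (dot_diameter / 2) ** 2
    return min(dot_area / cell_side ** 2, 1.0)

# Bigger dots at the same spacing are perceived as a darker shade
lighter = perceived_shade(0.003, 100)
darker = perceived_shade(0.006, 100)
```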
  • the latent image can be broken into tonally balanced component images.
  • the terms tonal value and tonal shade mean either an intensity value or a color value.
  • Figure 4A shows first and second component images defining a latent image.
  • a solid background 410 is provided with a first tonal shade that surrounds an area 420.
  • a second tonal shade is provided to define the latent image depicted by the block letters "USA".
  • the tonal values are reversed in comparison to component image 1.
  • the second tonal shade covers a background area 410' and the area 420' defining the block letters "USA" includes the first tonal shade.
  • the first and second tonal shades are balanced around a single shade so that if the component images are combined, the naked eye will perceive only the single shade and the block letters "USA" may not be discernible.
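The balancing of two tonal shades around a single shade can be sketched numerically (0..1 tint scale; the function name is illustrative, not from the disclosure):

```python
def balanced_pair(target, contrast):
    """Return two tonal shades, equally spaced below and above `target`,
    so that equal areas of the two average back to `target`."""
    lighter = target - contrast / 2
    darker = target + contrast / 2
    assert 0.0 <= lighter and darker <= 1.0, "contrast too large for target"
    return lighter, darker

# Shades 0.4 and 0.6 balanced around a 0.5 (50%) tint
lighter, darker = balanced_pair(0.5, 0.2)
```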
  • Each component image may be referred to as a "phase" of the original image.
  • each phase can be divided into small elements according to a pattern corresponding to a pattern of the decoder or rendering device.
  • the rendering device pattern may be defined by lens elements.
  • the lens elements may be linear elements (straight or curved) or segments that correspond to the lens elements of a lenticular lens, for example.
  • the lens elements may be formed in a matrix of two dimensional elements corresponding to a multiple-element lens, such as a fly's eye lens.
  • the component images are divided into an array of square elements 430, 430'.
  • the square elements 430, 430' may correspond in size and position to the elements of a fly's eye lens, for example.
  • the component element pattern may include a frequency that corresponds to the frequency (or one of the element frequencies) of the lens elements.
  • the component element pattern may have the same frequency (or frequencies for a multi-dimensional pattern) as the lens element frequency (or frequencies).
  • the component element pattern may have a frequency that is a multiple of the lens element frequency.
  • the elements 430, 430' corresponding to the component image 1 and the component image 2 may be systematically divided into subelements 432, 432'. Samples may be taken from the subelements 432, 432' and may be combined to form a composite image 440 that has an average tone that matches that of the shade around which the component image 1 and the component image 2 are balanced. As illustrated in Figure 4C, the elements and subelements are so large that the latent image is readily apparent. It will be understood, however, that if the elements of the composite images are sufficiently small, the human eye will merge the elements together so that only a single uniform color shade is perceived.
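A minimal sketch of this sampling, assuming two components divided into square subelements taken in checkerboard fashion (pure-Python lists of tint values stand in for images; the function is illustrative):

```python
def compose(comp1, comp2, sub):
    """Form a composite by taking `sub` x `sub` subelements alternately
    from two equally sized component images (lists of rows of tints)."""
    h, w = len(comp1), len(comp1[0])
    return [[comp1[y][x] if ((y // sub) + (x // sub)) % 2 == 0 else comp2[y][x]
             for x in range(w)] for y in range(h)]

# Components balanced around a 0.5 tint yield a composite averaging 0.5
a = [[0.4] * 4 for _ in range(4)]
b = [[0.6] * 4 for _ in range(4)]
c = compose(a, b, 2)
```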
  • the composite image may appear not to include content.
  • the latent images become visible when a decoder or rendering device is positioned over the composite image such that features of the decoder include a frequency, a shape and a geometrical structure that correspond to the pattern of the subelements 432, 432'.
  • the latent images are decoded when the decoder or rendering device is properly oriented on the composite image 440.
  • the decoder features are configured to separately extract portions of the composite image contributed by each of the component image 1 and the component image 2. This allows the latent image to be viewed by a human observer looking through the decoder.
  • the decoder features may include magnifying properties and the particular component image viewed by the observer may change depending on an angle of view through the decoder. For example, from a first angle of view, the viewer may see an image having light background with a dark inset. From a second angle of view, the viewer may see an inverse image having dark background with a light inset.
  • the example component images illustrated in Figures 4A-4C include two color shades. It will be understood, however, that the number of color shades is unlimited. According to one example for producing a single apparent tonal value in the composite image, the various color shades provided in the two component images may be balanced around the single tonal value. Alternatively, the component images may be balanced around multiple tonal values, in which case, the resulting composite image will have multiple apparent tonal values.
  • the composite image may be designed to work with individual lenses, such as fly's eye lenses, arranged in an array, such as a square or rectangular grid.
  • the lens features may be formed in virtually any pattern including a symmetric pattern, an asymmetric pattern, a regularly spaced pattern, or an irregularly spaced pattern.
  • the lens feature may be adapted to any shape.
  • the size of the composite image elements may be determined by the feature sizes of the decoding lens.
  • the sampling frequency of the component images may be calculated to be a multiple of the feature frequency of the decoder. For example, the sampling frequency of the component image may be equal to, twice, or three times the feature frequency of the lens.
  • Figures 5A and 5B illustrate an approach to collecting and ordering portions of the component images 500, 500' to form elements of the composite image 500".
  • the component images 500, 500' may be constructed using tonal values that are balanced around one or more selected tonal values.
  • the balanced values may be used to define a latent image.
  • the component images 500, 500' are divided into elements 530, 530' each having a 2x2 pattern of subelements 532, 532'.
  • the pattern is similar to the pattern used in the example of Figure 4C. It will be understood that while only a single exemplary element 530, 530' is shown for each component 500, 500', the disclosure supports dividing the entire composite image into a grid of such elements. As illustrated in Figure 5A, diagonally opposed subelements A1 and A2 are taken from each element (or cell) 530 of the first component image 500. Similarly, the diagonally opposed subelements B1 and B2 are taken from the corresponding element 530' of the second component image 500'.
  • the Bl and B2 portions may be selected so that they differ in exact location from the Al and A2 portions, as shown in Figure 5A. Alternatively, the B portions may be taken from the same locations as the A portions as shown in Figure 5B. In either case, the selected portions are then used to construct a composite image 500".
  • the subelements A1, A2, B1 and B2 all may be placed in the corresponding element 530" of the composite image 500" in the exact locations from which they were taken.
  • the B subelements may be positioned in a slightly different location in the composite image from where they were taken in order to fill out the element 530". In both examples, however, the four subelements A1, A2, B1 and B2 are all taken from the same cell location to assure that the corresponding cell 530" in the composite image 500" will have the same apparent tonal value in either case.
  • the subelements 532, 532' may be shapes other than a square shape.
  • the subelements 532, 532' may include, but are not limited to, any polygon, circle, semicircle, ellipse, and combinations or portions thereof.
  • the component elements 530, 530' could be divided into two or four triangles, for example.
  • the component elements 530, 530' also may be formed as two rectangles that make up a square element.
  • the component elements (or portions thereof) can be sized and shaped to correspond to the shape of the decoder features. Any combination of subelement shapes can be used that, when combined, form the corresponding element shape.
  • the disclosure contemplates mixing different shapes, as long as the desired tonal balance is maintained. Different sized subelements may also be used within a composite image. Even if the total areas belonging to each of the image components are not equal, any disparity can be compensated by using a darker or lighter tone for one of the image components.
  • using a first image area at 50% having a 60% density associated with the first component and a second image area at 50% having a 40% density associated with the second component will give a 50% overall tint.
  • using a first image area at 75% having a 60% density associated with the first component and using a second image area at 25% having a 20% density associated with the second component will also be perceived as 50% overall tint density.
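Both examples above follow from an area-weighted average of the component densities, which can be checked directly (illustrative sketch):

```python
def overall_tint(parts):
    """Perceived tint of a composite as the area-weighted average of
    component densities; `parts` is a list of (area, density) pairs
    whose areas sum to 1.0."""
    assert abs(sum(a for a, _ in parts) - 1.0) < 1e-9, "areas must sum to 1"
    return sum(a * d for a, d in parts)

equal_areas = overall_tint([(0.50, 0.60), (0.50, 0.40)])    # ~0.50
unequal_areas = overall_tint([(0.75, 0.60), (0.25, 0.20)])  # ~0.50
```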
  • Another approach includes using a different number of subelements from different components.
  • two subelements can be taken from the first component and four subelements can be taken from the second component, as long as the tonal balance is maintained.
  • half of each component image is used to form the composite image.
  • Figure 6 illustrates an embodiment that produces a scrambling effect in the composite image.
  • overlapping sample portions are taken from the component images and the sample portions are reduced in size so as to form non-overlapping pieces or subelements of a composite image.
  • the difference in sizes between the portions of the component image and the subelements of the composite image may be referred to as a zoom factor or subelement reduction factor.
  • for a zoom factor of three, for example, the size of the portions of the component images would be three times larger than the size of the subelements of the composite image.
  • the portions of the component images are reduced in size three times before being inserted into the composite image.
  • Figure 6 illustrates first and second component images 600, 600', which are used to construct a composite image 600".
  • overlapping elements 650, 650' are taken from corresponding component images 600, 600', reduced in size as a function of the zoom factor, and placed as subelements 632" within element 630" to form the composite image 600".
  • the overlapping elements 650, 650' cover the entirety of the two component images 600, 600'.
  • Each subelement is positioned based on the configuration and frequency of the decoder features and on the configuration of the subelements 632". In the embodiment shown in Figure 7, the overlapping elements are centered on the locations of the subelements 632".
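The overlap-and-shrink step above can be sketched as follows. Nearest-neighbour picking stands in for whatever resampling an implementation would actually use, and all names are illustrative:

```python
def zoom_sample(component, cy, cx, sub, zoom):
    """Take a (sub*zoom) x (sub*zoom) patch of a component image (list of
    rows) centred near (cy, cx) and shrink it by `zoom` into a sub x sub
    subelement by picking every `zoom`-th pixel (assumes patch in bounds)."""
    half = (sub * zoom) // 2
    return [[component[cy - half + j * zoom][cx - half + i * zoom]
             for i in range(sub)] for j in range(sub)]

img = [[10 * y + x for x in range(12)] for y in range(12)]
subelement = zoom_sample(img, 6, 6, 2, 3)   # a 6x6 area reduced to 2x2
```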
  • Figure 7 shows a composite image 700 formed from the component images in Figure 4.
  • the pattern of the elements and the subelements in the composite image 700 are configured to correspond to the features of a matching decoder.
  • the composite image of Figure 7 was formed using a zoom factor of 4, but it will be understood that the composite image may be formed using any zoom factor.
  • placement of the matching decoder over the composite image results in the "reassembly" of the component images 410, 410' for viewing by an observer. It follows that the observer will see the latent image 420, 420' within the corresponding component images 410, 410'.
  • the latent images may appear to move or "float" as the observer changes his angle of view through the decoder.
  • This floating effect results from using the overlapping component portions that have been zoomed.
  • the zoom effect causes the elements of the component images to spread into multiple parts of the composite image.
  • by adjusting the angle of view, the decoder renders information from the multiple parts of the component images, thereby creating an illusion of floating.
  • the bigger the zoom factor, the more pronounced the floating effect.
  • by shrinking the portions of the component images by a zoom factor, the effective resolution of the component images may be decreased when seen through the decoding lenses.
  • the elements of the component images may be flipped before being used to form the composite image. Flipping portions of the component images changes the direction in which these portions appear to float when seen through the decoder. By alternating between flipping and not flipping the elements of the component images, different parts of the component images may appear to float in opposite directions when seen through the decoder.
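The flipping step amounts to mirroring a subelement before placement; a one-line sketch (illustrative names):

```python
def maybe_flip(subelement, flip):
    """Mirror a subelement (list of rows) left-to-right when `flip` is set.
    Alternating flipped and unflipped elements makes different parts of the
    decoded image appear to float in opposite directions."""
    return [list(reversed(row)) for row in subelement] if flip else subelement

tile = [[1, 2], [3, 4]]
flipped = maybe_flip(tile, True)
```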
  • the above effects may be applied to a single component image (or two identical component images) that is used to produce a non-tonally balanced encoded image.
  • Such images could be used, for example, in applications where a decoder lens is permanently affixed to the composite image. In such applications, tonal balancing is unnecessary because the latent image is always viewable through the permanently affixed decoder.
  • a composite image may be formed from more than one latent (or other) image.
  • a plurality of component images may be created using the methods previously discussed. Portions from each component image may then be used to form a single composite image. For example, if it is desired to use two latent images (Image 1 and Image 2), each latent image could be used to form two component images.
  • the two component images may each be divided into elements and subelements as shown in Figures 4-6. This would produce four component images, each having corresponding elements and subelements.
  • a composite image similar to those of Figures 5A and 5B could be formed using a subelement A1 taken from a first component of Image 1 and a subelement A2 taken from a second component of Image 1.
  • a subelement B1 could be taken from a first component of Image 2 and a subelement B2 from a second component of Image 2.
  • subelements A1 and B2 could be taken from components of Image 1 and subelements B1 and A2 could be taken from components of Image 2.
  • the subelements could be ordered in multiple ways. For example, the subelements could be ordered one below another, side by side, across the diagonal from each other, or in any other way.
  • the composite image may produce the effect that the observer sees different latent images depending on the angle of view through the decoder.
  • the component images may alternate and switch when the angle of view is changed. Additionally, the zoom factor and flipping techniques may be used with this technique. This may create a multitude of effects available to the designer of the composite image. Any number of latent images may be hidden together in this manner and any number of component images may be used for each.
  • a zoom factor of two may be used for the subelements obtained from Image 1 and a zoom factor of eight may be used for the phases obtained from Image 2.
  • the subelements obtained from the different images may appear to be at different depths when seen through the decoder. In this way, various 3D effects may be achieved.
  • Figure 8 illustrates an approach to collecting and ordering portions of the component images to form elements of a composite image that is decodable using a lenticular lens.
  • two component images 800, 800' are divided into elements 830, 830' corresponding in shape and frequency to the features of a decoder having "wavy" lenticules.
  • the component images are created so as to be balanced around a particular shade (or shades).
  • the composite image 800" is again formed by assembling subelements 832, 832' from the component images 800, 800'.
  • a zoom factor can be used if desired.
  • the zoom factor is one, which indicates that the composite image elements are the same size as the component image elements (i.e., the component image elements are not shrunk).
  • the approaches of collecting and ordering discussed herein may also be applied for a wavy decoder or any other type of decoder.
  • in Figure 8, the light gray portions of the composite image are taken from the first component image and the dark gray portions are taken from the second component image.
  • the portions of the component images may have equal size.
  • the combined portions of the component images or the elements of the composite images may cover the area of a single decoding feature in the composite image.
  • the techniques described herein may produce an image that looks like a tint, i.e. uniform color shade, when printed.
  • Figure 9 illustrates a generalized method 900 of producing a composite image according to an embodiment of the invention.
  • the method 900 begins at S902 and at S904 a latent image is provided.
  • two or more component images are created at S906.
  • these component images are formed so that at each position, the tonal values are balanced around a selected tonal value or tint density.
  • the image components are used to produce a plurality of image elements to be used to form a composite image. These composite image elements are formed and positioned according to a pattern and frequency of the features of a decoder.
  • the component elements may be positioned and sized based on a frequency that matches or is a multiple of the frequency of the decoder.
  • the component image elements are constructed by dividing the component images into non-overlapping elements or cells. In other embodiments, the component image elements may be formed as overlapping elements or cells.
  • the action of extracting content may include subdividing each element of each component image into a predetermined number of subelements.
  • the image content from the subelements is then extracted.
  • the fraction of subelements from which content is extracted may be the inverse of the number of component images or a multiple thereof. Thus, if two component images are used, then half of the subelements are extracted from each element.
  • the content of each element may be extracted.
  • a zoom factor may be applied to the extracted elements to produce subelements that can be used to form the composite image.
  • the extracted content from the component images is used to form a composite image. This may be accomplished by placing subelements from each of the components into positions corresponding to the positions in the component images from which the content of the subelements was extracted. The method ends at S914.
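Steps S904-S912 of method 900 can be sketched end to end for the simplest case of two components and a checkerboard subelement pattern. All names, the 0..1 tint scale, and the pattern choice are illustrative assumptions, not the patent's implementation:

```python
def method_900(latent, target=0.5, contrast=0.2, sub=1):
    """S906: derive two tonally balanced components from a binary latent
    image (list of rows of 0/1); S908-S912: interleave their subelements
    into a composite matched to a checkerboard decoder pattern."""
    lo, hi = target - contrast / 2, target + contrast / 2
    comp1 = [[hi if p else lo for p in row] for row in latent]  # phase 1
    comp2 = [[lo if p else hi for p in row] for row in latent]  # inverse phase
    h, w = len(latent), len(latent[0])
    return [[comp1[y][x] if ((y // sub) + (x // sub)) % 2 == 0 else comp2[y][x]
             for x in range(w)] for y in range(h)]

# A uniform latent region produces a composite averaging the target tint
composite = method_900([[1, 1], [1, 1]])
```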
  • any or all of the steps provided in method 900 and any variations may be implemented using any suitable data processor or combination of data processors and may be embodied in software stored on any data processor or in any form of non-transitory computer- readable medium.
  • the encoded composite images may be applied to a substrate by any suitable printing, embossing, debossing, molding, laser etching or surface removal or deposit technique.
  • the images may be printed using ink, toner, dye, pigment, or a transmittent print medium, as described in U.S. Pat. No.
  • Another example for generating the composite image includes digitally encoding a darkened and brightened version of the color latent image.
  • One component can be darkened by using the intensity/color curve designed for darkening, and the other component can be brightened in each location by the same amount as the first component was darkened.
  • An alternative technique for blending colors into the composite image includes transforming a color image into a color space that separates the image into intensity and color components, such as Lab, Yuv, or HSI color space, and applying intensity/color curves as mentioned above in these color spaces. Other color spaces may be used.
  • a tint based composite image may be integrated or embedded into a visible image, such as any visible art.
  • the composite image(s) may be hidden to the naked eye within the visible image, but revealed when a decoder is placed on the printed visible image or composite image. All of the effects associated with the composite image (i.e., the appearance of floating, alternation of component image viewability, etc.) may be retained.
  • One approach to this is to apply a halftone screening technique as discussed above that uses the composite images as a screen file to halftone the visible image.
  • This technique may modify the elements of the composite image by adjusting the size of the element to mimic the densities of the pieces of the visible image at the same positions.
  • the composite image has no more than two intensity levels in each of its color separations.
  • the corresponding color separations of the composite image are used as screens for the visible image. If the color components of the composite image are not bilevel, they can be preprocessed to meet this requirement.
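One plausible way to use a bilevel composite as a screen is threshold modulation; the specific threshold values below are illustrative assumptions, not taken from the disclosure:

```python
def screen_visible(visible, screen):
    """Halftone a grayscale visible image (0..1 ink density) through a
    bilevel composite screen: pixels under screen "on" areas print at a
    lower density threshold, so element sizes track the visible tones."""
    h, w = len(visible), len(visible[0])
    return [[1 if visible[y][x] > (0.25 if screen[y][x] else 0.75) else 0
             for x in range(w)] for y in range(h)]

# A mid-tone area reproduces the screen's own pattern; dark areas print solid
halftone = screen_visible([[0.5, 0.5], [0.5, 0.5]], [[1, 0], [0, 1]])
```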
  • Figures 10 and 11 illustrate an example of this approach.
  • Figure 10 illustrates two component images 1000, 1000' constructed based on a block letter "USA" latent image, which are used to construct a composite image 1000" formed from square elements of the two component images 1000, 1000'.
  • the basic composite image appears as a single tone image to the naked eye.
  • Magnification shows that the composite image 1000" is formed from a plurality of subelements. Each of these subelements is a square portion taken from a corresponding element of one of the component images 1000, 1000'. It will be understood that all of these subelements are the same size and shape. The appearance of varying sized rectangles in the enlarged area occurs as the result of the variation in content within the subelements. Placement of a corresponding decoder over the composite image 1000" reveals the component images 1000, 1000'.
  • Figure 11 illustrates a visible image 1110 along with a halftone 1110' of the same image screened using the composite image 1000" of Figure 10.
  • the unmagnified half-tone image 1110' appears unchanged to the naked eye. Magnification, however, shows that the image 1110' is made up of the square elements of the composite image, which have been modified according to the tone density of the original image 1110. In effect, the composite image 1000" of Figure 10 is embedded within the visible image 1110.
  • when a decoder is placed over the encoded image (i.e., the halftone artwork 1110'), the component images 1000, 1000' will be visible.
  • Figure 12 illustrates another approach to hiding a latent image within a visible image 1200.
  • component images 1210, 1210' may be formed by tonally balancing corresponding positions around different tone densities in different areas. This approach can be used to create component images 1210, 1210' from a visible image 1200 as shown in Figure 12.
  • One approach is to darken the visible image 1200 to create a first replica image and correspondingly lighten the visible image 1200 to create a second replica image.
  • An area matching a latent image may be masked from each of the replica images and replaced in each case by the content from the masked area of the other replica.
  • the areas of the visible image 1200 that align with the letters "USA" (i.e., the latent image) are essentially swapped between the replica images to produce the component images 1210, 1210'.
  • the component images may then be sampled and combined to create the composite image 1210" using any of the techniques discussed herein.
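The darken/lighten-and-swap construction can be sketched as follows (0..1 ink density, higher = darker; the names and the 0.15 amount are illustrative assumptions):

```python
def make_components(visible, mask, amount=0.15):
    """Darken one replica of the visible image and lighten the other, then
    swap content between the replicas wherever `mask` (the latent image,
    rows of 0/1) is set, yielding two tonally balanced components."""
    dark = [[min(v + amount, 1.0) for v in row] for row in visible]
    light = [[max(v - amount, 0.0) for v in row] for row in visible]
    h, w = len(visible), len(visible[0])
    c1 = [[light[y][x] if mask[y][x] else dark[y][x] for x in range(w)]
          for y in range(h)]
    c2 = [[dark[y][x] if mask[y][x] else light[y][x] for x in range(w)]
          for y in range(h)]
    return c1, c2

c1, c2 = make_components([[0.5, 0.5]], [[1, 0]])
```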
  • the composite encoded image 1210" closely resembles the original primary image 1200, but with the hidden message "USA" being viewable using a decoder corresponding to the size and configuration of the elements used to form the subelements of the composite image 1210".
  • FIG. 13 illustrates (in gray scale) a color visible image 1300 of a tiger, and a color latent image 1310 of a girl.
  • the visible image 1300 is used to form four identical component images 1300A, 1300B, 1300C, 1300D, which are divided into elements 1430A, 1430B, 1430C, 1430D as shown in Figure 14.
  • a matching decoder may include a rectangular or elliptical lens, for example.
  • each of the elements of the four components is divided into subelements 1432A, 1432B, 1432C, 1432D. Because, in this example, a total of six components are used to produce the composite image, the component image elements are divided into six subelements.
  • the latent image 1310 is used to produce two corresponding component images 1310A, 1310B.
  • the second component image 1310B is produced as an inverse of the first component image 1310A.
  • the first and second component images 1310A, 1310B are divided into elements 1430E, 1430F, which may be non-overlapping elements (as shown in Figure 11) or overlapping elements like those shown in Figure 3.
  • each of the elements of the latent components 1310A, 1310B is divided into subelements 1432E, 1432F. Again, six subelements are formed from each element.
  • the goal is for the visible image to be visible to the naked eye and the latent image to be visible with the assistance of a decoder, which is configured to correspond to encoding parameters, including a frequency of the elements extracted from the visible and latent component images 1300A, 1300B, 1300C, 1300D, 1310A, and 1310B.
  • the majority of the subelements used are taken from the visible component images 1300A, 1300B, 1300C, and 1300D that correspond to the visible image.
  • four subelements (A1, B2, C4 and D5) of the six subelements used in each element 1422 of the composite image 1420 are extracted from the four visible component images 1300A, 1300B, 1300C, and 1300D that correspond to the visible image.
  • the other two subelements (E3 and F6) used in the element 1422 are extracted from the latent component images 1310A, 1310B that correspond to the latent image.
  • the subelements E3 and F6 are interlaced with the four subelements A1, B2, C4, and D5 extracted from the visible image. Because the subelements E3 and F6 extracted from the latent image are compensated such that an original image tint for one subelement is exchanged with an inverse image tint for the other subelement, the subelements E3 and F6 will not be visible to the naked eye. In other words, the eye will mix up the corresponding subelements E3 and F6 into a 50% tint. As in previous embodiments, the subelements used and their placement within the element 1422 of the composite image 1420 can vary.
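Assembling one element of the composite from the six subelements can be sketched as follows, using a 2x3 layout consistent with the numbering above (scalar tints stand in for subelement content; the layout and names are illustrative):

```python
def build_element(a1, b2, c4, d5, e3, f6):
    """One 2x3 composite element: four subelements from the visible
    components (A1, B2, C4, D5) interlaced with a compensated pair from
    the latent component and its inverse (E3, F6)."""
    return [[a1, b2, e3],
            [c4, d5, f6]]

# E3 and F6 are balanced so the eye mixes them into a 50% tint
elem = build_element(0.3, 0.7, 0.3, 0.7, e3=0.2, f6=0.8)
```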
  • the composite image elements will be visually grouped so that, for some angles of view, the observer will see the visible image 1300 (e.g., the tiger of Figure 13), for other angles of view, the observer will see the latent image 1310 (e.g., the girl of Figure 13), and for yet other angles of view the observer will see the inverse of the latent image 1310. In this way, the color latent image 1310 and its inverse are hidden inside the color visible image 1300. Additional effects may be added to the decoded image by applying element flipping and/or a zoom factor larger than one to the latent component images 1310A, and 1310B generated from the latent image 1310.
  • the visible image 1300 can be preprocessed to increase its contrast. This allows the reduction of the number of subelements that are extracted from the visible image 1300 in order to hide the latent image 1310.
  • the visible and latent images used to create a composite image may be binary, grayscale, color images, or a combination of any type of image.
  • the component images revealed with the decoder may be binary, grayscale or color images.
  • the composite image may include latent images that are encoded into a visible image using two or more color components. While the composite image may include multiple color components, the color components may be blended to generate a monotone image. For example, blending equal amounts of Cyan tint, Magenta tint, and Yellow tint.
  • the visible images include content variations such that the observer cannot correlate the colors revealed with the decoding device with colors seen by the unaided eye.
  • the latent images are divided into at least two color component separations that correspond to color components that are available in the visible image.
  • the color separated latent images are encoded into the corresponding color components of the visible image based on encoding parameters that are determined by features of the matching decoder.
  • the encoding parameters may include a relative angle for depositing a particular color component and a frequency of decoding elements, such as lenses, used to decode the composite image, among other encoding parameters.
  • Figures 16 and 17 illustrate examples of multi-layer decoders, or rendering devices, having two layers.
  • the elements (or lenticules) in each layer of the multi-layer decoder are arranged according to a selected frequency or pattern of the corresponding latent image color component.
  • the elements (or lenticules) in each layer of the multi-layer decoder are further oriented to match the relative angle in which the latent image color component was deposited onto the substrate.
  • each layer of the latent image may be simultaneously decoded.
  • more than two rendering devices may be used to simultaneously decode latent images associated with more than two color components.
  • Figure 16 illustrates components of a two layer decoder.
  • the first rendering device 1601 is shown in a side view 1602 and a bottom view 1604.
  • the second rendering device 1610 is shown in a side view 1612 and a top view 1614.
  • the first layer elements 1605 are oriented approximately perpendicular to the second layer elements 1615. It follows that the two color component layers associated with the two latent images are oriented to match the angle of the corresponding first layer elements 1605 and the second layer elements 1615. Therefore, each layer of the latent images is decoded simultaneously to provide a multicolor decoded image.
  • While the first rendering device 1601 and the second rendering device 1610 are illustrated to include linear lenses, one of ordinary skill in the art will readily appreciate that the first rendering device 1601 and/or the second rendering device 1610 may include non-linear lenses.
  • Non-linear lens structures may include a wavy line structure, zigzag structure, fish bone structure, arc-like structure, a free-hand shaped structure, or the like.
  • the first rendering device 1601 and the second rendering device 1610 may include a same lens structure or different lens structures.
  • the multi-layered lens may include layers formed using different technologies, such as having a first layer formed using a molded lens array and a second layer formed using a silkscreen printing process.
  • the first rendering device 1601 and the second rendering device 1610 may be positioned in various configurations relative to each other.
  • the arrow 1700 shows the direction of view.
  • the second rendering device 1610 may be positioned so that the first layer elements 1605 and the second layer elements 1615 face inwardly toward each other.
  • When the first layer elements 1605 are oriented toward the image, the first layer elements 1605 will decode the image through the second rendering device 1610. In this case, the second rendering device 1610 will be positioned to physically contact the encoded image.
  • the first rendering device 1601 and the second rendering device 1610 may be positioned so that the first layer elements 1605 and the second layer elements 1615 face outwardly away from each other.
  • the first rendering device 1601 may be positioned slightly above the encoded image to enable the first layer elements 1605 to focus on the encoded image.
  • the first rendering device 1601 and the second rendering device 1610 may be oriented in a same direction so that both the first layer elements 1605 and the second layer elements 1615 face upward.
  • the curvature of the first layer elements 1605 and the second layer elements 1615 may be designed so that the first layer elements 1605 and the second layer elements 1615 focus on the bottom surface of the multi-layered decoder.
  • the frequency of the first layer elements 1605 and the second layer elements 1615 may be the same or different.
  • For example, the frequency of both the first layer elements 1605 and the second layer elements 1615 may be 250 lines per inch.
  • Alternatively, the frequencies may differ; for example, 200 lines per inch for Layer 1 and 250 lines per inch for Layer 2.
  • the composite image may include latent images that are encoded into a visible image using two or more color components. While the composite image may include multiple color components, the color components may be blended to generate a monotone image. For example, blending equal amounts of Cyan tint, Magenta tint, and Yellow tint creates an image that appears to have a uniform brown tone. Techniques described herein enable encoding and decoding of color latent images from seemingly uniform visible images, or visible images with variations in their content where the observer cannot correlate the colors shown with the decoding device with the colors seen by the naked eye.
  • the latent images are divided into at least two color component separations that correspond to color components that are available in the visible image.
  • the color separated latent images are encoded into the corresponding color components of the visible image based on encoding parameters that are determined by features of the matching decoder.
  • the encoding parameters may include a relative angle for depositing a particular color component and a frequency of decoding elements, such as lenses, used to decode the composite image, among other encoding parameters.
  • Figure 18 illustrates an example micro-array lens matrix 1800 provided on a single layer.
  • the elements or micro-lens elements 1802 are arranged according to a selected frequency or pattern of the corresponding latent image color component.
  • Path 1 (1810) and Path 2 (1815) are illustrated to have the same frequency, while Path 3 (1820) is illustrated to have a higher frequency.
  • frequency is controlled by adjusting a distance between rows of micro-array elements 1802 for a corresponding path. Increasing a distance between rows of micro-array elements 1802 may correspond to lowering a frequency, while reducing a distance between rows of micro-array elements 1802 may correspond to increasing a frequency.
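The spacing/frequency relationship above can be sketched as follows (Python; the function name and the 200/250 LPI values are illustrative, echoing the layer frequencies mentioned earlier):

```python
def lens_centers(frequency_lpi, path_length_in):
    """Equidistant micro-lens row positions along a straight path.

    The spacing between rows is the reciprocal of the frequency, so
    increasing the spacing lowers the frequency and reducing the
    spacing raises it.
    """
    spacing = 1.0 / frequency_lpi
    n = int(path_length_in * frequency_lpi)
    return [i * spacing for i in range(n + 1)]

# Two hypothetical one-inch paths: the 250 LPI path packs more rows
# than the 200 LPI path because its rows are closer together.
path_low = lens_centers(200, 1.0)
path_high = lens_centers(250, 1.0)
```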
  • the micro-lens elements 1802 may include a same lens structure or different lens structures.
  • the micro-lens elements 1802 are further oriented along one or more of Path 1 (1810), Path 2 (1815) and Path 3 (1820) to match the relative angle in which the latent image color component was deposited onto the substrate. While Paths 1-3 are illustrated to be linear paths, this disclosure supports non-linear paths. Non-linear paths may include a wavy line path, zigzag path, fish bone path, arc-like path, a free-hand shaped path, or the like. When the encoding parameters of the latent image are matched to the features of the single-layer micro-array lens matrix 1800, each layer of the latent image may be simultaneously decoded.
  • micro-lens elements 1802 may be arranged in other matrix configurations that support multiple decoding paths with equidistant lens elements that match a frequency and angular orientation used for the encoding process.
  • matrix configurations include a hexagonal grid configuration, concentric ring configuration, or other configurations.
  • the encoding process may arrange micro-lens elements 1802 to support variable frequencies. In this case, the micro-lens elements 1802 are arranged so that, along a path, the distance between the elements varies.
  • the micro-lens elements 1802 will sample the encoded image and recreate the latent image. By sampling pieces of the segments rather than decoding an entire segment of the encoded image, the decoded image may appear to be slightly jagged. However, if a frequency of the encoding process and a frequency of the micro-lens elements 1802 are sufficiently high, such as greater than 140 lines per inch, this effect may not be noticeable.
  • the latent images are divided into at least two color component separations that correspond to color components that are available in the visible image.
  • four color component separations are used and a linear lenticular decoding lens is provided to decode a four color component composite image.
  • the lenticular lines used in the encoding process may be divided into four subsets that match the four color separations provided in the visible image and the latent image.
  • the four subsets include black segments 1901, yellow segments 1902, magenta segments 1903, and cyan segments 1904.
  • one fourth of the segments may be used to encode black separation
  • another fourth of the segments may be used to encode yellow separation
  • another fourth of the segments may be used to encode magenta separation
  • the last fourth of the segments may be used to encode cyan separation.
  • the subsets 1901, 1902, 1903, 1904 may be interleaved.
  • line numbers 1,5,9,13,17, etc. include the first subset of black segments 1901 corresponding to the black component.
  • Line numbers 2,6,10,14,18, etc. include the second subset of yellow segments 1902 corresponding to the yellow component.
  • Line numbers 3,7,11,15,19, etc. include the third subset of magenta segments 1903 corresponding to the magenta component.
  • Line numbers 4,8,12,16,20, etc. include the fourth subset of cyan segments 1904 corresponding to the cyan component.
  • the color components of the latent image are encoded into the appropriate subsets 1901, 1902, 1903, 1904.
  • the frequency used for the encoding method is one fourth of the decoder frequency.
  • a same encoding angle is applied to each of the subsets 1901, 1902, 1903, 1904.
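The interleaving described above can be sketched in Python (the subset order follows the 1, 5, 9, ... example; the function names are illustrative):

```python
SUBSETS = ["black", "yellow", "magenta", "cyan"]  # order from the example

def subset_for_line(line_number):
    """Map a 1-based lenticular line number to its color subset:
    lines 1, 5, 9, ... -> black; 2, 6, 10, ... -> yellow; and so on."""
    return SUBSETS[(line_number - 1) % 4]

def encoding_frequency(decoder_frequency_lpi):
    """With four interleaved subsets, each color separation is encoded
    at one fourth of the decoder frequency."""
    return decoder_frequency_lpi / 4

first_five = [subset_for_line(n) for n in range(1, 6)]
# first_five -> ['black', 'yellow', 'magenta', 'cyan', 'black']
```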
  • In Figure 19, an image is illustrated with cyan, magenta, yellow, and black color separations printed in a line screen at 25 degrees.
  • Each color component of the visible image is encoded with the corresponding color component of the latent image.
  • When the decoder is placed over the composite image, the decoder will simultaneously decode all color components, including the black component, the yellow component, the magenta component, and the cyan component.
  • the decoded composite image reveals the composite color latent image.
  • This disclosure contemplates several variations to the above-described method. For example, rather than using a same frequency for all color components, more line repetitions may be used for one of the color separations. This will result in a higher frequency for the selected color separation.
  • This disclosure further contemplates using different screening elements for different color separations. For example, straight segments may be used for some color separations and wavy segments may be used for other color separations.
  • Another variation includes arranging lenticular lenses or micro-array lens elements to follow non-linear patterns, such as a concentric ring pattern. Non-linear patterns may be produced by dividing the decoder elements into subsets. The number of subsets may match the number of the color components of the latent image, and the color separations of the latent image may be encoded into these subsets.
  • the frequency-sampled color components would be applied to brighter images, including images having color separations within a 0-25% density range.
  • This disclosure contemplates using any lens pattern to encode and decode color latent images, where different color components of the latent image are encoded into the elements of the visible image that match the designated subsets of the decoder pattern.
  • Each of the color separations of the encoding image is printed using the same color separation from the latent image. For example, a cyan color separation of the latent image is encoded into the cyan color separation of the visible image.
  • the above examples describe techniques for creating complex color latent images for encoding into a visible image.
  • the complex color latent images may include different color components provided within the composite image.
  • the degree to which the decoded color latent image matches the color latent image used for encoding depends significantly on the quality and resolution of the device used to apply the encoded image to the articles.
  • repetitive monochromatic latent images may be provided.
  • an inversion process may be used to manipulate the latent image or portions of the latent image.
  • color differences are discernible between the latent image and the surrounding visible image.
  • a process may be provided for changing the brightness of the latent image or portions of the latent image before encoding the latent image into the particular color separation of the visible image. Changing the brightness of the latent image produces color differences between the latent images and the surrounding visible image that are discernible when the decoder is placed over the printed encoded visible image.
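A minimal sketch of such a brightness change, assuming a simple linear scaling with clamping (the patent does not specify the adjustment function); the latent image is represented here as rows of 0-255 gray values:

```python
def adjust_brightness(latent, factor):
    """Scale the gray values of a latent image (rows of 0-255 values)
    before it is encoded into a color separation of the visible image.
    Clamping to [0, 255] keeps the adjusted values printable."""
    return [[min(255, max(0, round(v * factor))) for v in row]
            for row in latent]

# A hypothetical 2x3 latent patch brightened by 20%: the shifted tones
# become discernible against the surrounding visible image only when
# the decoder is placed over the printed encoded image.
patch = [[100, 200, 50], [0, 255, 128]]
brighter = adjust_brightness(patch, 1.2)
```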
  • Figure 15 illustrates a visible image 1510 depicting a photograph of a subject, a first latent image 1515 having a first pattern that is generated using a red color component, a first latent image 1520 having a first pattern that is generated using a blue color component, and a second latent image 1525 having a second pattern that is generated using a green color component.
  • the content of the first latent images 1515, 1520 is different than the content of the second latent image 1525.
  • One of ordinary skill in the art will readily appreciate that other content differences may be provided between the first latent image and the second latent image.
  • the first latent image 1515 illustrates the block letters "JANE” which are rendered against a dark background, the block letters "JANE” being generated using a red color component.
  • the first latent image 1520 illustrates the block letters "JANE” which are rendered against a dark background, the block letters "JANE” being generated using a blue color component.
  • the second latent image 1525 illustrates block letters "JANE” which are rendered against a white background 1527, the block letters “JANE” 1526 being generated using a green color component.
  • the second latent image 1525 further illustrates block letters "JANE” which are rendered against a dark background 1528, the block letters "JANE” being generated based on the absence of ink.
  • different high contrast content may be provided for the second latent image.
  • the composite image 1530 is generated by encoding the latent images 1515, 1520, 1525 into the visible image 1510. Since the content provided in the second latent image 1525 offers more variations as compared to the content provided in the first latent images 1515, 1520, the color component of the second latent image 1525 introduces color variations in the encoded composite image 1530.
  • the variations in the green color component contributed by the second latent image 1525 combine with the red color component contributed by the first latent image 1515 and the blue color component contributed by the first latent image 1520.
  • introducing the second latent image 1525 provides color variations and vivid color properties to an encoded composite image 1530.
  • more than two latent images may be provided for encoding into the visible image.
  • variations in the latent image can be introduced to any color component.
  • A Digital Decoding Device That Decodes And Concurrently Displays Latent Images Encoded Into A Visible Image
  • the component images used to produce the composite images may be viewed by application of a corresponding decoder.
  • the decoder may include a software decoder programmed to decode and concurrently display two or more color separations that correspond to two or more component images.
  • An example software decoder is described herein.
  • the component images may be viewable through the use of a software-based decoder, such as those described in U.S. Pat. Nos.
  • an image of an area where an encoded image is expected to appear can be captured using an image capturing device such as a scanner, digital camera, or the like.
  • such a software-based decoder may decode a composite image by emulating the optical properties of the corresponding decoder lens.
  • Software-based decoders also may be used to decode a digital version of a composite image of the invention that has not been applied to an article.
  • the latent image is divided into two or more color separations before being embedded within the visible image.
  • the two or more color separations of the latent image may correspond to colors that are not present in the visible image.
  • each color separation of the latent image may be encoded into different color separations of the visible image.
  • each color separation may be independently encoded into the visible image using encoding techniques described herein.
  • 2025, 2027, and 2029 may be encoded into multiple halftone screens that are positioned at different angles with respect to a horizontal line 2022.
  • the different angles are represented by segments 2045, 2047, and 2049.
  • the multiple halftone screens may be printed using a same color.
  • the line screens for the latent image may be printed using black ink.
  • Figure 20 illustrates a digitally encoded image 2010 that includes a visible image 2020 and a latent image having three corresponding color component separations 2025, 2027, and 2029.
  • the visible image 2020 may be generated using black ink, gray-scale or multi-color ink.
  • the color component separations 2025, 2027, and 2029 are embedded within the visible image 2020. According to one example, the color component separations 2025, 2027, and 2029 may be separately encoded and then merged or embedded into the visible image 2020.
  • the process may be performed so that the color component separations 2025, 2027, and 2029 are encoded as they are embedded.
  • the color component separations 2025, 2027, and 2029 are not viewable to an unaided eye without a software decoding device 2030.
  • the visible image may be sampled to produce a visible image having a first periodic pattern at a first predetermined frequency.
  • the latent image having two color components is then mapped to the visible image so that the first periodic pattern of the visible image is altered at locations corresponding to locations in the latent image having image content depicted with the two color components.
  • the alterations to the visible image are sufficiently small that they are difficult for the unaided human eye to discern.
  • the software decoding device 2030 displays the encoded image at a frequency that corresponds to the first predetermined frequency.
  • the software decoding device 2030 captures the alterations in the visible image to display the latent image.
  • the first periodic pattern is first imposed on the latent image having two color components rather than on the visible image.
  • the alterations are provided on the content that is associated with the latent image having two color components.
  • the latent image is then mapped to the visible image and the content of the visible image is altered pixel by pixel based on the content of the encoded latent image.
  • Other methods are available for embedding or merging the latent image with the visible image.
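One concrete way to realize the pattern-alteration step, sketched here as an assumption rather than the patent's exact algorithm, is to shift the periodic pattern by half a period wherever the latent image has content; the alteration is small enough to be hard to discern unaided, but a decoder sampling at the same frequency picks up the shifted phase:

```python
def encode_channel(visible_row, latent_row, period):
    """Alter a periodic pattern where the latent image has content.

    visible_row: one row of a color channel carrying a periodic pattern.
    latent_row: same-length row of 0/1 latent-image content.
    Wherever the latent row is 'on', the visible pattern is sampled
    half a period ahead, i.e. phase-shifted (an illustrative choice).
    """
    out = list(visible_row)
    half = period // 2
    for x, on in enumerate(latent_row):
        if on:
            out[x] = visible_row[(x + half) % len(visible_row)]
    return out

# A period-2 pattern: latent-free rows pass through unchanged, while a
# fully 'on' latent row comes out phase-inverted.
plain = encode_channel([1, 0, 1, 0, 1, 0, 1, 0], [0] * 8, 2)
shifted = encode_channel([1, 0, 1, 0, 1, 0, 1, 0], [1] * 8, 2)
```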
  • the software decoding device 2030 decodes the latent images 2025, 2027, and 2029 while displaying the encoded image 2010 on a graphical user interface ("GUI").
  • the digital decoding system 2100 described below and illustrated in Figure 21 performs image processing and may be configured to assign a designated color to each latent image, including monochrome latent images. For example, three monochrome latent images may be provided within a visible image. A first monochrome latent image may be oriented at 15 degrees, a second monochrome latent image may be at 30 degrees, and a third monochrome latent image may be at 60 degrees.
  • the software decoding device 2030 may be configured to detect the orientation of each monochrome latent image and assign a corresponding color component to the color separation. Accordingly, the software decoding device 2030 may assign a color red to the first monochrome latent image oriented at 15 degrees, a color blue to the second monochrome latent image oriented at 30 degrees, and a color green to the third monochrome latent image oriented at 60 degrees.
  • the software decoding device 2030 may merge the designated colors to generate a composite color latent image for display to the user. For example, the assigned colors may be merged to yield a desired color shade.
  • any combination of colors may be used and any desired color shades may be provided.
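The orientation-to-color assignment and merge described above can be sketched as follows (Python; the angle table mirrors the 15/30/60-degree example, and the simple per-channel average used to blend shades is one choice among many, not the patent's prescribed blend):

```python
# Hypothetical orientation-to-color table matching the example: latent
# images detected at 15, 30 and 60 degrees are assigned red, blue and
# green respectively before the decoded separations are merged.
ORIENTATION_COLORS = {15: (255, 0, 0), 30: (0, 0, 255), 60: (0, 255, 0)}

def merge_assigned_colors(detected_angles):
    """Blend the colors assigned to each detected orientation into a
    single display shade via a per-channel integer average."""
    colors = [ORIENTATION_COLORS[a] for a in detected_angles]
    n = len(colors)
    return tuple(sum(c[i] for c in colors) // n for i in range(3))
```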
  • Figure 21 illustrates an exemplary digital decoding system 2100 for authenticating an encoded image affixed to an article.
  • An encoder device 2110 is provided to include an encoder module 2112 and an embedding module 2114 that communicate with an encoding information database 2140 via a network 2120.
  • the encoder module 2112 and the embedding module 2114 are configured to perform encoding and embedding operations, respectively.
  • the color encoding module 2112 also may be programmed to generate an encoded image to be affixed to the article, based on encoding parameters, the visible image, and the latent image.
  • An encoder interface module 2150 is provided to serve as an interface between a user or document processing module (not shown) and the encoder device 2110.
  • the color encoding module 2112 may be configured to store the encoding parameters, the visible image, and the latent image in the encoding information database 2140 for subsequent use in authenticating the digitally encoded image.
  • the color encoding module 2112 also may store the encoded image in the database
  • the color encoding module 2112 further may provide the latent image to the embedding module 2114, which is adapted to embed the latent image into the visible image.
  • the encoded image with the embedded latent image may be returned to the encoder interface module 2150.
  • the software decoder or authenticator 2130 may include a decoding module 2132 and an authentication module 2134 that may be in communication with the encoding information database 2140.
  • the decoding module 2132 is adapted to retrieve the encoding parameters and/or the encoded image from the encoding information database 2140.
  • the decoding module 2132 decodes the digitally encoded image using the encoding parameters.
  • the decoding module 2132 also may be adapted to receive the encoded image to be authenticated and extract the latent image.
  • the latent image may be obtained from an authenticator interface 2160 that is adapted as an interface between an authentication requestor and the authenticator 2130.
  • the decoding module 2132 may return the decoded image to the authenticator interface and/or forward the decoded image to the authentication module 2134.
  • the authentication module 2134 is adapted to extract the latent image from the decoded image for comparison to authentication criteria, which may be derived from a multitude of image features, such as shape descriptors, histograms, co-occurrence matrices, frequency descriptors, moments, color features, etc.
  • the authentication module 2134 may further be adapted to determine an authentication result and return the result to the authenticator interface.
  • the authentication module 2134 may include OCR software or bar-code interpretation software to extract information from the article.
  • the color encoding module 2112, the embedding module 2114, the decoding module 2132, the authentication module 2134, the encoding information database 2140, the encoder interface module 2150 and the authenticator interface module 2160 may be distributed among one or more data processors. All of these elements, for example, may be provided on a single user data processor. Alternatively, the various components of the digital decoding system 2100 may be distributed among a plurality of data processors in selective communication via the network 2120.
  • software-based decoders enable encoding of composite images using multiple color separations and geometrically complicated element patterns. Some lens element patterns and shapes may be difficult or impractical to physically manufacture as optical lenses.
  • the software-based decoder may be designed with flexibility to enable device users to adjust the decoding parameters.
  • the methods described herein can make use of a "software lens" having lens elements that have a variable frequency, complex and/or irregular shapes (including but not limited to ellipses, crosses, triangles, randomly shaped closed curves or polygons), variable dimensions, or a combination of any of the preceding characteristics.
  • the methods of the invention can be applied based on the specified lens configuration, even if this configuration cannot be physically manufactured.
  • the methods of creating composite images from component images as described herein are based on the innovative use of geometric transformations, such as mapping, scaling, flipping, etc., and do not require a physical lens to be created for this purpose.
  • Providing a software-based lens configuration, or specification, allows a user to implement desired software lenses. Some or all of the characteristics of the software lens could then be used by a software decoder to decode the encoded composite image to produce decoded versions of the component images used to create the composite image.
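Because such a lens exists only as data, a software decoder can consume a declarative configuration like the following sketch (the field names are hypothetical; the point is that element shapes and frequencies that cannot be molded as physical optics remain expressible and decodable in software):

```python
from dataclasses import dataclass

@dataclass
class SoftwareLensSpec:
    """A declarative 'software lens' configuration for a software decoder.

    Shape, frequency and angle are plain data, so configurations that
    could not be physically manufactured (irregular shapes, variable
    frequency) can still drive the decoding of a composite image.
    """
    element_shape: str          # e.g. "line", "ellipse", "cross", "polygon"
    frequency_lpi: float        # nominal element frequency
    angle_deg: float            # orientation of the element pattern
    variable_frequency: bool = False

# A lens spec with elliptical elements and variable frequency -- easy to
# express in software, impractical as a molded lenticular sheet.
lens = SoftwareLensSpec("ellipse", 175.0, 22.5, variable_frequency=True)
```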
  • the decoder also may include a rendering device that is configured to decode the latent images.
  • the rendering device may include a lens configured in any shape and having lens elements arranged in any pattern.
  • the lens may include lens elements arranged in a symmetrical pattern, an asymmetrical pattern, or a combination of both.
  • the lens may further include lens elements that are arranged in a regular pattern or an irregular pattern.
  • the rendering device may include a lenticular lens having lenticules arranged in a straight line pattern, a wavy line pattern, a zig-zag pattern, a concentric ring pattern, a cross-line pattern, an aligned dot pattern, an offset dot pattern, a grad frequency pattern, a target pattern, a herring pattern or any other pattern.
  • the rendering device may include lenses, such as a fly's eye lens, having a multidimensional pattern of lens elements.
  • the multidimensional pattern may include a straight line pattern, a square pattern, a shifted square pattern, a honey-comb pattern, a wavy line pattern, a zigzag pattern, a concentric ring pattern, a cross-line pattern, an aligned dot pattern, an offset dot pattern, a grad frequency pattern, a target pattern, a herring pattern or any other pattern. Examples of some of these decoding lenses are illustrated in Figure 22.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Color Television Systems (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Editing Of Facsimile Originals (AREA)
PCT/US2012/027175 2011-03-01 2012-03-01 A method for encoding and simultaneously decoding images having multiple color components WO2012118912A2 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
EP12751950.2A EP2681692A4 (en) 2011-03-01 2012-03-01 METHOD FOR SIMULTANEOUSLY ENCODING AND DECODING IMAGES HAVING MULTIPLE COLOR COMPONENTS
CN201280021269.7A CN103782306A (zh) 2011-03-01 2012-03-01 对具有多个颜色分量的图像进行编码及同时解码的方法
SG2013065289A SG192997A1 (en) 2011-03-01 2012-03-01 A method for encoding and simultaneously decoding images having multiple color components
AU2012223367A AU2012223367B2 (en) 2011-03-01 2012-03-01 A method for encoding and simultaneously decoding images having multiple color components
MX2013009995A MX2013009995A (es) 2011-03-01 2012-03-01 Un metodo para codificar y decodificar simultaneamente imagenes que tienen varios componentes de color.
CA2828807A CA2828807A1 (en) 2011-03-01 2012-03-01 A method for encoding and simultaneously decoding images having multiple color components
ECSP13012903 ECSP13012903A (es) 2011-03-01 2013-09-27 Un método para codificar y decodificar simultáneamente imágenes que tienen varios componentes de color

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201161447886P 2011-03-01 2011-03-01
US201161447878P 2011-03-01 2011-03-01
US61/447,878 2011-03-01
US61/447,886 2011-03-01
US13/270,738 US8682025B2 (en) 2010-10-11 2011-10-11 Method for constructing a composite image incorporating a hidden authentication image
US13/270,739 2011-10-11

Publications (2)

Publication Number Publication Date
WO2012118912A2 true WO2012118912A2 (en) 2012-09-07
WO2012118912A3 WO2012118912A3 (en) 2014-04-17

Family

ID=46758480

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/027175 WO2012118912A2 (en) 2011-03-01 2012-03-01 A method for encoding and simultaneously decoding images having multiple color components

Country Status (8)

Country Link
EP (1) EP2681692A4 (es)
CN (1) CN103782306A (es)
AU (1) AU2012223367B2 (es)
CA (1) CA2828807A1 (es)
EC (1) ECSP13012903A (es)
MX (1) MX2013009995A (es)
SG (1) SG192997A1 (es)
WO (1) WO2012118912A2 (es)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729824A (zh) * 2013-12-17 2014-04-16 北京智谷睿拓技术服务有限公司 信息交互方法及信息交互系统
CN103745434A (zh) * 2013-12-17 2014-04-23 北京智谷睿拓技术服务有限公司 信息交互方法及信息交互系统
US9836857B2 (en) 2013-12-17 2017-12-05 Beijing Zhigu Rui Tuo Tech Co., Ltd. System, device, and method for information exchange
DE102019132529A1 (de) * 2019-11-29 2021-06-02 Schreiner Group Gmbh & Co. Kg Verfahren zum Extrahieren, Auslesen und/oder Ausgeben einer in einer bedruckten und/oder visuell gestalteten Oberfläche verborgenen Information

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104268613A (zh) * 2014-09-30 2015-01-07 上海宏盾防伪材料有限公司 一种基于色彩平衡的图文信息隐藏及重现的结构及其方法

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6859534B1 (en) * 1995-11-29 2005-02-22 Alfred Alasia Digital anti-counterfeiting software method and apparatus
US5772250A (en) * 1997-04-11 1998-06-30 Eastman Kodak Company Copy restrictive color-reversal documents
US6104812A (en) * 1998-01-12 2000-08-15 Juratrade, Limited Anti-counterfeiting method and apparatus using digital screening
US7162035B1 (en) * 2000-05-24 2007-01-09 Tracer Detection Technology Corp. Authentication method and system
AU785178B2 (en) * 2000-09-15 2006-10-12 Trustcopy Pte Ltd. Optical watermark
GB0202962D0 (en) * 2002-02-08 2002-03-27 Ascent Systems Software Ltd Security printing
AU2003902810A0 (en) * 2003-06-04 2003-06-26 Commonwealth Scientific And Industrial Research Organisation Method of encoding a latent image
US7916343B2 (en) * 2003-07-07 2011-03-29 Commonwealth Scientific And Industrial Research Organisation Method of encoding a latent image and article produced
WO2007127862A2 (en) * 2006-04-26 2007-11-08 Document Security Systems, Inc. Solid-color embedded security feature
US8224019B2 (en) * 2007-05-22 2012-07-17 Xerox Corporation Embedding information in document blank space

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP2681692A4 *

Also Published As

Publication number Publication date
WO2012118912A3 (en) 2014-04-17
CN103782306A (zh) 2014-05-07
EP2681692A4 (en) 2015-06-03
CA2828807A1 (en) 2012-09-07
AU2012223367B2 (en) 2016-09-22
AU2012223367A1 (en) 2013-09-19
SG192997A1 (en) 2013-10-30
MX2013009995A (es) 2013-12-06
ECSP13012903A (es) 2014-02-28
EP2681692A2 (en) 2014-01-08

Similar Documents

Publication Publication Date Title
US9092872B2 (en) System and method for creating an animation from a plurality of latent images encoded into a visible image
US8792674B2 (en) Method for encoding and simultaneously decoding images having multiple color components
US9275303B2 (en) Method for constructing a composite image incorporating a hidden authentication image
RU2176823C2 (ru) Software-implemented digital anti-counterfeiting method and device for implementing the method
JP4915883B2 (ja) Anti-counterfeit printed matter, method for producing the same, and recording medium storing software for creating halftone dot data
PL191448B1 (pl) Method of creating an encoded image grid in computerized digital technology
ZA200502978B (en) Authentication of documents and articles by moiré patterns.
AU2012223367B2 (en) A method for encoding and simultaneously decoding images having multiple color components
JP5799425B2 (ja) Printed matter capable of authenticity determination, apparatus and method for producing the same, and apparatus and method for authenticating such printed matter
CN102203823A (zh) Method for decoding on an electronic device
JP5768236B2 (ja) Anti-counterfeit printed matter, and apparatus and method for producing anti-counterfeit printed matter
JP6991514B2 (ja) Anti-counterfeit printed matter and method for creating data for anti-counterfeit printed matter
Amidror New print-based security strategy for the protection of valuable documents and products using moiré intensity profiles
EP1477026B1 (en) A method of incorporating a secondary image into a primary image and subsequently revealing said secondary image
JP5990791B2 (ja) Anti-counterfeit printed matter
JP7284950B2 (ja) Reading device, reading method, and reading software for latent-image printed matter
JP5678364B2 (ja) Printed matter capable of authenticity determination, apparatus and method for producing the same, and apparatus and method for authenticating such printed matter
KR100562073B1 (ko) Anti-counterfeiting method and apparatus using digital screening

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12751950

Country of ref document: EP

Kind code of ref document: A2

ENP Entry into the national phase in:

Ref document number: 2828807

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: MX/A/2013/009995

Country of ref document: MX

ENP Entry into the national phase in:

Ref document number: 2012223367

Country of ref document: AU

Date of ref document: 20120301

Kind code of ref document: A