GB2555395A - Virtual lenticular lens - Google Patents

Virtual lenticular lens

Info

Publication number
GB2555395A
Authority
GB
United Kingdom
Prior art keywords
image
digital representation
supplementary
angle
input image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1617907.9A
Other versions
GB2555395B (en)
GB201617907D0 (en)
Inventor
Jose Guadalupe Calixto Gortarez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nautilus GB Ltd
Original Assignee
Nautilus GB Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nautilus GB Ltd filed Critical Nautilus GB Ltd
Priority to GB1617907.9A (GB2555395B)
Publication of GB201617907D0
Priority to PCT/GB2017/053170 (WO2018078339A1)
Publication of GB2555395A
Application granted
Publication of GB2555395B
Legal status: Active
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B42 BOOKBINDING; ALBUMS; FILES; SPECIAL PRINTED MATTER
    • B42D BOOKS; BOOK COVERS; LOOSE LEAVES; PRINTED MATTER CHARACTERISED BY IDENTIFICATION OR SECURITY FEATURES; PRINTED MATTER OF SPECIAL FORMAT OR STYLE NOT OTHERWISE PROVIDED FOR; DEVICES FOR USE THEREWITH AND NOT OTHERWISE PROVIDED FOR; MOVABLE-STRIP WRITING OR READING APPARATUS
    • B42D 25/00 Information-bearing cards or sheet-like structures characterised by identification or security features; Manufacture thereof
    • B42D 25/30 Identification or security features, e.g. for preventing forgery
    • B42D 25/342 Moiré effects
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 3/00 Simple or compound lenses
    • G02B 3/0006 Arrays
    • G02B 3/0037 Arrays characterized by the distribution or form of lenses
    • G02B 3/005 Arrays characterized by the distribution or form of lenses arranged along a single direction only, e.g. lenticular sheets
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07D HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D 7/00 Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D 7/06 Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency, using wave or particle radiation
    • G07D 7/12 Visible light, infrared or ultraviolet radiation
    • G07D 7/128 Viewing devices
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07D HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D 7/00 Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D 7/20 Testing patterns thereon
    • G07D 7/2008 Testing patterns thereon using pre-processing, e.g. de-blurring, averaging, normalisation or rotation
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07D HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D 7/00 Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D 7/20 Testing patterns thereon
    • G07D 7/202 Testing patterns thereon using pattern matching
    • G07D 7/207 Matching patterns that are created by the interaction of two or more layers, e.g. moiré patterns
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00838 Preventing unauthorised reproduction
    • H04N 1/00883 Auto-copy-preventive originals, i.e. originals that are designed not to allow faithful reproduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/04 Scanning arrangements, i.e. arrangements for the displacement of active reading or reproducing elements relative to the original or reproducing medium, or vice versa
    • H04N 1/19 Scanning arrangements using multi-element arrays
    • H04N 1/195 Scanning arrangements, the array comprising a two-dimensional array or a combination of two-dimensional arrays
    • H04N 1/19594 Scanning arrangements using a television camera or a still video camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 1/32101 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 1/32144 Display, printing, storage or transmission of additional information embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N 1/32149 Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N 1/32203 Spatial or amplitude domain methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2201/00 General purpose image data processing
    • G06T 2201/005 Image watermarking
    • G06T 2201/0051 Embedding of the watermark in the spatial domain
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2201/00 General purpose image data processing
    • G06T 2201/005 Image watermarking
    • G06T 2201/0065 Extraction of an embedded watermark; Reliable detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 2201/00 Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
    • H04N 2201/32 Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N 2201/3201 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N 2201/3225 Display, printing, storage or transmission of additional information of data relating to an image, a page or a document
    • H04N 2201/3233 Display, printing, storage or transmission of additional information of data relating to an image, a page or a document of authentication information, e.g. digital signature, watermark

Abstract

A method of processing an image to emulate an effect of viewing the image through a physical lenticular lens. The method comprises: receiving a first digital representation of an input image (13, fig.8), the input image comprising a primary image and secondary information encoded therein; generating a supplementary image (17, fig.8) by translating each pixel of the first digital representation by a predetermined amount in a predetermined direction; and generating a first output image (18, fig.8) by combining the supplementary image with the digital representation of the input image. The output image may be a decoded version of the input image. The method may include receiving a series of digital representations of the encoded image, each being captured at a respective distance from, and at one of a set of different angles with respect to, the input image; at a particular distance and angle the secondary information will be visible in the output image of the digital representation. The lenticular frequency of the virtual lenticular lens is adjusted by moving the camera closer to or further away from the input image.

Description

(71) Applicant(s): Nautilus GB Limited, Unit 3, Stiltz Building, Ledson Road, Roundthorne Industrial Estate, MANCHESTER, M23 9GP, United Kingdom
(72) Inventor(s): Jose Guadalupe Calixto Gortarez
(51) INT CL: G06T 1/00 (2006.01)
(56) Documents Cited: US 20140334665 A; US 20130258410 A; US 20090003646 A1; US 20070076868 A1; US 20050237577 A
(58) Field of Search: INT CL G02B, G06K, G06T, H04N. Other: EPODOC, WPI, TXTE, INTERNET
(74) Agent and/or Address for Service: Appleyard Lees IP LLP, Clare Road, HALIFAX, West Yorkshire, HX1 2HY, United Kingdom
(54) Title of the Invention: Virtual lenticular lens
Abstract Title: Virtual lenticular lens
(57) Abstract: as reproduced above.
[Representative drawing: FIG. 5]
At least one drawing originally filed was informal and the print reproduced here is taken from a later filed formal copy. This print incorporates corrections made under Section 117(1) of the Patents Act 1977.
[Drawings, sheets 1/8 to 8/8: FIG. 1; FIG. 2; FIG. 3a; FIG. 3b; FIG. 4; FIG. 5 (steps 501 to 504); FIG. 6 (original image and simplified image); FIG. 7; FIG. 8; FIG. 9a; FIG. 9b]
Application No. GB1617907.9
RTM
Date: 29 March 2017
Intellectual Property Office
The following terms are registered trade marks and should be read as such wherever they occur in this document:
Android (Page 13)
iOS (Page 13)
OpenGL (Pages 13 & 17)
Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
VIRTUAL LENTICULAR LENS
The present invention relates to a method of processing an image to emulate an effect of viewing the image through a physical lenticular lens.
A lenticular lens is a commonly used optical device. Referring to Figure 2, there is illustrated a perspective view of a typical lenticular lens 2. The lenticular lens 2 is formed from a transparent substrate, which may be made from, for example, plastic. The lenticular lens 2 further has an array of cylindrical lenticules 3 formed on one side. The thickness of the lenticular lens 2 is sufficient to support the cylindrical lenticules 3 with a particular configuration and focal length. The cylindrical lenticules 3 are regularly spaced at a frequency (or frequencies) defined by the number of lenticules per inch. The frequency of the cylindrical lenticules 3 is referred to as the “lenticular frequency” below.
The lenticular lens 2 has an optical effect such that when viewed from different angles, different images under the lens 2 are magnified and become visible. Due to this optical effect, lenticular lenses are widely used for:

(i) decoding anti-counterfeit security features, as counterfeiting of products is a serious problem for both producers of such products and the general public;

(ii) emulating depth, thereby creating a three-dimensional effect from a two-dimensional image;

(iii) causing an appearance of change in an image in dependence on the viewing angle; and

(iv) causing an appearance of movement in an image in dependence on the viewing angle.
One way to help prevent counterfeiting is to provide optical security features on products or documents to indicate that they are authentic. Such optical security features are difficult to reproduce and may be referred to as anti-counterfeit images.
In the field of anti-counterfeit images, it is known to encode information (e.g., one or more secondary images) into a primary image to create an encoded anti-counterfeit image in which only the primary image is visible to the naked eye. The secondary image is hidden within the primary image and can be viewed only with an optical decoder such as the lenticular lens 2. Further, the secondary image may be hidden in such a way that prevents a counterfeiter discovering the presence of the secondary image, and in such a way that makes it difficult to copy the encoded anti-counterfeit image and to retain the hidden secondary image. Accordingly, counterfeit copies of the image are detectable by the absence of the secondary image within a copy of the primary image.
One method for hiding secondary images within a visible primary image is described in EP1477026. EP1477026 teaches a method in which one or more secondary images are hidden within a primary image using a vectorial grid comprising an array of parallel lines, or an array of dots defining parallel lines. The parallel lines of the vectorial grid have particular characteristics. In order to ascertain the presence of the secondary image within an image, a user views the image through a lenticular lens having corresponding characteristics to those of the vectorial grid used to hide the secondary image.
In the method of EP1477026, the frequency of the vectorial grid used to encode an anti-counterfeit image may be selected from a large number of frequencies, making it more difficult for a counterfeiter to ascertain the nature of the grid that was used to create the anti-counterfeit image, and therefore making it more difficult for counterfeits to be created. While the number of possible vectorial grids that can be made is extremely large, for each vectorial grid used, a corresponding lenticular lens 2 needs to be created. Parties wishing to use the method of EP1477026 may therefore restrict the number of vectorial grids that they use in order to reduce the number of corresponding lenticular lenses that must be manufactured. Such a reduction in the number of vectorial grids used to create encoded anti-counterfeit images may reduce the difficulty with which counterfeiters can successfully forge an anti-counterfeit image.
For additional security, parties wishing to use the encoded anti-counterfeit images may prefer a customised frequency of the periodic pattern, or a set of customised frequencies, to be associated with, for example, different product lines. Accordingly, a large number of lenticular lenses 2 having matching lenticular frequencies need to be ordered and manufactured. This increases the cost and the time delay for the parties wishing to use the anti-counterfeit images.
For the applications (ii) to (iv) described above, a lenticular lens is generally required to be printed on the surface of related images. This, however, prevents any modifications to the characteristics of the lenticular lens and limits the design freedom of the user.
It is an object of the present invention, among others, to obviate or mitigate at least one of the problems outlined above.
According to a first aspect described herein, there is provided a method of processing an image to emulate an effect of viewing the image through a physical lenticular lens. The method comprises receiving a first digital representation of an input image, the input image comprising a primary image and secondary information encoded therein; generating a supplementary image based on the first digital representation, wherein generating the supplementary image comprises translating each pixel of the first digital representation by a first predetermined amount in a first predetermined direction; and generating a first output image by combining the supplementary image with the digital representation of the input image, wherein the first output image emulates an effect of viewing the input image through a physical lenticular lens having a first lenticular frequency.
The processing of the first aspect emulates an optical effect of viewing the input image through a lenticular lens. For example, the optical effect may reveal the presence of the secondary information. The secondary information may be, for example, a second image encoded within the primary image, or may be an optical effect, such as providing the primary image with a three-dimensional appearance. In this way, the output image generated by the method of the first aspect appears to a user as if the user is viewing the input image through a physical lenticular lens. Accordingly, the method provides a virtual lenticular lens that may be used as a replacement for physical lenticular lenses. This provides greater flexibility and convenience to the user in applications where physical lenticular lenses are commonly used.
Combining the supplementary image with the first digital representation may comprise subtracting the supplementary image from the first digital representation.
The first predetermined direction may have components in each of two axes of the digital representation. For example, the digital representation may define horizontal and vertical axes, and the first predetermined direction may be a vector having components along each of the horizontal and vertical axes.
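By way of illustration only, the core operation described above can be sketched in a few lines of Python. This is a minimal sketch, not the claimed implementation: it assumes the digital representation is held as a two-dimensional greyscale numpy array, and the helper name apply_virtual_lens and the default offsets are hypothetical.

    import numpy as np

    def apply_virtual_lens(image: np.ndarray, dx: int = 1, dy: int = -1) -> np.ndarray:
        """Emulate viewing `image` through a lenticular lens (illustrative only).

        `image` is a 2-D greyscale array; (dx, dy) is the predetermined amount
        and direction of translation, with components along both axes as
        described above.
        """
        # Translate each pixel by the predetermined amount in the predetermined
        # direction. np.roll wraps at the borders, which is acceptable for a
        # sketch; a production implementation might pad or crop instead.
        supplementary = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        # Combine by subtracting the supplementary image from the first
        # digital representation, clamping to the valid pixel range.
        out = image.astype(int) - supplementary.astype(int)
        return np.clip(out, 0, 255).astype(np.uint8)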
The method may be implemented by any computing device, including but not limited to, a smartphone, a tablet, a laptop, a digital camera, a server, etc.
The digital representation of the input image may be produced by any means, including digital photography and digital scanning, for example. The digital representation may be in any format, such as JPG, PNG, BMP or GIF, etc.
The method is carried out in a manner that is independent of the characteristics (e.g., the encoding parameter, the spatial frequency, etc.) of the input image. Such an encoding parameter may include an encoding frequency of the input image, for example. The method does not, therefore, require any parameter indicating the characteristics of the input image to be received or otherwise determined. Nor does the method perform calculation of any such parameter of the input image.
The first digital representation may be generated by an image capturing device when the image capturing device is at a first distance to the input image and the first output image may emulate an effect of viewing the input image through a physical lenticular lens having the first lenticular frequency, the first lenticular frequency being associated with the first distance. The term “lenticular frequency” refers to the frequency of lenticules formed on a lenticular lens.
The image capture device may be, for example, a digital camera or a scanner.
The method may further comprise receiving a second digital representation of the input image, wherein the second digital representation is generated by the image capturing device when the image capturing device is at a second distance to the input image, the second distance being different from the first distance; generating a second supplementary image based on the second digital representation, wherein generating the second supplementary image comprises translating each pixel of the second digital representation by a second predetermined amount in a second predetermined direction; and generating a second output image by combining the second supplementary image with the second digital representation of the input image. The second output image emulates an effect of viewing the input image through a physical lenticular lens having a second lenticular frequency associated with the second distance and different from the first lenticular frequency.
The first predetermined direction and the second predetermined direction may be the same direction. The second predetermined amount may be the same as the first predetermined amount.
The first and second digital representations of the input image have different levels of detail. That is, respective unit distances within each of the first and second digital representations will each correspond to a different distance within the input image. By obtaining and processing the first and second digital representations, therefore, the first and second output images emulate an optical effect of viewing the input image through two physical lenticular lenses with different lenticular frequencies. In this way, the method is able to provide a virtual lenticular lens, the lenticular frequency of which is adjusted by moving the image capturing device closer to or further away from the input image when the image capturing device generates a digital representation of the input image for use with the method.
When a digital representation of the input image is generated by an image capture device, the particular lenticular frequency which is emulated by the method is associated with the distance between the image capture device and the input image.
The second lenticular frequency may be higher than the first lenticular frequency if the second distance is smaller than the first distance, and the second lenticular frequency may be lower than the first lenticular frequency if the second distance is larger than the first distance.
The method may further comprise receiving a third digital representation of the input image, wherein the third digital representation corresponds to the first digital representation at a different level of magnification; generating a third supplementary image based on the third digital representation, wherein generating the third supplementary image comprises translating each pixel of the third digital representation by a third predetermined amount in a third predetermined direction; and generating a third output image, wherein generating the third output image comprises combining the third supplementary image with the third digital representation. The third output image emulates an effect of viewing the input image through a physical lenticular lens having a third lenticular frequency different from the first lenticular frequency.
The first and third digital representations of the input image have different levels of detail since they have different magnifications. Adjusting a magnification of the digital representation has a similar effect to adjusting a distance between an image capturing device and the input image. By processing digital representations having different magnification levels, the method is able to provide a virtual lenticular lens with a range of lenticular frequencies. The particular lenticular frequency provided by the method is associated with the magnification level of the digital representation.
The third lenticular frequency may be higher than the first lenticular frequency if the third digital representation is obtained by increasing the magnification of the first digital representation, and the third lenticular frequency may be lower than the first lenticular frequency if the third digital representation is obtained by decreasing the magnification of the first digital representation.
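As an illustration of the magnification variant, a third digital representation might be derived from the first by resampling and then passed through the same pipeline. The sketch below reuses the hypothetical apply_virtual_lens helper from the earlier sketch and uses scipy.ndimage.zoom purely as an example resampling routine; the array and zoom factors are stand-ins.

    import numpy as np
    from scipy.ndimage import zoom

    # Stand-in for the first digital representation (e.g., one captured frame).
    first_representation = np.random.randint(0, 256, (480, 640)).astype(np.uint8)

    # Increasing the magnification emulates a lens with a higher lenticular
    # frequency; decreasing it emulates a lower lenticular frequency.
    third_representation = zoom(first_representation, 1.5, order=1)
    third_output = apply_virtual_lens(third_representation)
    lower_frequency_output = apply_virtual_lens(zoom(first_representation, 0.75, order=1))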
The first digital representation may be generated by the image capturing device when the image capturing device is at a first angle with respect to the input image and the first output image may emulate an effect of viewing the input image through a physical lenticular lens positioned at a second angle with respect to the input image, where the second angle is associated with the first angle. The second angle may be determined by the first angle and the direction of translation of the pixels in generating the supplementary image (e.g., the first, second, third, etc., predetermined directions).
The method may further comprise receiving a fourth digital representation of the input image, wherein the fourth digital representation is generated by the image capturing device when the image capturing device is at a third angle with respect to the input image, the third angle different from the first angle; generating a fourth supplementary image based on the fourth digital representation, wherein generating the fourth supplementary image comprises translating each pixel of the fourth digital representation by a fourth predetermined amount in a fourth predetermined direction; and generating a fourth output image, the fourth output image generated by combining the fourth supplementary image with the fourth digital representation of the input image, wherein the fourth output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a fourth angle with respect to the input image, the fourth angle different from the second angle.
By obtaining and processing digital representations which are oriented at different angles with respect to the input image, each output image generated by the method emulates an effect of viewing the input image through a physical lenticular lens oriented at a different angle. In this way, the method is able to provide a virtual lenticular lens, the orientation of which is adjusted by rotating the image capturing device with respect to the input image.
A difference between the third angle and the first angle may be equivalent to a difference between the fourth angle and the second angle.
The method may further comprise receiving a fifth digital representation of the input image, the fifth digital representation obtained by rotating the first digital representation by a rotation angle; generating a fifth supplementary image based on the fifth digital representation, wherein generating the fifth supplementary image comprises translating each pixel of the fifth digital representation by a fifth predetermined amount in a fifth predetermined direction; and generating a fifth output image, the fifth output image generated by combining the fifth supplementary image with the fifth digital representation of the input image. The fifth output image may emulate an effect of viewing the input image through a physical lenticular lens positioned at a fifth angle with respect to the input image, the fifth angle different from the second angle by an amount equal to the rotation angle.
The first and fifth digital representations of the input image are orientated at different angles with respect to the input image. This difference in orientation is similar in effect to rotating the image capturing device during capture of the digital representations. The output images generated by the method therefore emulate an effect of viewing the input image through a physical lenticular lens oriented at different angles.
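The rotation variant can be sketched in the same way, continuing the previous sketch: the existing representation is rotated before the pipeline is applied. scipy.ndimage.rotate is used here only as an example, and the angle is arbitrary.

    from scipy.ndimage import rotate

    # Rotating the first digital representation by `rotation_angle` degrees and
    # re-applying the pipeline emulates rotating a physical lenticular lens by
    # the same amount with respect to the input image.
    rotation_angle = 15.0
    fifth_representation = rotate(first_representation, rotation_angle,
                                  reshape=False, order=1)
    fifth_output = apply_virtual_lens(fifth_representation)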
The method may further comprise displaying the output image on a display device. The predetermined amount may be determined by a screen parameter of the display device. For example, the predetermined amount may be determined by the display resolution of the display device. In particular, the predetermined amount may be determined by the number of pixels in at least one of the horizontal (or X) and vertical (or Y) dimensions of the display device.

The predetermined amount may be a constant value for a particular display device. For different display devices, the predetermined amount may be different in order to support the screen parameter of each device.
The method may further comprise generating a sixth supplementary image based on the first digital representation, wherein the sixth supplementary image is generated by translating each pixel of the first digital representation by a sixth predetermined amount in a sixth predetermined direction. The sixth predetermined amount may be the additive inverse of the first predetermined amount. The first predetermined direction may be an opposite direction to the sixth predetermined direction.
The output image may be generated by combining the supplementary image and the sixth supplementary image with the first digital representation.
The sixth supplementary image improves the quality with which the output image emulates the optical effect of viewing the input image through a physical lenticular lens.
The output image may be generated by subtracting the first supplementary image and the sixth supplementary image from the digital representation of the input image.
Receiving the first digital representation may comprise simplifying a sixth digital representation.
Simplifying the sixth digital representation is advantageous for reducing the computing complexity of the generation of supplementary and output images, such that the method can be run more quickly on computers having limited computing power.
Simplifying the sixth digital representation may comprise converting the sixth digital representation to a greyscale image. Simplifying the sixth digital representation may comprise performing edge detection on the greyscale image to generate an edge image. Simplifying the sixth digital representation may comprise performing a thresholding operation on the edge image which may be a gradient magnitude image.
The first digital representation may be a frame of a video of the input image. The method may comprise generating a supplementary image and generating a first output image for each of a plurality of frames of the video.
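Applied to video, the same two steps simply run once per frame. The sketch below assumes OpenCV (cv2) as the capture and display API and reuses the hypothetical apply_virtual_lens helper; it is illustrative only, not the claimed implementation.

    import cv2  # OpenCV, used here only as an example capture/display API.

    capture = cv2.VideoCapture(0)  # e.g., the device camera
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        output = apply_virtual_lens(grey)  # helper sketched earlier
        cv2.imshow("virtual lenticular lens", output)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    capture.release()
    cv2.destroyAllWindows()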
According to a second aspect described herein, there is provided a method of decoding an encoded image comprising a primary image and a secondary image incorporated into the primary image using at least one encoding parameter. The method comprises receiving a series of digital representations of the encoded image, each digital representation in the series of digital representations being captured by an image capturing device at a respective distance to the input image and at one of a set of different angles with respect to the input image; and processing each of the digital representations to generate an output image according to the method of the first aspect. If one of the digital representations is captured by the image capturing device when the image capturing device is at a particular distance and a particular angle to the input image that corresponds to the at least one encoding parameter, the secondary image is visible in the output image of that digital representation.
The method provides a virtual lenticular lens for decoding an encoded image which comprises a primary image and a secondary image incorporated into the primary image using at least one encoding parameter. The lenticular frequency of the virtual lenticular lens is adjusted by moving the image capturing device closer to or further away from the input image when the image capturing device generates a digital representation of the input image for use with the method. The orientation of the virtual lenticular lens is adjusted by rotating the image capturing device with respect to the input image when the image capturing device generates a digital representation of the input image for use with the method. In this way, the user is spared the inconvenience of preparing a plurality of physical lenticular lenses for decoding the encoded image.
It will be appreciated that the method need never determine the at least one encoding parameter in order to reveal the secondary image.
The method may be implemented by any computing device, such as a smartphone, a tablet, a laptop, a digital camera, a server, etc.
Receiving a series of digital representations may comprise adjusting a magnification of the first digital representation and rotating the first digital representation by different rotation angles to generate each of the series of digital representations.
The method provides a virtual lenticular lens for decoding an encoded image which comprises a primary image and a secondary image incorporated into the primary image using at least one encoding parameter. The lenticular frequency of the virtual lenticular lens is adjusted by adjusting the zoom level of an existing digital representation of the input image. The orientation of the virtual lenticular lens is adjusted by rotating the existing digital representation of the input image. In this way, the user is spared the inconvenience of preparing a plurality of physical lenticular lenses for decoding the encoded image.
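A sketch of this search over zoom levels and rotation angles, under the same assumptions as the earlier sketches (numpy arrays, the hypothetical apply_virtual_lens helper, scipy.ndimage for resampling), might look as follows. Whether the secondary image has become visible is judged by the user viewing each candidate output; no encoding parameter is ever computed.

    import numpy as np
    from scipy.ndimage import rotate, zoom

    def candidate_outputs(representation: np.ndarray,
                          zoom_levels=(0.8, 1.0, 1.25, 1.5),
                          angles=range(0, 180, 5)):
        """Yield output images over a grid of magnifications and rotations."""
        for level in zoom_levels:
            scaled = zoom(representation, level, order=1)
            for angle in angles:
                turned = rotate(scaled, angle, reshape=False, order=1)
                yield level, angle, apply_virtual_lens(turned)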
The method may be implemented by any computing device, such as a smartphone, a tablet, a laptop, a digital camera, a server, etc.
According to a fourth aspect described herein, there is provided a computer readable storage medium having computer readable instructions recorded thereon, the instructions configured to cause a processor to carry out a method according to any one of the first to third aspects.
According to a fifth aspect, there is provided a virtual lenticular lens system comprising a processor; and a memory storing computer readable instructions for causing the processor to perform the method of any one of the first to third aspects.
The virtual lenticular lens system may further comprise an image capture device arranged to capture the digital representation of input images.
The virtual lenticular lens system may further comprise a display device arranged to display the output image.
It will be appreciated that aspects of the present invention can be implemented in any convenient way including by way of suitable hardware and/or software. For example, a device arranged to implement the invention may be created using appropriate hardware components. Alternatively, a programmable device may be programmed to implement embodiments of the invention. The invention therefore also provides suitable computer programs for implementing aspects of the invention. Such computer programs can be carried on suitable carrier media including tangible carrier media (e.g., hard disks, CD ROMs and so on) and intangible carrier media such as communications signals.
It will be appreciated that features presented in the context of one aspect of the invention in the preceding and following description can equally be applied to other aspects of the invention.
An embodiment of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Figure 1 is a schematic illustration of a computing device;
Figure 2 is a perspective view of a prior art lenticular lens;
Figure 3a is an illustration of a process of using the lenticular lens to decode an encoded anti-counterfeit image with encoded information hidden therein;
Figure 3b is a schematic illustration of the process shown in Figure 3a;
Figure 4 is a schematic illustration of a virtual lenticular lens system;
Figure 5 is a flowchart showing processing carried out to decode an encoded anti-counterfeit image using the virtual lenticular lens system of Figure 4;
Figure 6 is a flowchart showing processing carried out to simplify a received image in the processing of Figure 5;
Figure 7 is a schematic illustration of how the virtual lenticular lens system of Figure 4 emulates different lenticular frequencies;
Figure 8 is an illustration of images output by the processing of Figures 5 and 6;
Figure 9a is an illustration of a process of using a smartphone decoder, which has the virtual lenticular lens system of Figure 4, to decode an encoded anti-counterfeit image;
Figure 9b is a schematic illustration of the process shown in Figure 9a.
Referring to Figure 1, there is shown a schematic illustration of components of a computer 1 which can be used to process images in accordance with some embodiments of the present invention. It can be seen that the computer 1 comprises a processor 1a which is configured to read and execute instructions stored in a volatile memory 1b which takes the form of a random access memory. The processor 1a may be a single processor or a group of processors, including but not limited to, a central processing unit (CPU) and/or a graphics processing unit (GPU). The volatile memory 1b stores instructions for execution by the processor 1a and data used by those instructions. For example, during processing, the images to be processed may be loaded into and stored in the volatile memory 1b.
The computer 1 further comprises non-volatile storage 1c, which may be, for example, a solid state drive (SSD). The images and vectorial grids to be processed may be stored on the non-volatile storage 1c. The computer 1 further comprises an I/O interface 1d to which are connected peripheral devices used in connection with the computer 1. More particularly, a display 1e is configured so as to display output from the computer 1. The display 1e may, for example, display representations of the images being processed. The display 1e may also be an input device in the form of a touch screen. Also connected to the I/O interface 1d is a keyboard 1f for use as an input device to interact with the computer 1. The keyboard 1f may be a floating keyboard formed on a touch screen. A camera 1g connected to the I/O interface 1d allows images to be acquired by the user and stored in the memories 1b and 1c for processing by the computer 1.
The computer 1 may be, for example, a laptop, a tablet, a smartphone, or any other suitable electronic device having a structure according to Figure 1.
An encoded anti-counterfeit image is typically made by encoding information (e.g., a secondary image) into a primary image using a regularized periodic pattern which has a frequency. One of the encoding methods is described in EP1477026 and a vectorial grid is used therein as the regularized periodic pattern. However, it will be appreciated that other encoding methods and other types of regularized periodic pattern may be used.
In order to use the lenticular lens 2 to successfully decode an encoded anti-counterfeit image, the lenticular frequency of the lenticular lens 2 is selected to match the frequency of the periodic pattern which is used to encode the anti-counterfeit image. The frequency of the periodic pattern used to encode the anti-counterfeit image is referred to as the “encoding frequency” of the anti-counterfeit image below.
Figure 3a illustrates the process of using the lenticular lens 2 to decode an encoded anti-counterfeit image 9a. The encoded anti-counterfeit image 9a is printed on a substrate 7a as shown in Figure 3a(a). To decode the image 9a, the lenticular lens 2 is held over the image 9a with the cylindrical lenticules 3 facing up (i.e., the cylindrical lenticules 3 not contacting the substrate 7a). As shown in Figure 3a(b), the lenticular lens 2 may be out of focus if it is not flat against the substrate 7a. When the lenticular lens 2 is out of focus, the encoded anti-counterfeit image 9a is seen through the lens 2 as a blurred image 10a. When the lenticular lens 2 is in focus, the cylindrical lenticules 3 allow a viewer to see samples of the encoded anti-counterfeit image 9a taken at intervals determined by the lenticular frequency. The cylindrical lenticules 3 magnify the samples and human vision interpolates them into a continuous picture. When the lenticular lens 2 is rotated so as to be orientated at a particular angle (as shown in Figure 3a(c)), this causes encoded information of the anti-counterfeit image 9a that has the same frequency as the lenticular frequency of the lens 2 to be sampled and magnified, thus becoming visible through the lens 2. The action of the lenticular lens 2 is essentially to assemble periodic samples of the encoded anti-counterfeit image 9a into a reconstruction of the encoded information. As shown in the example of
Figure 3a(c), the encoded anti-counterfeit image 9a is seen through the lens 2 as an image 11a, and the image 11a reveals the encoded information (i.e., letters “BHATIA”). It may additionally be observed that the revealed encoded information seen through the lenticular lens 2 has a three-dimensional visual effect due to the optical effects of the lenticular lens 2. As shown in Figure 3a(d), the encoded information disappears from an image 12a seen through the lenticular lens 2, as soon as the lens 2 is rotated away from the particular angle.
Figure 3b is a schematic illustration of the process shown in Figure 3a. The sub-figures of Figure 3b correspond to those of Figure 3a, respectively. Figure 3b schematically depicts how a lenticular lens 2 is used to reveal encoded information from an encoded anti-counterfeit image 9b which is provided on a substrate 7b. Images 10b, 11b and 12b correspond to images 10a, 11a and 12a, respectively, and schematically illustrate the images seen through the lens 2 when the lens 2 is out of focus, revealing the encoded information, or misaligned with the particular encoding angle of the anti-counterfeit image 9b.
More than one set of encoded information may be hidden within a primary image using the same regularized periodic pattern. The multiple sets of encoded information may be encoded into the primary image along different encoding angles. To reveal each set of encoded information, the lenticular lens 2 is rotated so as to be orientated at a particular angle corresponding to the encoding angle of the respective encoded information.
In the image decoding process described above, the lenticular lens 2 is, in general terms, a frequency filter. Encoded information is embedded in a visible primary image at a specific frequency. The lenticular lens 2 that has the same lenticular frequency filters out the primary image and reveals the hidden encoded information.
There is now described an alternative to the use of lenticular lenses of the type shown in Figure 2 in decoding anti-counterfeit images. More specifically, a virtual lenticular lens system 4 for decoding anti-counterfeit images without the use of any physical lenticular lens is described. The virtual lenticular lens system 4 reveals hidden encoded information by recreating the optical effects of the lenticular lens 2.
As shown in Figure 4, the virtual lenticular lens system 4 includes two main units: a user interface unit 5 and an image processing unit 6. The virtual lenticular lens system 4 may be programmed to operate on any operating system, including but not limited to, Android and iOS. The image processing unit 6 may be programmed in any suitable computer language, including but not limited to, OpenGL ES. The image processing unit 6 may run entirely on a GPU. It will be appreciated that the image processing unit 6 may instead run on a CPU or on a CPU and a GPU collectively.
In an example arrangement, the virtual lenticular lens system 4 is implemented in a handheld electronic device, such as a smartphone or a tablet, which has a structure according to the computer 1. In general terms, in use, the camera 1g of the handheld electronic device may provide a video of an encoded anti-counterfeit image to the image processing unit 6 in real time. The video is a sequence of images or frames. The frame rate of the video may be, for example, 30 frames per second. The image processing unit 6 processes the images contained in the video and then the user interface unit 5 acts to display the processed images, for example on the display 1e of the handheld electronic device.
During acquisition of the video, the user may be required to rotate the camera 1g and move the handheld electronic device closer to, or further away from, the encoded anti-counterfeit image, in order to find the encoded information. The rotation of the camera 1g has the same function as rotation of the lenticular lens 2 described above, i.e., to match the encoding angle of the hidden encoded information. The movement of the handheld electronic device closer to or further away from the encoded anti-counterfeit image may be considered as equivalent to selecting a lenticular lens that has a lenticular frequency matching the encoding frequency of the anti-counterfeit image (i.e., the line frequency of the regularized periodic pattern used to encode information into the primary image).
Therefore, when the handheld electronic device is moved to a position that has a particular distance to the encoded anti-counterfeit image and when the camera 1g is rotated to a correct angle, the user is able to see an output image which reveals the encoded information on the display 1e of the handheld electronic device in real time. Operation of the virtual lenticular lens system 4 is described in more detail below.
Figure 5 is a flow chart of exemplary processing that may be carried out by the image processing unit 6. This flow chart illustrates how the image processing unit 6 processes each image received from the camera 1g.
At Step 501, the image processing unit 6 receives an image 13 (Figure 8) of an encoded anti-counterfeit image.
The encoded anti-counterfeit image may be displayed on a printed medium, or may be displayed on a digital display medium. Generally, it is expected that the encoded anti-counterfeit image may be printed on a substrate (e.g., paper, plastic, etc.) using any suitable printing method, such as flexography, lithography, inkjet printing, etc.
The image received at Step 501 is a digital representation of the encoded anti-counterfeit image. The received image 13 may be extracted from a video of the encoded anti-counterfeit image captured by the camera 1g in real time as described above.
At Step 502, the image processing unit 6 simplifies the received image 13 to generate a simplified image 14 (Figure 8). The simplification of the received image 13 may be desirable to reduce the computing complexity of the following processing steps (discussed below), such that the processing can be run on a computer having limited computing power (for example, a handheld computing device such as a smartphone or a tablet) with greater speed. It will be appreciated that Step 502 is an optional step and as such may not be present in every embodiment of the invention.
Figure 6 shows an exemplary embodiment of processes that may be carried out at Step 502 to simplify the received image 13. At Step 601, the image processing unit 6 converts the received image 13 from a coloured format to greyscale. This may be done by applying different weighting factors to the colour channels of the received image. If the received image 13 is defined in a colour space (for example, based on the RGB model), each pixel of the image has respective colour components (for example, red, green and blue). Based on the theory of luminosity, where green is the main contributor to the intensity of colour luminance, red less, and blue the least, a luminosity algorithm may be applied at Step 601 to calculate the luminance Y of each pixel, for example, based upon the equation Y = 0.21R + 0.72G + 0.07B (where R, G, and B are the luminance values of the red, green and blue components of the pixel, respectively). Subsequently, each pixel in the colour space is converted to a corresponding pixel in greyscale that has the same luminance Y.
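A direct sketch of the Step 601 conversion, assuming the received image is an H x W x 3 RGB numpy array (the function name is illustrative):

    import numpy as np

    def to_greyscale(rgb: np.ndarray) -> np.ndarray:
        """Convert an H x W x 3 RGB image to greyscale using the luminosity
        weights given above (Y = 0.21R + 0.72G + 0.07B)."""
        r = rgb[..., 0].astype(float)
        g = rgb[..., 1].astype(float)
        b = rgb[..., 2].astype(float)
        y = 0.21 * r + 0.72 * g + 0.07 * b
        return y.astype(np.uint8)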
It will be appreciated that the received image 13 may be in other colour spaces and different luminance conversion algorithms may be used. It will further be appreciated that Step 601 is optional and can be omitted if the originally received image is in greyscale already.
At Step 602, edge detection is applied to the greyscale image obtained at Step 601. Generally, Step 602 aims to identify points in the image at which the image brightness changes sharply or, in other words, has discontinuities. Those identified points are then organized into a set of curved line segments (i.e., edges). Edge detection will be well known to the skilled person and as such is not described in detail herein. In general, however, in one exemplary embodiment, the edge detection may use the Sobel operator to estimate the gradient magnitudes at each pixel of the greyscale image. The Sobel operator consists of a pair of convolution kernels Gx, Gy, which are used to calculate the gradients at each pixel in the X and Y directions, respectively. The gradient magnitude at the pixel is then calculated based on a combination of the gradients in the X and Y directions. The gradient magnitudes at each pixel of the greyscale image are then grouped in a matrix to form a gradient magnitude image. The gradient magnitude image shows sets of edges which indicate the positions where the brightness of the greyscale image changes sharply.
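A sketch of the Step 602 computation described above, assuming a greyscale numpy array and using scipy.ndimage.convolve as the convolution routine:

    import numpy as np
    from scipy.ndimage import convolve

    # The pair of Sobel convolution kernels for the X and Y directions.
    GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    GY = GX.T

    def sobel_magnitude(grey: np.ndarray) -> np.ndarray:
        """Estimate the gradient magnitude at each pixel of a greyscale image."""
        grey = grey.astype(float)
        gx = convolve(grey, GX)  # gradient in the X direction
        gy = convolve(grey, GY)  # gradient in the Y direction
        return np.hypot(gx, gy)  # combined gradient magnitude image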
It will be appreciated that other suitable edge detection techniques may be employed at Step 602 in place of the Sobel operator. Other examples of edge detection techniques which may be used include, for example, the Roberts cross operator, the Prewitt operator, the Laplacian of Gaussian, or the Canny edge detector.
It will also be appreciated that the edge detection at Step 602 may output a general edge image. The edge image includes, but is not limited to, the gradient magnitude image described above.
At Step 603, thresholding is applied to the edge image obtained at Step 602, to reduce noise and improve the contrast of the edge image. If the value of the edge image at a particular pixel is lower than a predetermined threshold, the value of that pixel will be set to zero. Otherwise, the value of the pixel will be kept unchanged. The threshold may be set as, for example, 60% of the maximum value of all the pixels in the edge image. It will be appreciated, however, that other thresholds may be used. Lower thresholds will result in the “detection” of more edges, and the result will be increasingly susceptible to noise and to detecting edges of irrelevant features in the image. Conversely, a high threshold may miss subtle edges, or result in fragmented edges. The actual threshold chosen will therefore depend upon the particular requirements of particular applications.
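The Step 603 thresholding rule translates directly, again assuming a numpy edge image; the 60% figure is the example threshold given above:

    import numpy as np

    def threshold_edges(edge_image: np.ndarray, fraction: float = 0.6) -> np.ndarray:
        """Zero every pixel below `fraction` of the image maximum; keep the
        remaining pixels unchanged."""
        result = edge_image.copy()
        result[result < fraction * edge_image.max()] = 0
        return result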
After the simplified image 14 is generated at Step 603, the image processing unit 6 proceeds to Step 503. In this step, at least one supplementary image is generated based upon the simplified image 14 obtained at Step 502 (or the image 13 received at Step 501 where simplification is not performed). In general terms, the at least one supplementary image is generated by offsetting each of the pixels in the simplified image 14 by a predetermined amount. A set of supplementary images may be created using an offset based upon a minimum displacement value “MDV”. With reference to Figure 8, a first supplementary image 15 may be generated by shifting the location of each pixel (i.e., translating each pixel) of the simplified image 14 by an offset of (X=+MDV, Y=-MDV). A second supplementary image 16 may be generated by shifting the location of each pixel (i.e., translating each pixel) of the simplified image 14 by an offset of (X=-MDV, Y=+MDV). That is, the offset used for generating the second supplementary image 16 has the same absolute value but a different sign when compared to that used for generating the first supplementary image 15 in each of the X and Y directions. In other words, in generating the first and second supplementary images 15, 16, the location of each pixel of the simplified image 14 is shifted by the same distance but in opposite directions. A final supplementary image 17 may be generated by way of a straightforward combination of the first and second supplementary images 15, 16. By “straightforward combination”, it is meant that pixels of the final supplementary image are the summed value of the pixels of the first and second supplementary images at the same location.
It will be appreciated that Step 503 may generate a single supplementary image only. For example, only one of the first and second supplementary images 15, 16 may be generated and output at Step 503. Accordingly, the final supplementary image may be one of the first and second supplementary images 15, 16. In particular, it has been found that one supplementary image is sufficient for the image processing unit 6 to emulate a lenticular lens. However, having a second (or further) supplementary image may be helpful for improving the quality of the output image of the image processing unit 6.
The offsets, i.e., (+MDV, -MDV) and (-MDV, +MDV), may be related to the screen specifications of the handheld electronic device, and more particularly, to the screen resolution of the handheld electronic device. In an example embodiment, the image processing of Step 503 may be done using OpenGL ES fragment shaders, which process each pixel “on the fly” before it is displayed on the screen. The screen coordinates in OpenGL run from -1 to 1 in the X and Y directions. For an average screen resolution of 1920 x 1080 of a handheld electronic device (e.g., a smartphone), this translates into a minimum displacement of 1/1080 = 0.000926 in the X direction of the screen and 1/1920 = 0.000521 in the Y direction of the screen. As, in the described example, pixels are translated by the same distance in both the X and the Y dimensions, these values may provide a Minimum Displacement Value (MDV) of 0.000926. It will be appreciated that other suitable offset values may be used in generating the supplementary images. It will further be appreciated that pixels may be translated by different distances in the X and Y dimensions, or may even be translated in only one of the X and Y dimensions.
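A sketch of Step 503 follows. It works in whole pixels on a numpy array rather than in the normalised OpenGL coordinates described above (so the minimum displacement is one pixel); the function name and the sign conventions of the offsets are illustrative.

    import numpy as np

    def supplementary_images(simplified: np.ndarray, mdv_px: int = 1):
        """Build the two supplementary images with offsets (X=+MDV, Y=-MDV) and
        (X=-MDV, Y=+MDV), then sum them into the final supplementary image 17."""
        first = np.roll(np.roll(simplified, -mdv_px, axis=0), +mdv_px, axis=1)   # (X=+MDV, Y=-MDV)
        second = np.roll(np.roll(simplified, +mdv_px, axis=0), -mdv_px, axis=1)  # (X=-MDV, Y=+MDV)
        final = first.astype(int) + second.astype(int)  # straightforward combination
        return first, second, final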
For a particular handheld electronic device, the offset between the simplified image and each supplementary image may be constant. For different handheld electronic devices, the offset value may be different in order to support the particular screen specifications of each device. In some embodiments, therefore, the processing of Figure 5 may include a determination of the resolution of the handheld electronic device, using, e.g., APIs provided by the operating system of the electronic device or any other appropriate method.
At Step 504, encoded information is reconstructed for display on the screen 1e of the handheld electronic device 1. In particular, the displayed image 18 may be generated by subtracting the final supplementary image 17 from the simplified image 14. By “subtracting”, it is meant that each pixel of the displayed image 18 has the value obtained by subtracting the pixel of the final supplementary image 17 from the pixel of the simplified image 14 at the same location. As shown in Figure 8, the displayed image 18 has an embossed three-dimensional effect generated around the edges of displayed content. This appearance is similar to a decoded image seen through the lenticular lens 2.
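Continuing the sketch given earlier, Step 504 then reduces to a per-pixel subtraction (the clip to a displayable 0-255 range is an assumption; the text does not specify how out-of-range values are handled):

```python
# Step 504: subtract the final supplementary image (17) from the simplified
# image (14) pixel by pixel; `simplified` and `final_supplementary` are the
# arrays from the earlier sketch.
displayed = np.clip(simplified - final_supplementary, 0, 255).astype(np.uint8)
```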
The image 13 received at step 501 was obtained by a user holding the handheld electronic device, with the camera 1g at a certain distance from the encoded anti-counterfeit image and oriented at a certain angle. If the distance of the camera 1g corresponds to the encoding frequency of the anti-counterfeit image and the angle corresponds to the encoding angle of the anti-counterfeit image, the user is able to see the revealed encoded information on the screen of the handheld electronic device. Otherwise, the encoded information will not be revealed. If the encoded information is not visible, the user may rotate the camera 1g and/or move the handheld electronic device with respect to the anti-counterfeit image in order to reveal the encoded information.
The virtual lenticular lens system 4 may comprise instructions stored in the non-volatile storage 1c. When the instructions are executed by the processor 1a, the instructions cause the processor 1a to perform the above processing steps.
As described above, movement of the handheld electronic device closer to or further away from the encoded anti-counterfeit image is equivalent to changing the lenticular frequency of the virtual lenticular lens provided by the virtual lenticular lens system 4. The reason for this is now described in more detail. Generally, it is considered that the optimal distance between the eyes of a user and the screen of a handheld electronic device is approximately 30 centimetres. The optimal distance between the camera 1g of the handheld electronic device 1 and an object may be approximately 8 to 10 centimetres. For example, smartphone manufacturers often calibrate cameras to quickly focus on objects at a distance of 8 centimetres. Further, at around a distance of 8 to 10 centimetres, a flash of a smartphone camera can provide extra illumination to improve the contrast and the visual acuity of the image acquired.
The distance between an object and a camera changes the level of detail discernible by the camera sensor. With reference to Figure 7, it has been found that as the distance B between the camera 1g of the handheld electronic device 1 and the encoded anti-counterfeit image 19 increases (i.e., the distance A between the viewer 20 and the handheld electronic device 1 decreases), the minimal discernible distance D between line patterns on the anti-counterfeit image 19 increases. As such, reducing the distance A is equivalent to reducing the lenticular frequency f of a lenticular lens. Conversely, if the distance B between the camera 1g of the handheld electronic device 1 and the image 19 decreases (i.e., distance A increases), the minimal discernible distance D between line patterns on the anti-counterfeit image 19 decreases. This is equivalent to increasing the lenticular frequency f of a lenticular lens.
Therefore, the change in distance between the encoded anti-counterfeit image 19 and the camera 1g allows the image processing unit 6 to emulate a large range of lenticular frequencies. It will be appreciated that a particular distance between the encoded anti-counterfeit image 19 and the camera 1g may achieve a lenticular frequency matching the encoding frequency of the anti-counterfeit image 19.
Further, since the at least one supplementary image 15, 16 generated at Step 503 is at a predetermined offset with respect to the simplified image 14 (as well as the received image 13), generation of the supplementary image 15 or 16 emulates use of a virtual lenticular lens oriented at a fixed angle with respect to the X direction of the screen 1e of the handheld electronic device 1. For the example offsets of (+0.000926, -0.000926) and (-0.000926, +0.000926) as described above, this is equivalent to a virtual lenticular lens oriented at an angle of 45° with respect to the X direction of the screen 1e. If the X direction of the screen 1e of the handheld electronic device 1 is oriented at an angle of α° with respect to the anti-counterfeit image 19, then the virtual lenticular lens is oriented at an angle equal to α+45° with respect to the anti-counterfeit image in this example. Therefore, rotating the handheld electronic device 1 with respect to the anti-counterfeit image 19 causes a rotation of the virtual lenticular lens provided by the image processing unit 6. When the handheld electronic device 1 is at the particular distance from the anti-counterfeit image which matches the encoding frequency, if the handheld electronic device 1 is further rotated to an angle that matches the encoding angle of the anti-counterfeit image 19, the encoded information will be revealed on the screen of the handheld electronic device.
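The 45° figure can be checked directly: offsets of equal magnitude in X and Y define a displacement direction at arctan(MDV/MDV) = 45° to the screen's X axis (a short check in Python; the value of α is an arbitrary example, not taken from the text):

```python
import math

mdv = 0.000926
# Equal-magnitude X and Y offsets give a 45-degree displacement direction.
lens_angle = math.degrees(math.atan2(mdv, mdv))
alpha = 10.0  # example orientation of the screen's X axis to the image
print(lens_angle, alpha + lens_angle)  # 45.0 55.0
```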
Figure 9a illustrates an example showing how a user uses a handheld electronic device, such as a smartphone, to reveal encoded information hidden in an encoded anti-counterfeit image. The encoded anti-counterfeit image is provided on a substrate 26a.
In Figure 9a, each of the panels 22a, 23a, 24a and 25a is a “screenshot” from a smartphone (not shown). The smartphone is an example of the computer 1 and has the virtual lenticular lens system 4 executing thereon. One hand of the user holds the substrate 26a, and the other hand of the user holds the smartphone (not shown) to scan the encoded anti-counterfeit image using a camera of the smartphone. The virtual lenticular lens system 4 processes the captured image according to the processing of Figure 5 and displays the processed images on the screen of the smartphone. When the smartphone is at an incorrect distance to the substrate 26a, as shown in Figure 9a(a), a processed image 27a of the encoded anti-counterfeit image is “blank” on the screen of the smartphone and no encoded information is revealed. As the distance between the camera of the smartphone and the substrate 26a is adjusted to a proper value and the angle between the substrate 26a and the camera is properly oriented, as illustrated in Figure 9a(b), a processed image 28a of the encoded anti-counterfeit image is displayed on the screen 23a of the smartphone and the hidden encoded information (i.e., letters “BHATIA”) is revealed in the processed image 28a.
The hidden encoded information disappears as the user further adjusts the angle between the substrate 26a and the camera, as shown in Figure 9a(c). In particular, at this time, a processed image 29a of the encoded anti-counterfeit image is “blank” as shown on the screen 24a of the smartphone and no encoded information is revealed. The encoded anti-counterfeit image may contain additional encoded information, which is encoded at an angle of approximately 100 degrees with respect to the encoded information shown in Figure 9a(b). As shown in Figure 9a(d), when the user rotates the substrate 26a to around 100 degrees with respect to the position of the substrate 26a in Figure 9a(b), a processed image 30a of the encoded anti-counterfeit image is displayed on the screen 25a of the smartphone and the additional encoded information is revealed in the processed image 30a.
Figure 9b is a schematic illustration of the process shown in Figure 9a. The sub-figures of Figure 9b correspond to those of Figure 9a, respectively. Figure 9b schematically depicts how a smartphone is used to reveal encoded information from an encoded anti-counterfeit image which is provided on a substrate 26b. Images 22b to 25b correspond to images 22a to 25a, respectively, and schematically illustrate the screenshots of the smartphone. Images 27b to 30b correspond to images 27a to 30a, respectively, and schematically illustrate the processed images of the anti-counterfeit image displayed on the screen of the smartphone.
When comparing Figure 9 and Figure 3, it will be appreciated that use of a smartphone providing the virtual lenticular lens system 4 to reveal hidden encoding information of an encoded anti-counterfeit image is very similar to the way in which the lenticular lens 2 is used to reveal the hidden encoding information. However, it will be appreciated that, when using the physical lenticular lens 2, in order to decode a new anti-counterfeit image which has been encoded using a regularized periodic pattern having a different line frequency, a new lenticular lens whose lenticular frequency matches that line frequency of the periodic pattern is required. This causes inconvenience to the user. This problem, however, does not exist when the user uses the virtual lenticular lens system 4 described herein. The user need only adjust the distance between the encoded anti-counterfeit image and the camera 1g of the smartphone (while maintaining the encoded anti-counterfeit image in focus) and recapture an image of the encoded anti-counterfeit image, and the processing steps described above with reference to Figure 5 will reveal the hidden encoding information. This is because, as described above, the change in the distance between the encoded anti-counterfeit image and the camera allows the image processing unit 6 to emulate a virtual lenticular lens having different lenticular frequencies.
As described above, in an encoded anti-counterfeit image, encoding information is hidden within a primary image using a regularized periodic pattern which has a particular line frequency (i.e., the encoding frequency of the anti-counterfeit image). Therefore the encoded anti-counterfeit image has a characteristic related to this encoding frequency. In using the virtual lenticular lens system 4 to reveal encoded information in an encoded anti-counterfeit image, however, the virtual lenticular lens system 4 does not require any parameters (e.g., either input by the user via the keyboard 1f or obtained from any database) indicating characteristics of the encoded anti-counterfeit image itself. In particular, the virtual lenticular lens system 4 does not require knowledge of any encoding parameter of the encoded anti-counterfeit image, particularly the encoding frequency. Further, the virtual lenticular lens system 4 does not attempt to calculate or extract any encoding parameters of the encoded anti-counterfeit image, including the encoding frequency. To decode the encoded anti-counterfeit image, the virtual lenticular lens system 4 applies the processing illustrated in Figure 5 to the anti-counterfeit image. The processing steps of Figure 5 are independent of the characteristics of the encoded anti-counterfeit image, in particular the encoding frequency or any kind of spatial frequency characteristic of the image. The virtual lenticular lens system 4 can therefore decode anti-counterfeit images having different encoding frequencies and/or different encoding information.
That is, the virtual lenticular lens system 4 emulates the optical effects provided by a physical lenticular lens to an image and therefore provides a digital form of the physical lenticular lens - a virtual lenticular lens. The lenticular frequency of the virtual lenticular lens is adjustable by adjusting a distance between the camera 1g of the smartphone (or other device on which the virtual lenticular lens system 4 is operating) and the encoded anti-counterfeit image. The orientation of the virtual lenticular lens is adjustable by rotating the camera 1g of the smartphone with respect to the encoded anti-counterfeit image.
As described above, a video of the encoded anti-counterfeit image is provided by the camera 1g to the virtual lenticular lens system 4. After the virtual lenticular lens system 4 finishes processing of a frame of the video, the processed image will be displayed on the display 1e of the handheld electronic device. If the processing speed of the image processing unit 6 is fast enough, the user is able to see the processed image almost in real time while the video of the anti-counterfeit image is being taken by the camera 1g. After a processed frame is displayed, the data generated for that frame may be destroyed. No video or image of the encoded anti-counterfeit image need be stored anywhere else on the handheld electronic device.
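The per-frame behaviour described above might be organised as in the following sketch (all names are hypothetical; `process_frame` stands in for the Figure 5 pipeline):

```python
from typing import Callable, Iterable
import numpy as np

def run_virtual_lenticular_lens(
    frames: Iterable[np.ndarray],
    process_frame: Callable[[np.ndarray], np.ndarray],
    show: Callable[[np.ndarray], None],
) -> None:
    """Apply the Figure 5 pipeline to each frame of the video and display it.

    Nothing is persisted: each processed frame is displayed and then
    discarded, so no video or image of the anti-counterfeit image needs
    to be stored on the device.
    """
    for frame in frames:
        show(process_frame(frame))  # per-frame data dies with this iteration
```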
Alternatively, it will be appreciated that the virtual lenticular lens system 4 may receive only a single image of the anti-counterfeit image and may not require a video to be provided. In general terms, the received image is a digital representation of the encoded anti-counterfeit image. It will be appreciated that the received image may be produced by digital photography or digital scanning. Additionally, the received image may be an original digital file of the encoded anti-counterfeit image in a format such as, for example, JPG, PNG, BMP or GIF.
The image processing unit 6 may be arranged to process the single received image of the encoded anti-counterfeit image so as to reveal the hidden encoding information. In order to ensure that the encoding information is visible in the output image, the image processing unit 6 may adjust the magnification of the single received image and rotate the single received image through different angles, either automatically or in response to user instructions.
It is understood that adjusting the magnification of the received image by the image processing unit 6 has similar effects to the user changing the distance between the encoded anti-counterfeit image and the camera 1g. For example, magnifying the received image is equivalent to having a smaller distance between the encoded anti-counterfeit image and the camera 1g. Changing the magnification of the received image similarly allows the image processing unit 6 to emulate a large range of lenticular frequencies. However, it will be appreciated that the quality (i.e., pixel density) of the image will decrease as the magnification factor increases and, therefore, there is an upper limit on the magnification factor.
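Emulating a change of capture distance by digital magnification could be sketched as a centre crop followed by a resize (the use of OpenCV and the linear interpolation are assumptions; as noted above, large factors quickly exhaust the pixel density of the crop):

```python
import cv2
import numpy as np

def magnify(image: np.ndarray, factor: float) -> np.ndarray:
    """Centre-crop by 1/factor and resize back to the original size.

    Digitally emulates moving the camera closer by `factor`. Beyond a
    modest factor, the pixel density of the crop limits output quality,
    which is the upper limit on magnification noted above.
    """
    h, w = image.shape[:2]
    ch, cw = max(1, int(h / factor)), max(1, int(w / factor))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = image[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```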
It is further understood that rotating the received image by the image processing unit 6 has similar effects to rotating the handheld electronic device by the user.
In this way, the user is only required to take one image of the anti-counterfeit image, and is not required to move the handheld electronic device closer to or away from the anti-counterfeit image or to rotate the handheld electronic device. However, it will be appreciated that taking a real-time video and adjusting the handheld electronic device while the video is taken provides more information for decoding the image than a single still image, and allows the user to use the handheld electronic device in a more intuitive manner.
It will be appreciated that the virtual lenticular lens system 4 may be arranged such that the virtual lenticular lens provided thereby has a lenticular frequency which is adjustable both by changing a distance between the camera 1g of the handheld device and the encoded anti-counterfeit image, and by changing the magnification of a particular received image. In this way, the virtual lenticular lens system 4 is able to emulate a wider range of lenticular frequencies. The user is also provided with more freedom in operating the virtual lenticular lens system 4.
It will be appreciated that the processing of the virtual lenticular lens system 4 (in particular the image processing unit 6) may be performed by an external or remote computing device such as a server. For example, a server may receive a video or a single image from a handheld electronic device such as a smartphone, a tablet or a camera, and process the video or the single image. After the processing is done, the server may transmit the processed image(s) to a display for the user to inspect. If the processing speed of the server and the communication speed between the server and the handheld electronic device are fast enough, the user may be able to see the processed image almost in real time while the video or the single image of the anti-counterfeit image is being taken by the handheld electronic device.
As the virtual lenticular lens system 4 provides a digital form of a physical lenticular lens - a virtual lenticular lens - it will be appreciated that the virtual lenticular lens system 4 is not limited to decoding encoded anti-counterfeit images. In particular, the virtual lenticular lens system 4 can also be used in other common applications of lenticular lenses, such as creating a three-dimensional effect from a two-dimensional image, or making images that appear to change or move depending on the viewing angle.
Embodiments of the present invention have been described above and it will be appreciated that the embodiments described are in no way limiting. Indeed, many variations to the described embodiments will be apparent to an ordinary skilled person, and such variations are within the scope of the present invention as set out in the accompanying claims.

Claims (26)

CLAIMS:
1. A method of processing an image to emulate an effect of viewing the image through a physical lenticular lens, the method comprising:
receiving a first digital representation of an input image, the input image comprising a primary image and secondary information encoded therein;
generating a supplementary image based on the first digital representation, wherein generating the supplementary image comprises translating each pixel of the first digital representation by a first predetermined amount in a first predetermined direction; and generating a first output image by combining the supplementary image with the digital representation of the input image, wherein the first output image emulates an effect of viewing the input image through a physical lenticular lens having a first lenticular frequency.
2. A method according to claim 1, wherein:
the first digital representation is generated by an image capturing device when the image capturing device is at a first distance to the input image; and the first output image emulates an effect of viewing the input image through a physical lenticular lens having the first lenticular frequency, the first lenticular frequency being associated with the first distance.
3. A method according to claim 2, further comprising:
receiving a second digital representation of the input image, wherein the second digital representation is generated by the image capturing device when the image capturing device is at a second distance to the input image, the second distance being different from the first distance;
generating a second supplementary image based on the second digital representation, wherein generating the second supplementary image comprises translating each pixel of the second digital representation by a second predetermined amount in a second predetermined direction; and generating a second output image by combining the second supplementary image with the second digital representation of the input image, wherein the second output image emulates an effect of viewing the input image through a physical lenticular lens having a second lenticular frequency associated with the second distance and different from the first lenticular frequency.
4. A method according to claim 3, wherein the second lenticular frequency is higher than the first lenticular frequency if the second distance is smaller than the first distance, and the second lenticular frequency is lower than the first lenticular frequency if the second distance is larger than the first distance.
5. A method according to any of claims 1 to 4, further comprising:
receiving a third digital representation of the input image, wherein the third digital representation corresponds to the first digital representation at a different level of magnification;
generating a third supplementary image based on the third digital representation, wherein generating the third supplementary image comprises translating each pixel of the third digital representation by a third predetermined amount in a third predetermined direction; and generating a third output image, wherein generating the third output image comprises combining the third supplementary image with the third digital representation, wherein the third output image emulates an effect of viewing the input image through a physical lenticular lens having a third lenticular frequency different from the first lenticular frequency.
6. A method according to claim 5, wherein the third lenticular frequency is higher than the first lenticular frequency if the third digital representation is obtained by increasing the magnification of the first digital representation, and the third lenticular frequency is lower than the first lenticular frequency if the third digital representation is obtained by decreasing the magnification of the first digital representation.
7. A method according to any of claims 2 to 6, wherein:
the first digital representation is generated by the image capturing device when the image capturing device is at a first angle with respect to the input image, the first output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a second angle with respect to the input image, and the second angle is associated with the first angle.
8. A method according to claim 7, the method further comprising:
receiving a fourth digital representation of the input image, wherein the fourth digital representation is generated by the image capturing device when the image capturing device is at a third angle with respect to the input image, the third angle different from the first angle;
generating a fourth supplementary image based on the fourth digital representation, wherein generating the fourth supplementary image comprises translating each pixel of the fourth digital representation by a fourth predetermined amount in a fourth predetermined direction; and generating a fourth output image, the fourth output image generated by combining the fourth supplementary image with the fourth digital representation of the input image, wherein the fourth output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a fourth angle with respect to the input image, the fourth angle different from the second angle.
9. A method according to claim 8, wherein:
a difference between the third angle and the first angle is equivalent to a difference between the fourth angle and the second angle.
10. A method according to any of claims 8 to 9, the method further comprising: receiving a fifth digital representation of the input image, the fifth digital representation obtained by rotating the first digital representation by a rotation angle;
generating a fifth supplementary image based on the fifth digital representation, wherein generating the fifth supplementary image comprises translating each pixel of the fifth digital representation by a fifth predetermined amount in a fifth predetermined direction; and generating a fifth output image, the fifth output image generated by combining the fifth supplementary image with the fifth digital representation of the input image, wherein the fifth output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a fifth angle with respect to the input image, the fifth angle different from the second angle by an amount equal to the rotation angle.
11. A method according to any preceding claim, further comprising: displaying the first output image on a display device.
12. A method according to claim 11, wherein the predetermined amount is determined by a screen parameter of the display device.
13. A method according to any preceding claim, further comprising generating a sixth supplementary image based on the first digital representation,
wherein the sixth supplementary image is generated by translating each pixel of the first digital representation by a sixth predetermined amount in a sixth predetermined direction.
14. A method according to claim 13, wherein the first output image is generated by combining the supplementary image and the sixth supplementary image with the first digital representation.
15. A method according to claim 14, wherein the output image is generated by subtracting the supplementary image and the sixth supplementary image from the digital representation of the input image.
16. A method according to any preceding claim, wherein receiving the first digital representation comprises simplifying a sixth digital representation to generate the first digital representation.
17. A method according to claim 16, wherein simplifying the sixth digital representation comprises converting the sixth digital representation to a greyscale image.
18. A method according to claim 17, wherein simplifying the sixth digital representation comprises performing edge detection on the greyscale image to generate an edge image.
19. A method according to claim 18, wherein simplifying the sixth digital representation comprises performing a thresholding operation on the edge image.
20. A method according to any preceding claim, wherein the first digital representation is a frame of a video of the input image; and the method comprises performing the generating of a supplementary image and generating of a first output image for each of a plurality of frames of the video.
21. A method of decoding an encoded image comprising a primary image and a secondary image incorporated into the primary image using at least one encoding parameter, the method comprising:
receiving a series of digital representations of the encoded image, each digital representation in the series of digital representations being captured by an image capturing device at a respective distance to the input image and at one of a set of different angles with respect to the input image; and
processing each of the digital representations to generate an output image according to the method of any one of claims 1 to 20;
wherein, if one of the digital representations is captured by the image capturing device when the image capturing device is at a particular distance and a particular angle to the input image that corresponds to the at least one encoding parameter, the secondary image is visible in the output image of that digital representation.
22. A method according to claim 21, wherein receiving a series of digital representations comprises adjusting a magnification of the first digital representation and rotating the first digital representation by different rotation angles to generate each of the series of digital representations.
23. A computer readable storage medium having computer readable instructions recorded thereon, the instructions configured to cause a processor to carry out a method according to any of claims 1 to 22.
24. A virtual lenticular lens system comprising: a processor; and a memory storing computer readable instructions for causing the processor to perform the method of any one of claims 1 to 22.
25. A virtual lenticular lens system according to claim 24, further comprising: an image capture device arranged to capture the digital representation of input images.
26. A virtual lenticular lens system according to claim 24 or 25, further comprising: a display device arranged to display the output image.
AMENDMENTS TO THE CLAIMS HAVE BEEN FILED AS FOLLOWS
CLAIMS:
1. A method of processing an image to emulate an effect of viewing the image through a physical lenticular lens, the method comprising:
receiving a first digital representation of an input image, the input image comprising a primary image and secondary information encoded therein;
generating a supplementary image based on the first digital representation, wherein generating the supplementary image comprises translating each pixel of the first digital representation by a first predetermined amount in a first predetermined direction; and generating a first output image by combining the supplementary image with the digital representation of the input image, wherein the first output image emulates an effect of viewing the input image through a physical lenticular lens having a first lenticular frequency.
2. A method according to claim 1, wherein:
the first digital representation is generated by an image capturing device when the image capturing device is at a first distance to the input image; and the first output image emulates an effect of viewing the input image through a physical lenticular lens having the first lenticular frequency, the first lenticular frequency being associated with the first distance.
3. A method according to claim 2, further comprising:
receiving a second digital representation of the input image, wherein the second digital representation is generated by the image capturing device when the image capturing device is at a second distance to the input image, the second distance being different from the first distance;
generating a second supplementary image based on the second digital representation, wherein generating the second supplementary image comprises translating each pixel of the second digital representation by a second predetermined amount in a second predetermined direction; and generating a second output image by combining the second supplementary image with the second digital representation of the input image, wherein the second output image emulates an effect of viewing the input image through a physical lenticular lens having a second lenticular frequency associated with the second distance and different from the first lenticular frequency.
4. A method according to claim 3, wherein the second lenticular frequency is higher than the first lenticular frequency if the second distance is smaller than the first distance, and the second lenticular frequency is lower than the first lenticular frequency if the second distance is larger than the first distance.
5. A method according to any of claims 1 to 4, further comprising:
receiving a third digital representation of the input image, wherein the third digital representation corresponds to the first digital representation at a different level of magnification;
generating a third supplementary image based on the third digital representation, wherein generating the third supplementary image comprises translating each pixel of the third digital representation by a third predetermined amount in a third predetermined direction; and generating a third output image, wherein generating the third output image comprises combining the third supplementary image with the third digital representation, wherein the third output image emulates an effect of viewing the input image through a physical lenticular lens having a third lenticular frequency different from the first lenticular frequency.
6. A method according to claim 5, wherein the third lenticular frequency is higher than the first lenticular frequency if the third digital representation is obtained by increasing the magnification of the first digital representation, and the third lenticular frequency is lower than the first lenticular frequency if the third digital representation is obtained by decreasing the magnification of the first digital representation.
7. A method according to any preceding claim, wherein:
the first digital representation is generated by the image capturing device when the image capturing device is at a first angle with respect to the input image, the first output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a second angle with respect to the input image, and the second angle is associated with the first angle.
8. A method according to claim 7, the method further comprising:
receiving a fourth digital representation of the input image, wherein the fourth digital representation is generated by the image capturing device when the image capturing device is at a third angle with respect to the input image, the third angle different from the first angle;
generating a fourth supplementary image based on the fourth digital representation, wherein generating the fourth supplementary image comprises translating each pixel of the fourth digital representation by a fourth predetermined amount in a fourth predetermined direction; and generating a fourth output image, the fourth output image generated by combining the fourth supplementary image with the fourth digital representation of the input image, wherein the fourth output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a fourth angle with respect to the input image, the fourth angle different from the second angle.
9. A method according to claim 8, wherein:
a difference between the third angle and the first angle is equivalent to a difference between the fourth angle and the second angle.
10. A method according to any of claims 8 to 9, the method further comprising: receiving a fifth digital representation of the input image, the fifth digital representation obtained by rotating the first digital representation by a rotation angle;
generating a fifth supplementary image based on the fifth digital representation, wherein generating the fifth supplementary image comprises translating each pixel of the fifth digital representation by a fifth predetermined amount in a fifth predetermined direction; and generating a fifth output image, the fifth output image generated by combining the fifth supplementary image with the fifth digital representation of the input image, wherein the fifth output image emulates an effect of viewing the input image through a physical lenticular lens positioned at a fifth angle with respect to the input image, the fifth angle different from the second angle by an amount equal to the rotation angle.
11. A method according to any preceding claim, further comprising: displaying the first output image on a display device.
12. A method according to claim 11, wherein the first predetermined amount is determined by a screen parameter of the display device.
13. A method according to any preceding claim, further comprising generating a sixth supplementary image based on the first digital representation,
wherein the sixth supplementary image is generated by translating each pixel of the first digital representation by a sixth predetermined amount in a sixth predetermined direction.
14. A method according to claim 13, wherein the first output image is generated by combining the supplementary image and the sixth supplementary image with the first digital representation.
15. A method according to claim 14, wherein the output image is generated by subtracting the supplementary image and the sixth supplementary image from the digital representation of the input image.
16. A method according to any preceding claim, wherein receiving the first digital representation comprises simplifying a received image to generate the first digital representation.
17. A method according to claim 16, wherein simplifying the received image comprises converting the received image to a greyscale image.
18. A method according to claim 17, wherein simplifying the received image comprises performing edge detection on the greyscale image to generate an edge image.
19. A method according to claim 18, wherein simplifying the received image comprises performing a thresholding operation on the edge image.
20. A method according to any preceding claim, wherein the first digital representation is a frame of a video of the input image; and the method comprises performing the generating of a supplementary image and generating of a first output image for each of a plurality of frames of the video.
21. A method of decoding an encoded image comprising a primary image and a secondary image incorporated into the primary image using at least one encoding parameter, the method comprising:
receiving a series of digital representations of the encoded image, each digital representation in the series of digital representations being captured by an image capturing device at a respective distance to the input image and at one of a set of different angles with respect to the input image; and
processing each of the digital representations to generate an output image according to the method of any one of claims 1 to 20;
wherein, if one of the digital representations is captured by the image capturing device when the image capturing device is at a particular distance and a particular angle to the input image that corresponds to the at least one encoding parameter, the secondary image is visible in the output image of that digital representation.
22. A method according to claim 21, wherein receiving a series of digital representations comprises adjusting a magnification of the first digital representation and rotating the first digital representation by different rotation angles to generate each of the series of digital representations.
23. A computer readable storage medium having computer readable instructions recorded thereon, the instructions configured to cause a processor to carry out a method according to any of claims 1 to 22.
24. A virtual lenticular lens system comprising: a processor; and a memory storing computer readable instructions for causing the processor to perform the method of any one of claims 1 to 22.
25. A virtual lenticular lens system according to claim 24, further comprising: an image capture device arranged to capture the digital representation of input images.
GB1617907.9A 2016-10-24 2016-10-24 Virtual lenticular lens Active GB2555395B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1617907.9A GB2555395B (en) 2016-10-24 2016-10-24 Virtual lenticular lens
PCT/GB2017/053170 WO2018078339A1 (en) 2016-10-24 2017-10-20 Virtual lenticular lens

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1617907.9A GB2555395B (en) 2016-10-24 2016-10-24 Virtual lenticular lens

Publications (3)

Publication Number Publication Date
GB201617907D0 GB201617907D0 (en) 2016-12-07
GB2555395A true GB2555395A (en) 2018-05-02
GB2555395B GB2555395B (en) 2019-02-20

Family

ID=57738087

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1617907.9A Active GB2555395B (en) 2016-10-24 2016-10-24 Virtual lenticular lens

Country Status (2)

Country Link
GB (1) GB2555395B (en)
WO (1) WO2018078339A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050237577A1 (en) * 2004-04-26 2005-10-27 Alasia Alfred V System and method for decoding digital encoded images
US20070076868A1 (en) * 2005-09-30 2007-04-05 Konica Minolta Systems Laboratory, Inc. Method and apparatus for image encryption and embedding and related applications
US20090003646A1 (en) * 2007-06-29 2009-01-01 The Hong Kong University Of Science And Technology Lossless visible watermarking
US20130258410A1 (en) * 2012-03-28 2013-10-03 Seiko Epson Corporation Print apparatus and image display method
US20140334665A1 (en) * 2010-10-11 2014-11-13 Graphic Security Systems Corporation System and method for creating an animation from a plurality of latent images encoded into a visible image


Also Published As

Publication number Publication date
GB2555395B (en) 2019-02-20
GB201617907D0 (en) 2016-12-07
WO2018078339A1 (en) 2018-05-03

Similar Documents

Publication Publication Date Title
Piva An overview on image forensics
Sinha et al. Image-based rendering for scenes with reflections
US8619098B2 (en) Methods and apparatuses for generating co-salient thumbnails for digital images
Jia et al. RIHOOP: Robust invisible hyperlinks in offline and online photographs
Herzog et al. NoRM: No‐reference image quality metric for realistic image synthesis
JP7190354B2 (en) A method for identifying and/or checking the integrity of a subject
US11620730B2 (en) Method for merging multiple images and post-processing of panorama
Ng et al. Discrimination of computer synthesized or recaptured images from real images
JP2019186762A (en) Video generation apparatus, video generation method, program, and data structure
CN105791793A (en) Image processing method and electronic device
Pal et al. 3D reconstruction for damaged documents: imaging of the great parchment book
Juarez-Sandoval et al. Digital image ownership authentication via camouflaged unseen-visible watermarking
Wang et al. A new method estimating linear gaussian filter kernel by image PRNU noise
JP5878451B2 (en) Marker embedding device, marker detecting device, marker embedding method, marker detecting method, and program
CN113628091B (en) Safety information extraction method and device for electronic display screen content reproduction scene
JP6006675B2 (en) Marker detection apparatus, marker detection method, and program
GB2555395A (en) Virtual lenticular lens
Dey Image Processing Masterclass with Python: 50+ Solutions and Techniques Solving Complex Digital Image Processing Challenges Using Numpy, Scipy, Pytorch and Keras (English Edition)
Thongkor et al. Robust image watermarking for camera-captured image using image registration technique
US20230368340A1 (en) Gating of Contextual Attention and Convolutional Features
US9349085B1 (en) Methods and system to decode hidden images
CN108712570B (en) Method for enhancing live performance and reality of intelligent mobile device for detecting hidden image
Ballabeni et al. Intensity histogram equalisation, a colour-to-grey conversion strategy improving photogrammetric reconstruction of urban architectural heritage
Xu et al. On Tracing Screen Photos-A Moiré Pattern-based Approach
Fanfani et al. Restoration and Enhancement of Historical Stereo Photos through Optical Flow