WO2015106635A1 - Unobtrusive data embedding in information displays and extracting unobtrusive data from camera captured images or videos - Google Patents

Unobtrusive data embedding in information displays and extracting unobtrusive data from camera captured images or videos Download PDF

Info

Publication number
WO2015106635A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
waveform
data
component
subset
Prior art date
Application number
PCT/CN2015/000025
Other languages
French (fr)
Inventor
Wenjian Huang
Wai Ho Mow
Original Assignee
The Hong Kong University Of Science And Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Hong Kong University Of Science And Technology
Publication of WO2015106635A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/467 Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41 Structure of client; Structure of client peripherals
    • H04N21/422 Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223 Cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/812 Monomedia components thereof involving advertisement data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835 Generation of protective data, e.g. certificates
    • H04N21/8358 Generation of protective data, e.g. certificates involving watermark

Definitions

  • This disclosure relates to hiding data within an image or video, coding hidden data, and decoding hidden data.
  • a smartphone can be utilized as a barcode scanner.
  • the camera of a smartphone can be used to capture an image of a bar code embedded within an advertisement and decode the barcode to provide information to a user.
  • the practice of embedding a message, image or file within an image is useful for various purposes such as marketing, advertising or providing information to a consumer.
  • Existing technologies that facilitate embedding a message include Quick Response (QR) codes, digital watermarking, and steganography.
  • QR codes and other 2-dimensional bar codes do not embed within an image in an aesthetically appealing manner. Instead QR codes are presented on an image as an obtrusive cluster of black and white dots arranged in a square pattern. As such the QR code, when displayed in the middle of an advertisement, obfuscates and often distorts the design and symmetry of the entire image.
  • existing steganographic methods of embedding messages in images preclude the retrieval of embedded data using a smartphone or camera device.
  • the current technologies focus on rigid image processing techniques rather than dynamic coding and data communication processes. Given the issues related to embedding data within images, conventional embedded message technologies remain unsatisfactory.
  • a system comprising a memory that stores executable components; and a processor, coupled to the memory, that executes the executable components to perform operations of the system, the executable components comprising: an image enhancement component, a conversion component, and a determination component.
  • an image enhancement component of the system is configured to at least one of eliminate a texture distortion of an image comprising a set of pixels, equalize a contrast value corresponding to the image, adjust a signal to noise ratio associated with the image, adjust a focus associated with the image, or remove a blurred image segment of the image determined to be blurry according to a defined blur criterion.
  • a conversion component is configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels.
  • a determination component is configured to determine a position of the watermark within the image based on a solution to a convex problem.
  • the system can employ a matching component configured to match a pixel of the set of pixels to a transform matrix wherein the solution to the convex problem is based on a match between the pixel and the transform matrix.
  • the system can employ a modulation component configured to modulate a first subset of values of the set of binary values with a first waveform and a second subset of values of the set of binary values with a second waveform, wherein the first waveform and the second waveform are stored within a hidden watermark.
  • the system can employ a demodulation component configured to identify a set of frequency domain information associated with the first waveform and the second waveform using a matching filter.
  • a method comprising encoding, by a system comprising a processor, a message comprising a set of data with an error correction code that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data.
  • the method can comprise determining, by the system, a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, and wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
  • the method can comprise modulating, by the system, the first subset of data with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or a modification of the second modifiable image intensity value.
  • the method can comprise storing, by the system, the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the data.
  • the method can comprise embedding, by the system, the hidden watermark within the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user.
  • FIG. 1 illustrates an example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 2 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 3 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 4 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 5 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 6 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 7 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 8 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 9 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 10 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 11 illustrates another example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 12 illustrates an example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 13 illustrates an example non-limiting system for embedding and hiding data within an image or video.
  • FIG. 14 illustrates an example non-limiting system for decoding embedded and hidden data within an image or video.
  • FIG. 15 illustrates an example non-limiting method for embedding hidden data within an image or video.
  • FIG. 16 illustrates another example non-limiting method for embedding hidden data within an image or video.
  • FIG. 17 illustrates another example non-limiting method for embedding hidden data within an image or video.
  • FIG. 18 illustrates a block diagram representing an exemplary non-limiting networked environment in which the various embodiments can be implemented.
  • FIG. 19 is a block diagram representing an exemplary non-limiting computing system or operating environment in which the various embodiments may be implemented.
  • this disclosure relates to systems and methods for hiding data in images and videos.
  • the systems and methods disclosed herein describe hiding data within an image or video in an unobtrusive manner such that the data is hidden to the human eye.
  • in contrast, an obtrusive data code, such as a QR code, visibly disturbs the image in which it is placed.
  • the disclosed systems and methods allow for a user to obtain information corresponding to data embedded within an image simply by capturing a picture of the image. The user will not visually see the hidden data within the image or video, however, the user can rely on accessing the invisible, hidden data, and associated information within the image.
  • the information to be accessed can comprise a host of information such as a map, hyperlink, image, video, graphic, or interactive content.
  • the ability to display an image or video and simultaneously communicate data while not disturbing the image or video opens up several new possibilities for the mobile marketing and commerce fields.
  • the systems and methods disclosed herein facilitate the embedding of unobtrusive data in images and videos while also presenting systems and methods for decoding the unobtrusive data within the images and video via a camera (e.g., mobile device camera) .
  • the systems and methods of data communication provided herein allow for a higher capacity of data to be embedded within an image or video than traditional embedding methods.
  • the disclosed technology does not require the availability of a wireless broadband connection to communicate the data.
  • the data embedding systems and methods disclosed herein describe the capability of hiding data in an image or video such that a camera device can extract the data.
  • the data is stored in an image unobtrusively such that the hidden data can facilitate potential applications such as mobile advertisement and mobile payments.
  • the systems and methods utilize data communication and coding techniques to achieve the data hiding.
  • the systems and methods can employ unobtrusive 2-dimensional (2D) waveform modulation communication techniques to achieve data embedding within an image.
  • the systems and methods can utilize 2D Fast Fourier Transform (FFT) techniques to allow for efficient demodulation of data with respect to shifting, rotating, or geometrically distorting an image.
  • the disclosed systems and methods employ techniques such as channel encoding, image preprocessing, corner detection, distributed training symbols with iterative synchronization, demodulation, estimation, decoding, and decoding with multi-frame combining.
  • the systems and methods also describe the process of coding and decoding data to allow for the recovery of data from randomly located image frames.
  • a data hiding system 100 comprises a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a coding component 110 that encodes a set of data based on an error correction code; a generation component 130 that generates a first waveform comprising a first frequency and a second waveform comprising a second frequency, wherein the first waveform and the second waveform represent a first subset of data of the set of data and a second subset of data of the set of data respectively; a storage component 140 that stores the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the set of data; and an embedding component 150 that embeds the hidden watermark into the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user.
  • coding component 110 can encode a set of data based on an error correction code.
  • coding component 110 can encode input data into source code via ASCII or other such source code encoding methods.
  • the source code can be configured to be used as a bit stream comprising a series of bits.
  • the encoding (e.g., using coding component 110) can comprise encoding the data with an error correction code, by making use of an error correction encoder such as a low-density parity-check (LDPC) encoder to generate an LDPC code word.
  • a code word can comprise a series of bits that convey a message and numerous code words can be presented in matrix form.
  • the LDPC code can be an irregular LDPC code optimized to a particular network node comprising irregular degrees and a wide range of coding rates.
  • An irregular LDPC code can correspond to a set of nodes according to a customized distribution.
  • Each node type (e.g., variable node, constraint node, or check node) performs a decoding operation via a message passing algorithm operated between the nodes such that the LDPC code is capable of being iteratively decoded.
  • the irregular LDPC code can be illustrated with half the variable nodes having degree W and the other half having degree X, while half the constraint nodes have degree Y and the other half have degree Z, where W, X, Y, and Z are integers.
  • the irregular degree is related to the coding rate, where the coding rate can be controlled at different error correction levels (which can also represent users) .
  • the coding rate describes the proportion of a data-stream that is useful rather than redundant.
  • the coding rate can be described as k/n where for every k bits of useful information, coding component 110 generates n bits of total data, such that n-k are the redundant bits.
  • three error correction levels (e.g., each represented as a coding rate) can be provided: the low error correction level can provide a 3/4 coding rate, the medium error correction level a 1/2 coding rate, and the high error correction level a 1/4 coding rate.
  • the error correction code (e.g., employed by coding component 110) can detect and also correct single-bit errors in the code; as such, the low error correction level can correct around 3-6% bit errors, the medium error correction level around 9-12% bit errors, and the high error correction level around 15-20% bit errors.
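The coding-rate arithmetic above (k useful bits per n total bits, with n-k redundant bits) can be sketched as follows; the function name and the choice of 120 useful bits are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of the redundancy implied by the three
# error-correction levels (coding rate k/n described above).
from fractions import Fraction

def total_bits(useful_bits: int, rate: Fraction) -> int:
    """Total encoded bits n for k useful bits at coding rate k/n."""
    assert useful_bits % rate.numerator == 0, "choose k divisible by the rate numerator"
    return useful_bits // rate.numerator * rate.denominator

LEVELS = {"low": Fraction(3, 4), "medium": Fraction(1, 2), "high": Fraction(1, 4)}

for level, rate in LEVELS.items():
    n = total_bits(120, rate)
    print(f"{level}: rate {rate}, 120 useful bits -> {n} total, {n - 120} redundant")
```

As expected, the high-protection 1/4 rate quadruples the transmitted bits relative to the payload, which is the price paid for correcting the larger error fractions quoted above.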
  • LDPC code can detect and correct random errors and can perform well under low light intensity or unfocused image conditions. For instance, at a coding rate of 1/2, Reed-Solomon (RS) code can correct around 3-4% random bit errors, while LDPC code can correct around 9-12% random bit errors.
  • LDPC code is difficult to hack into (as compared to RS code) in that the LDPC code utilizes a complicated matrix H for code parity checks and the code is stored in a compiled C++ library, which provides relatively complex decoding algorithms.
  • the LDPC code can make use of soft information and thus facilitate the use of soft decision-making to decode the code rather than hard decision code used by QR, DataMatrix and other codes.
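The soft-versus-hard distinction above can be made concrete with a small sketch. This is not the patent's decoder; the function names and the log-likelihood-ratio (LLR) convention are illustrative assumptions, shown only because LLRs are the usual soft input to message-passing LDPC decoders.

```python
# Illustrative sketch: a hard decision discards the demodulator's
# confidence, while a soft value in [0, 1] preserves it for an
# iterative soft-decision decoder. Names here are hypothetical.
import math

def hard_decision(soft: float) -> int:
    """Threshold a soft value to a single bit, losing reliability info."""
    return 1 if soft >= 0.5 else 0

def soft_to_llr(soft: float, eps: float = 1e-9) -> float:
    """Convert P(bit = 1) in [0, 1] to a log-likelihood ratio,
    a common input format for message-passing LDPC decoders."""
    p = min(max(soft, eps), 1.0 - eps)
    return math.log((1.0 - p) / p)  # positive LLR favours bit 0

# 0.51 and 0.99 both harden to bit 1, but their LLR magnitudes differ,
# so a soft decoder can weight the confident observation more heavily.
print(hard_decision(0.51), round(soft_to_llr(0.51), 3), round(soft_to_llr(0.99), 3))
```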
  • coding component 110 can encode the data and corresponding information based on the intensity of pixels within an image in order to generate a symbol such as a waveform pattern, comprising frequency domain information, that represents the data and that can be hidden within the image.
  • the frequency domain information provides the data and corresponding information as an encoded wave and comprises information related to a frequency domain of a signal. The frequency domain demonstrates how much of each signal lies within a given frequency band over a range of frequencies.
  • the frequency domain information can comprise the frequency of a wave as expressed in speed units, a magnitude of a signal, a phase of a signal, or a set of frequency domain signals capable of being transformed into the set of data.
  • coding component 110 can encode the data in waveform-like patterns comprising frequency domain information.
  • the frequency domain information can also comprise information related to the phase shift to be applied to each sinusoid in order to recombine frequency components to recover the original time signal (which is absent from the frequency domain) during decoding of the encoded data.
  • system 100 can employ a modification component 120 that modifies a set of image data corresponding to an image resulting in a set of modified image data.
  • the set of image data can represent pixels of the image and characteristics of the pixels such as intensity.
  • the modification component 120 can modify the image by increasing or decreasing the intensity of the pixels.
  • the modification activities can be useful in generating waveforms to represent the data, such that the waveforms can blend into the image becoming virtually invisible to the naked eye of a user viewing the image.
  • modification component 120 can modify the intensity of a pixel within an intensity modification limit.
  • the intensity modification limit is established in order to increase performance of decoding tasks when the hidden data (e.g., hidden within the waveform pattern) is accessed and to protect the original image by limiting the degree by which it is distorted.
  • the intensity modification limit can be determined by applying a Gaussian average mask to the image I to generate a new image matrix I’.
  • the Gaussian average mask can take the form of a range of matrices corresponding to a range of variances.
  • the Gaussian average mask can be a 3×3, 5×5, or 7×7 matrix with variances ranging from 0.5 to 1.5.
  • the intensity modification limits can also be determined based on other mask technologies as well such as a median filter or Butterworth filter.
  • modification component 120 can modify the pixel intensity based on the determined intensity modification limits.
  • the image can be divided into blocks where each block is analyzed based on pixel intensity.
  • modification component 120 can modify a high intensity block by subtracting an intensity value from such block. Similarly, modification component 120 can modify a low intensity block by adding an intensity value to such block.
  • the maximum intensity value to be added or subtracted is referred to as the intensity modification limit. As described above, the intensity modification limit can be determined based on the Gaussian-averaged image I′ and the original image I.
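A minimal sketch of the limit computation above: blur the image I with a small Gaussian average mask to obtain I′, then bound the per-pixel modification. The 3×3 kernel values and the use of |I′ - I| as the limit are illustrative assumptions; the patent leaves the exact mapping from I and I′ to the limit open.

```python
# Hedged sketch: Gaussian-average an image and derive per-pixel
# intensity modification limits from the local difference |I' - I|.

def convolve3x3(img, kernel):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at image borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx] * kernel[dy + 1][dx + 1]
            out[y][x] = acc
    return out

# 3x3 Gaussian average mask, normalised to sum to 1 (assumed kernel).
G = [[1/16, 2/16, 1/16],
     [2/16, 4/16, 2/16],
     [1/16, 2/16, 1/16]]

I = [[100, 100, 200],
     [100, 100, 200],
     [100, 100, 200]]
I_blur = convolve3x3(I, G)
limits = [[abs(b - a) for a, b in zip(r1, r2)] for r1, r2 in zip(I, I_blur)]
print(limits)  # larger limits near the 100/200 edge, where changes hide well
```

The flat left column yields a limit of zero (any change there would be visible), while pixels adjacent to the intensity edge tolerate a larger modification, matching the block-wise add/subtract behaviour described above.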
  • system 100 can employ a generation component 130 that generates a first waveform comprising a first frequency and a second waveform comprising a second frequency, wherein the first waveform and the second waveform represent a first subset of data of the set of data and a second subset of data of the set of data, respectively.
  • generation component 130 can generate two different waveforms, each comprising a different and unique frequency spectrum. Each waveform can comprise a high-frequency property, such that the waveform is undetectable to the human eye.
  • generation component 130 can generate a first waveform and a second waveform both comprising a large enough difference in frequency spectrums to facilitate proper decoding. By generating a large enough difference in frequency spectrums, a decoder can detect such frequency spectrum differences to determine the various differences in code between each respective waveform pattern.
  • generation component 130 in connection with coding component 110 can encode the image with the waveform patterns on the vertical surface of the image.
  • Each waveform represents an encoded bit, 0 or 1.
  • Each waveform is different and is oriented in a different direction, pi/4 and -pi/4, respectively.
  • the waveforms are located in non-horizontal and non-vertical directions because the human eye is not sensitive to non-horizontal and non-vertical waveforms, however cameras can detect such waveforms.
  • the waveforms can be located in a variety of directions such as pi/6 or -5*pi/6.
  • the difference between the two directions of the waveforms can be pi/2.
  • the direction of the waveforms can promote efficacious demodulation of the waveforms in various devices.
  • in an aspect, the intensity modification is applied to the original image.
  • the waveforms can be generated on a modified image where the intensity modification (e.g., using modification component 120) occurs.
  • modification can be achieved by modifying the fewest points on the image to maintain the visual quality of the image, such that the modification occurs within limits (e.g., a tolerable modification range) in order to suppress the image from being distorted.
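The two oblique carriers described above can be sketched as sinusoidal gratings oriented at +pi/4 and -pi/4, a pi/2 difference in direction. The grating formula, block size, and frequency below are illustrative assumptions; the point is that the two patterns are nearly uncorrelated, which is what lets a demodulator separate them.

```python
# Hedged sketch: two oblique sinusoidal gratings at +pi/4 and -pi/4,
# standing in for the patent's directional waveform patterns.
import math

def oblique_waveform(size, freq, angle):
    """size x size grating: cos(2*pi*freq*(x*cos(a) + y*sin(a))/size)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[math.cos(2 * math.pi * freq * (x * c + y * s) / size)
             for x in range(size)] for y in range(size)]

w0 = oblique_waveform(8, 3, math.pi / 4)    # carries bit 0
w1 = oblique_waveform(8, 3, -math.pi / 4)   # carries bit 1

# The gratings' inner product is small relative to their energy,
# which is what makes the two directions separable at the decoder.
dot = sum(a * b for r0, r1 in zip(w0, w1) for a, b in zip(r0, r1))
energy = sum(a * a for row in w0 for a in row)
print(round(dot / energy, 3))
```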
  • system 100 can employ a storage component 140 that stores the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the set of data.
  • a hidden watermark is a marker that can store the waveform patterns and associated data within the marker.
  • the watermark can carry various forms of data, such as data corresponding to a video, image, map, link, web data, or audio file. All such data can be accessed via the hidden watermark with a device comprising a camera and a decoder.
  • system 100 can employ an embedding component 150 that embeds the hidden watermark into the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user.
  • the hidden watermark can be embedded (e.g., using embedding component 150) within an image or video comprising a set of images such that the watermark is invisible to the human eye.
  • the benefit of the invisible feature of the digital watermark is to facilitate the communication of data within an image or video in the absence of disturbing the aesthetic qualities of the image or video.
  • verification component 210 verifies that the set of image data is capable of demodulation.
  • modification component 120 modifies the intensity value of the image data
  • verification component 210 verifies or checks the result to determine whether “bad blocks” of image pixels still exist. A bad block is identified as one or more image pixels that cannot be reliably modulated because the difference in frequency between waveforms is too small or inverted.
  • verification component 210 can check whether a bad block continues to exist within the image. In the event a bad block still exists, modification component 120 can modify the intensity value of the pixels associated with the bad block to ensure that the respective block can be reliably demodulated. In the event increasing or decreasing the intensity modification value of the pixels fails to remedy the problems with the bad block, then the modification component 120 can modify more points within the bad block area to improve its performance. In an instance, modification component 120 can increase the waveform width, however such increase can lower the visual quality of the image. In an aspect, verification component 210 continues to verify whether or not bad blocks exist and modification component 120 can modify the properties of the pixels, based on the verification, associated with the bad blocks until there are no longer bad blocks.
  • the system 300 comprises: a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations, the executable components comprising: an image enhancement component 310 configured to at least one of eliminate a texture distortion of an image comprising a set of pixels, equalize a contrast value corresponding to the image, adjust a signal to noise ratio (SNR) associated with the image, adjust a focus associated with the image, or remove a blurred image segment of the image determined to be blurry according to a defined blur criterion; a conversion component 320 configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels; and a determination component 330 configured to determine a position of the watermark within the image based on a solution to a convex problem.
  • image enhancement component 310 performs various tasks to pre-process an image and improve the image quality prior to decoding the image.
  • the image pre-processing (e.g., using image enhancement component 310) is intended to diminish the Moiré effect on the display screen, which can negatively affect the decoding outcome of system 300.
  • the Moiré effect is a visual occurrence of a set of lines or dots superimposed on another set of lines or dots where each set of lines and dots differ in size, angle, or spacing.
  • the visual occurrence can be caused by an overlap of the grid structure of the display screen pixels and the sensor pixels of a camera used to capture the image.
  • the potential for the Moiré effect can be estimated by system 300 by analyzing the frequency response of the image.
  • image enhancement component 310 can estimate the frequency range of the Moiré effect by detecting the focus distance (e.g., the angle between the mobile device and the vertical direction of the image) using the mobile device sensor as well as camera angle viewpoint and camera resolution.
  • image enhancement component 310 can remove the Moiré effect while minimizing distortion of other image information using a Butterworth filter.
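The Butterworth filter mentioned above is attractive here because its passband is maximally flat and its roll-off is monotone, so it can attenuate the estimated Moiré frequency band while leaving most image content intact. The cutoff and order below are illustrative assumptions, and the 1D magnitude response stands in for the 2D frequency-domain filtering a real implementation would apply.

```python
# Hedged sketch of a Butterworth low-pass magnitude response.
import math

def butterworth_lowpass_gain(freq, cutoff, order):
    """Magnitude response |H(f)| of an order-n Butterworth low-pass filter."""
    return 1.0 / math.sqrt(1.0 + (freq / cutoff) ** (2 * order))

cutoff, order = 0.25, 4   # assumed normalised cutoff frequency and order
for f in (0.05, 0.25, 0.45):
    print(f"f={f}: gain={butterworth_lowpass_gain(f, cutoff, order):.3f}")
```

Low frequencies (image content) pass nearly unattenuated, the gain is 1/sqrt(2) exactly at the cutoff, and a high-frequency band where Moiré energy is estimated to lie is strongly suppressed.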
  • system 300 employs conversion component 320 configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels.
  • conversion component 320 can perform a histogram equalization technique to distribute the contrast intensities of image blocks and remove the effect of the background intensity shifts.
  • conversion component 320 can binarize a point of the image using nearby points based on the histogram equalization. For instance, conversion component 320 can consider the intensity of nearby points and determine a difference between the maximum intensity value and the minimum intensity value. Furthermore, conversion component 320 can assign a 0 or 1 to a point based on the determined maximum intensity value or minimum intensity value.
  • the binarization can also be based on other image parameters that can vary given a background intensity characteristic.
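A minimal sketch of the windowed binarization described above: each pixel is compared against the maximum and minimum of its local window, so the result is robust to background intensity shifts. The 3×3 window, the "closer to local max" rule, and the minimum-contrast guard for flat regions are illustrative assumptions layered on the patent's max/min comparison.

```python
# Hedged sketch of local windowed binarization (assumed parameters).

def binarize_local(img, radius=1, min_contrast=25):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[yy][xx]
                      for yy in range(max(0, y - radius), min(h, y + radius + 1))
                      for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            lo, hi = min(window), max(window)
            if hi - lo < min_contrast:           # flat region: treat as background
                out[y][x] = 0
            else:                                # closer to local max -> 1
                out[y][x] = 1 if img[y][x] - lo >= hi - img[y][x] else 0
    return out

# A bright stripe on a dark background with a global brightness gradient:
img = [[10, 10, 90, 10],
       [20, 20, 100, 20],
       [30, 30, 110, 30]]
print(binarize_local(img))  # -> [[0, 0, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]]
```

The stripe is recovered cleanly even though the background drifts from 10 to 30, which a single global threshold could misclassify.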
  • system 300 can employ a determination component 330 configured to determine a position of the watermark within the image based on a solution to a convex problem.
  • determination component 330 can detect the entire image frame from the binarized image using an optimization method.
  • the determination component 330 can facilitate determining (e.g., using matching component 410) the best matched perspective transformed function of the image based on a convex optimization method.
  • the convex optimization method can make use of an interior point to determine the best matched perspective transform function and thereby determine the position of the watermark within the original image.
  • system 400 can comprise the components of system 300 and further comprise a matching component 410 configured to match a pixel of the set of pixels to a transform matrix wherein the solution to the convex problem is based on a match between the pixel and the transform matrix.
  • matching component 410 can match a perspective transform function to the binarized image.
  • the matching can be a block matching to find an initial transformation using a normalized correlation. For instance, points (x, y) of a grid (e.g., 3x3) within an image can be matched to a point (x’, y’) , which maximizes the normalized correlation using a restricted window (e.g., 7x7, 11x11) .
  • matching point pairs of (x, y) and (x’, y’) can be used to calculate a translation between the reference image I (x, y) and the target image I’(x’, y’) .
  • matching component 410 can choose a translation that maximizes a normalized correlation based on a block matching of the image.
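The normalised-correlation block matching above can be sketched as follows. The block size, search strategy, and test pattern are illustrative assumptions; a real implementation would restrict the search to a small window (e.g., 7x7 or 11x11) around each grid point rather than scanning exhaustively.

```python
# Hedged sketch of block matching by normalised correlation: the offset
# maximising the correlation is taken as the translation (x,y)->(x',y').
import math

def ncc(a, b):
    """Normalised correlation of two equal-length intensity lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def block_at(img, y, x, size):
    return [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]

def best_match(ref_block, target, size):
    h, w = len(target), len(target[0])
    candidates = ((y, x) for y in range(h - size + 1) for x in range(w - size + 1))
    return max(candidates, key=lambda p: ncc(ref_block, block_at(target, *p, size)))

reference = [[0, 0, 0, 0],
             [0, 9, 5, 0],
             [0, 5, 9, 0],
             [0, 0, 0, 0]]
# Target: the same pattern shifted by (1, 1).
target = [[0, 0, 0, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 0, 9, 5, 0],
          [0, 0, 5, 9, 0],
          [0, 0, 0, 0, 0]]
ref = block_at(reference, 1, 1, 2)       # the 2x2 block at (1, 1)
print(best_match(ref, target, 2))        # -> (2, 2): a translation of (1, 1)
```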
  • system 500 can comprise the components of system 400 and further comprise modulation component 510 configured to modulate a first subset of values of the set of binary values with a first waveform and a second subset of values of the set of binary values with a second waveform, and wherein the first waveform and the second waveform are stored within a hidden watermark.
  • modulation component 510 can convert image data into a waveform suitable for transmission over a communication channel.
  • Modulation component 510 can convert image data into waveforms that are unique in frequency and other wave properties.
  • the waveforms can differ in either frequency or amplitude.
  • the waveform patterns can be designed to be invisible to the human eye where the waveform comprises a high frequency or oblique structure.
  • the image can have a low contrast and still work with the waveforms to provide an invisible waveform pattern within the image.
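The waveform modulation idea above can be sketched as follows: each binary value selects one of two high-frequency carrier patterns that differ in frequency direction, added at low amplitude to an image block. The 8x8 block size, the amplitude, and the specific carrier patterns are assumptions for illustration, not the patent's actual waveforms:

```python
import numpy as np

BLOCK = 8  # hypothetical block size

def make_waveforms(block=BLOCK):
    """Two high-frequency carriers that differ in frequency direction:
    an oblique pattern and a horizontal pattern."""
    y, x = np.mgrid[0:block, 0:block]
    w0 = np.cos(np.pi * (x + y))  # oblique: alternates along the diagonal
    w1 = np.cos(np.pi * x)        # horizontal: alternates along columns
    return w0, w1

def embed_bits(image, bits, amplitude=2.0):
    """Add one low-amplitude carrier per block, selected by the bit value."""
    w0, w1 = make_waveforms()
    out = image.astype(float).copy()
    blocks_per_row = image.shape[1] // BLOCK
    for i, bit in enumerate(bits):
        r, c = divmod(i, blocks_per_row)
        sl = (slice(r * BLOCK, (r + 1) * BLOCK),
              slice(c * BLOCK, (c + 1) * BLOCK))
        out[sl] += amplitude * (w1 if bit else w0)
    return out
```

Because both carriers oscillate at the highest representable spatial frequency, the added pattern stays subtle even at moderate amplitude, consistent with the invisibility property described above.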
  • system 600 can comprise the components of system 500 and further comprise demodulation component 610 configured to identify a set of frequency domain information associated with the first waveform and the second waveform using a matching filter.
  • demodulation component 610 can receive a waveform pattern from an image via a device camera and convert the waveform into a code to be accessed by a decoder.
  • demodulation component 610 in connection with determination component 330 can identify or detect the entire image frame from the binarized image using an optimization method and a synchronization code extracted from points (e.g., points detected by determination component 330) within the image.
  • a matched filter can be used to facilitate the identification or detection of the binarized image, where the matched filter can maximize the signal to noise ratio resulting in a correct interpretation of a binary message within the image.
  • demodulation component 610 can demodulate sample points within the image based on an estimation of the synchronization quality between code words of a symbol stream (e.g., waveform pattern) .
  • the demodulation can be based on a calculated frequency difference between detected waveform patterns within the image.
  • the position of sample points within the image can be adjusted and provide a new basis for a re-estimate of the synchronization quality. For instance, a region of the image can be identified for synchronization based on a detected sample point (e.g., using determination component 330) within the image and corresponding blocks of the image are determined to be in a particular position relative to the detected sample point as estimated using synchronization codes from the detected point.
  • the synchronization quality can be re-estimated based on another sample detected point.
  • the synchronization quality can be increased, decreased or oscillated based on the re-estimates.
  • the re-estimation process can be terminated at the time the synchronization quality is observed to decrease or oscillate.
  • points nearby one another tend to have similar or same synchronization quality results.
  • a message passing algorithm can be utilized to build a message passing network between nearby points, which acts as a neural network to facilitate implementation of a global synchronization matching process of image blocks.
  • the synchronization and demodulation activities can be iteratively performed until they converge on a target point, after which the captured image is demodulated (e.g., using demodulation component 610) based on an error correction code decoder.
  • the demodulation process performed by demodulation component 610 also comprises dividing the captured image into numerous blocks within a vicinity in the space domain based on the synchronization results described above. Furthermore, each block individually undergoes a Fourier transform analysis, a technique that decomposes each waveform signal into sinusoids.
  • system 700 can utilize a discrete Fourier transform (DFT) technique which can decompose digitized signals where the DFT uses numbers to represent input and output signals.
  • the DFT technique facilitates decomposition of each image block into its sine and cosine components in order to achieve a frequency domain corresponding to each waveform of a block.
  • the image can then be represented by corresponding frequency domains, each point of the image representing a particular frequency contained in a spatial domain image.
  • the DFT technique can be used to describe a frequency response of the waveforms, where the frequency response is a description of how a waveform changes the amplitude and phase as the waveform passes through a point of the image.
  • the waveforms are capable of being decoded using DFT techniques.
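The per-block DFT decomposition described above can be sketched as follows; a minimal illustration assuming a grayscale numpy image and a hypothetical 8x8 block size:

```python
import numpy as np

def block_dft_magnitudes(image, block=8):
    """Apply a 2-D DFT to each non-overlapping block and return the
    magnitude spectra, one frequency-domain array per block."""
    h, w = image.shape
    mags = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            spec = np.fft.fft2(image[r:r + block, c:c + block])
            mags.append(np.abs(spec))
    return mags
```

Each returned array is the frequency-domain representation of one block, so the position of its dominant magnitude indicates which carrier frequency (and frequency direction) is present in that block.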
  • system 700 can comprise the components of system 600 and further comprise an association component 710 configured to associate a first soft value with the first waveform and a second soft value with the second waveform based on a match of the first waveform and the second waveform to the set of template waveforms.
  • association component 710 can associate a first soft value with the first waveform and a second soft value with the second waveform based on the matching (e.g., using matching component 410) described above.
  • the first soft value and second soft value can take on any value in the range greater than or equal to 0 and less than or equal to 1.
  • association component 710, by associating soft information with the waveforms, allows system 700 to perform better in the presence of corrupted data than it would with hard information.
  • each waveform (e.g., a first waveform and a second waveform) can comprise a high DFT value at its frequency response point; however, each waveform can comprise the high DFT value at a different point position along the waveform because of each waveform’s different frequency direction.
  • a first waveform can possess a synchronized horizontal frequency and vertical frequency while the second waveform can possess unsynchronized horizontal and vertical frequencies.
  • each waveform comprises a unique distribution on the frequency domain, such that a distribution of frequency information can be determined using matched filters employed by matching component 410.
  • the matched filters are utilized to determine which distribution best suits the DFT result by determining a difference between matched factors (e.g., a positive difference or negative difference) associated with a frequency domain of each waveform.
  • the result of the difference between matched factors can be used by demodulation component 610 to determine a demodulation result via a demodulation process.
  • the demodulation result can contain soft information corresponding to the waveforms.
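The matched-filter comparison and the resulting soft value can be sketched as follows; an illustrative mapping of the matched-factor difference to a soft value in [0, 1] via a logistic function, which is an assumption rather than the patent's stated mapping:

```python
import numpy as np

def soft_demodulate(block, w0, w1):
    """Matched-filter a block against the two template waveforms and
    map the difference of matched factors to a soft value in [0, 1]:
    near 1 when the block matches w1, near 0 when it matches w0."""
    s0 = float((block * w0).sum())
    s1 = float((block * w1).sum())
    return 1.0 / (1.0 + np.exp(-(s1 - s0)))
```

A positive difference between the matched factors thus yields a soft value above 0.5 and a negative difference a value below 0.5, with the magnitude of the difference carrying the reliability information described above.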
  • system 800 can comprise the components of system 700 and further comprise a calculation component 810 configured to determine a set of statistics based on the set of frequency domain information, and wherein the set of statistics comprise a variance, a mean, a log ratio, a Gaussian approximation, and a standard deviation of the set of frequency domain information.
  • calculation component 810 can employ a channel estimator that uses the demodulation result described above to generate a histogram representing the tonal distribution of pixels (i.e., the number of pixels corresponding to each tonal value) within the image.
  • the histogram of the image can graphically illustrate the tonal distribution of pixels within the image.
  • calculation component 810 can determine whether the histogram represents a bi-Gaussian distribution of the two Gaussian distributions associated with the first waveform and the second waveform of an image block. In the event the histogram is not bi-Gaussian, then the results of the histogram are disregarded and the corresponding image frame is not decoded. Furthermore, the histogram of the next image frame is determined (e.g., using calculation component 810) to be bi-Gaussian-like or not bi-Gaussian-like to determine whether to decode such image frame. In an aspect, the absence of a bi-Gaussian-like histogram indicates a demodulating failure (e.g., using demodulation component 610) occurred.
  • calculation component 810 can estimate the variation and the mean of the information presented by such histogram.
  • the mean and variation of the histogram can be used to build an additive white Gaussian noise (AWGN) channel model comprising a channel that produces AWGN.
  • the channel uses a matched filter to correlate an incoming signal with a reference copy of a transmit signal such that the matched filter maximizes the signal-to-noise ratio for a known signal (e.g., a first waveform or second waveform) .
  • An estimation of the AWGN channel can be used to calculate (e.g., using calculation component 810) a Log Likelihood Ratio according to various properties of the AWGN channel.
  • the Log Likelihood Ratio can be used to compare the fit of various channel models to determine a fit for respective strings of encoded code for decoding.
  • the Log Likelihood Ratio information can be passed to an LDPC decoder (e.g., decoding component 910) , however, prior to decoding, calculation component 810 can quantize the Log Likelihood Ratio to simplify the decoding process. For instance, calculation component 810 can compute a p-value or comparative critical value to decide whether to reject one AWGN channel model in favor of an alternative AWGN channel model. Furthermore, calculation component 810 can use an iterative method to optimize the quantization of the Log Likelihood Ratios.
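The Log Likelihood Ratio computation and quantization described above can be sketched as follows, assuming an AWGN channel whose two conditional means and common variance come from the histogram estimate. The uniform quantizer (step and clipping range) is an illustrative simplification, not the patent's iterative optimization:

```python
import numpy as np

def awgn_llr(y, mean0, mean1, variance):
    """Log-likelihood ratio log p(y | bit=0) / p(y | bit=1) for an
    AWGN channel; positive values favor bit 0, negative favor bit 1."""
    return ((y - mean1) ** 2 - (y - mean0) ** 2) / (2.0 * variance)

def quantize_llr(llr, step=0.5, max_mag=4.0):
    """Uniformly quantize LLRs to a small set of levels to simplify
    the subsequent LDPC decoding."""
    clipped = np.clip(llr, -max_mag, max_mag)
    return np.round(clipped / step) * step
```

For example, an observation equal to the bit-0 mean gives a positive LLR and an observation equal to the bit-1 mean gives a negative LLR, with the variance controlling the confidence scale.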
  • system 900 can comprise the components of system 800 and further comprise a decoding component 910 configured to decode the hidden watermark based on an error correction code that uses a low density parity check code or a set of message passing rules wherein the low density parity check code is defined to facilitate cross checking a decoding of the hidden watermark for decoding errors and wherein the set of message passing rules define code constraints to inhibit decoding errors.
  • decoding component 910 can decode the LDPC code using a simplified min-sum (MS) decoding algorithm, which is a message passing decoding algorithm used for LDPC code.
  • MS decoding algorithm facilitates faster decoding (e.g., using decoding component 910) performance over other algorithms such as a sum-product algorithm.
  • the MS decoding algorithm can reduce block error rates rather than bit error rates, where the block error rates provide meaningful and useful information.
  • decoding component 910 can decode the LDPC code by employing the decoding algorithms in many instances over many iterations.
  • system 900 can establish a limit to the number of instances decoding component 910 employs the decoding algorithm.
  • decoding component 910 can validate whether a valid code word is decoded. In the event the code word is valid the decoding ceases and moves to the next encoded string and in the event the code word is invalid the decoding component can perform another iteration of the decoding algorithm.
  • decoding component 910 can employ a source decoder to receive a corrected bit stream of data, which is then presented to a user.
  • a source decoder or source encoder can utilize ASCII or other such compressing methods to compress the decoded or coded information.
  • system 1000 can comprise the components of system 900 and further comprise an encoding component 1010 configured to encode the first waveform and the second waveform with a first code word and a second code word respectively, and wherein the first waveform and the second waveform are encoded based on error correcting code rules defined to facilitate detection of a code error during transmission of the first code word or the second code word via a channel.
  • system 1000 can employ encoding component 1010 for encoding the waveforms, comprising hidden data, with code words using LDPC coding methods.
  • the LDPC codes comprise code words, which comprise message bits and parity bits.
  • a code word, also referred to as code word bits, can comprise a number of bits.
  • for instance, a code word can comprise three message bits and three parity-check bits.
  • a parity bit is used in error detecting code where the parity bit indicates whether the number of bits in a string of bits with the value of one is even or odd.
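The even-parity rule above can be shown with a one-line sketch (illustrative only):

```python
def parity_bit(bits):
    """Even-parity bit: chosen so that the extended string has an even
    number of ones; equivalently, 1 when the count of ones is odd."""
    return sum(bits) % 2
```

Appending the parity bit to the message guarantees the transmitted word always carries an even number of ones, so any single flipped bit is detectable.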
  • encoding component 1010 can impose code word constraints that define how to encode a message (e.g., using parity bits) .
  • code word constraints can be communicated in matrix form. Accordingly, the code word constraints can indicate whether an error has occurred during transmission of code words. For instance, a code word bit may have been flipped after transmission of the code words (e.g., using a camera device) .
  • a code word error can be detected in that every code word in the code must satisfy a constraint rule and any such code word determined to not satisfy such constraint rules can be deemed to be an error.
  • system 1100 can comprise the components of system 1000 and further comprise a weighting component 1110 configured to associate a weight to the first code word and the second code word respectively based on first channel state information and second channel state information corresponding to the first code word and the second code word respectively.
  • system 1100 can utilize useful information from previous frames that could not be decoded, combining it in a maximum-ratio scheme that incorporates a diverse array of information to pass to the next frame.
  • system 1100 can employ weighting component 1110 that associates a weight with a combination of Log Likelihood Ratios of two or more frames, such that the system 1100 can utilize previous decoded Log Likelihood Ratios to determine channel information of the current frame.
  • channel information can comprise channel properties such as how a signal propagates from a transmitter to a receiver in terms of scattering, fading and decay of power.
  • the weight of each frame can be determined (e.g., by weighting component 1110) based on corresponding channel estimation information.
  • the channel estimation weight is proportional to its channel signal to noise ratio (SNR) .
  • the weighting method employed by weighting component 1110 can be quite useful when the SNR of a channel is low relative to other channels.
  • system 1100 can make use of other cooperative diversity techniques to facilitate decoding based on a diversity of information.
  • system 1100 can employ a cooperative maximum-ratio combining (C-MRC) method and ⁇ –MRC method to achieve decoding based on a diversity of information.
  • the C-MRC method can comprise adding a diversity of signals from each channel together in order to achieve a single improved signal.
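The SNR-proportional weighting and combining described above can be sketched as follows; a minimal maximum-ratio-combining illustration over per-frame LLRs, with normalization chosen for clarity rather than taken from the patent:

```python
import numpy as np

def mrc_combine(llrs, snrs):
    """Maximum-ratio combining: weight each frame's LLR vector in
    proportion to its estimated channel SNR, then sum the weighted
    vectors into a single improved signal."""
    snrs = np.asarray(snrs, dtype=float)
    weights = snrs / snrs.sum()
    return sum(w * np.asarray(l, dtype=float) for w, l in zip(weights, llrs))
```

A frame received over a low-SNR channel thus contributes less to the combined LLRs, matching the observation above that the weighting is most useful when one channel's SNR is low relative to the others.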
  • a system 1200 comprising a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a receiver component 1202 that receives a data file and a video, wherein the data file comprises a set of input data; a converter component 1204 that converts the data file and the video to a data bitstream and a set of image frames, respectively; an encoding component 1206 that generates a set of data packets based on an encoding of the data bitstream, wherein the set of data packets represent the set of input data, and wherein a subset of data packets of the set of data packets represent a subset of input data of the set of input data; an embedding component 1208 that embeds as an imperceptible transmission object, a first subset of input data into a first image frame of the set of image frames and a second subset of input data into
  • system 1200 employs a receiver component 1202 that receives an input data file and an input video from a user.
  • the input video is converted, using converter component 1204 employed by system 1200, to a series of image frames.
  • a source encoding can be implemented (e.g., using encoding component 1206) on the input data file to get the bitstream of the entire data.
  • the encoding component 1206 can execute inter-frame network code encoding on the data bitstream and generate a series of data packets where each packet only carries partial information of the input data file.
  • intra-frame error correcting code encoding can be used to encode the data packets and distribute each data packet to a particular image frame of a series of image frames.
  • each data packet is embedded (e.g., using embedding component 1208) into a corresponding image frame as an imperceptible watermark using a waveform modulation technique.
  • various training symbols can be embedded (e.g., using embedding component 1208) in the image frames to facilitate a decoding of the embedded watermark and corresponding data.
  • an encoded video is generated (e.g., using integration component 1210) by integrating all the encoded image frames.
  • the encoded video generated by system 1200 provides hidden data that is virtually imperceptible to the human eye.
  • the encoded video generated by system 1200 has a high degree of visual quality and can be decoded by mobile devices at a high transmission rate.
  • system 1200 can be implemented as a display to camera form of communication where a camera can capture the video or images comprising embedded hidden data.
  • system 1200 can be implemented as a free-space optical data communication framework where the transmitter of the hidden data or the imperceptible transmission object can be any of a watermark, a printed code label, a light-emitting diode (LED) array, or an electronic display.
  • the receiver of the embedded data or the device that receives the embedded data can be any of a sensor, a mobile device, a camera, a tablet, or a closed circuit television.
  • the information coding described throughout can be spatial coding (e.g., coding over pixels in an image) as well as temporal coding (e.g., coding over frames of images) .
  • the channel coding employed by system 1200 can utilize any error control code including, but not limited to, Reed-Solomon codes, LDPC codes, rateless codes, and other such codes.
  • system 1200 can employ inter-frame network code encoding techniques.
  • a transmitter of system 1200 can emit data through the encoded video, and the video consists of many image frames, each containing only partial information of the data. If the transmitter broadcasts data and there is no feedback from the receiver, system 1200 cannot know which frames have been successfully decoded by the user. Thus system 1200 cannot rearrange the image frame sequence according to an Automatic Repeat-reQuest (ARQ) scheme.
  • the purpose of the inter-frame encoding is not to correct errors, but only to reconstruct the data from the partial information in those image frames. This is due to the intra-frame decoding, which only uses the successfully decoded frames which means the partial information passed to the inter-frame decoder is already error-free.
  • the optimal inter-frame encoding scheme is the one that will not let the user receive redundant frames.
  • various training symbols can be embedded (e.g., using embedding component 1208) in the image frames to facilitate later decoding processes of the embedded watermark and corresponding data.
  • cameras sometimes receive two display frames in one exposure time and output a linear combination of the display frames.
  • the combined frame can make the process of demodulation and later intra-frame decoding fail.
  • training symbols have been uniformly distributed over the encoded image.
  • the training symbol must be different from the one at the same position in the adjacent frame. This means that training symbols at the same position of two adjacent frames will carry two different waveforms. Thus, it can be determined whether two adjacent frames are combined or not.
  • the training symbols can assist later time-domain interfered frame decoding processes by allowing for the extraction of the weights of two frames in combination.
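The combined-frame detection and weight extraction described above can be sketched as follows. This is an illustrative correlation test, assuming the two alternating training waveforms are orthogonal; the threshold and the function itself are assumptions, not the patent's stated method:

```python
import numpy as np

def detect_combined_frame(region, wa, wb):
    """Correlate a captured training region against the two training
    waveforms used at that position in adjacent frames. If both
    correlations are significant, the exposure likely mixed two
    display frames; the correlation ratio estimates frame A's weight."""
    ca = abs(float((region * wa).sum()))
    cb = abs(float((region * wb).sum()))
    total = ca + cb
    alpha = ca / total if total > 0 else 0.5  # estimated weight of frame A
    combined = min(ca, cb) > 0.2 * total      # hypothetical threshold
    return combined, alpha
```

A cleanly captured frame correlates with only one training waveform, while a linear combination of two display frames correlates with both, which is what allows the mixing weights to be extracted for the time-domain interfered frame decoding described later.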
  • a system 1300 comprising the components of system 1200 and further comprising a distribution component 1310 that distributes a first LDPC encoded data packet and a second LDPC encoded data packet to the first image frame and the second image frame, respectively.
  • the LDPC encoded data packets can be distributed to each image frame.
  • each data packet is embedded (e.g., using embedding component 1208) into a corresponding image frame as an imperceptible watermark using a waveform modulation technique.
  • a system 1400 comprising a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a receiving component 1402 that receives a set of image frames of a video from a device; an implementation component 1404 that corrects a perspective distortion of the set of image frames, corrects a noise distortion of the set of image frames, outputs a set of corrected image frames, and retrieves module information from the set of image frames based on a spatial domain synchronization and a demodulation of the set of image frames; a decoding component 1406 that retrieves a set of embedded data packets from the set of corrected image frames based on an intra-frame low-density parity check (LDPC) decoding of the set of corrected image frames; and a reconstruction component 1408 that reconstructs a data file from the set of embedded data packets based on a set of inter-frame network code
  • system 1400 employs a receiving component 1402 that receives a camera-captured video comprising a set of image frames from a device such as a mobile phone.
  • System 1400 can employ a decoder to extract the image frames from the video.
  • the encoded video can be seriously distorted during the camera capturing process, thus a spatial-domain synchronization and demodulation technique is implemented (e.g., using implementation component 1404) on each frame to correct perspective distortion, noise distortion and retrieve the module information in a log likelihood ratio (LLR) format.
  • the camera capturing speed may not be synchronized with the display refreshing speed, thus two adjacent frames, which carry different data packets on the display can be received in a single exposure time of the camera.
  • a time-domain interfered frame decoding method (e.g., using decoding component 1406) can be used to solve the problem.
  • normally received frames can be passed to an intra-frame LDPC decoder to retrieve the embedded data packets.
  • system 1400 employs reconstruction component 1408 that reconstructs the original data file by implementing the inter-frame network code decoding algorithm.
  • the camera capturing speed can sometimes occur faster than the display refreshing speed, in which case, the decoder can receive several similar samples which are originally the same frame in the encoded video.
  • This is an oversampling scenario, in which the decoder can take advantage of the multiple samples using an extended Tanner graph-based decoding algorithm.
  • conventional decoding techniques decode every image sample received and dispose of those samples that fail to be decoded. Here, even if there are two image samples that both failed to be decoded individually, they can still be decoded as an integral whole.
  • a network can be built by integrating the previously invalid coding result with a newly received undecoded code word.
  • a modified BP algorithm can be used to decode the joint code word.
  • a situation can occur where many users may decode the same video simultaneously. For example, in an instance, many users at the same time attempt to decode a video presented on a large LCD display screen at a shopping mall. In such a scenario, users decoding the same video can exchange their information to speed up each other’s decoding by building a local area network (LAN) .
  • the LAN can occur via Bluetooth or Wi-Fi and such exchange process, can shorten the average waiting time to decode a video.
  • Referring to FIGs. 15-17, illustrated are methods or flow diagrams in accordance with certain aspects of this disclosure. While, for purposes of simplicity of explanation, the disclosed methods are shown and described as a series of acts, the disclosed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from that shown and described herein. For example, those skilled in the art will understand and appreciate that a method can alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a method in accordance with the disclosed subject matter.
  • a message comprising a set of data is encoded with an error correction code (e.g., using encoding component 1010) that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data.
  • system 1500 determines (e.g., using determination component 330) a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
  • the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or another modification of the second image intensity value.
  • system 1500 stores (e.g., using storage component 140) the first waveform and the second waveform within a hidden watermark.
  • system 1500 embeds the hidden watermark within the image.
  • a message comprising a set of data is encoded with an error correction code (e.g., using encoding component 1010) that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data.
  • system 1600 determines (e.g., using determination component 330) a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
  • the first waveform in the image and the second waveform in the image are identified based on a first magnitude of the first waveform and a second magnitude of the second waveform.
  • the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or another modification of the second image intensity value.
  • the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user and capable of maintaining the first set of waveform characteristics and the second set of waveform characteristics upon occurrence of an image distortion event to the image, modification of the first modifiable image intensity value, or modification of the second image intensity value.
  • the image and a corresponding image signal associated with the hidden watermark embedded within the image are captured by a lens of the system.
  • the corresponding image signal associated with the hidden watermark is decoded, wherein the decoding results in a determination of the first subset of data and the second subset of data.
  • the first subset of data and the second subset of data corresponding to the first waveform and the second waveform respectively are accessed.
  • a suitable environment 1800 for implementing various aspects of the claimed subject matter includes a computer 1802.
  • the computer 1802 includes a processing unit 1804, a system memory 1806, a codec 1805, and a system bus 1808.
  • the system bus 1808 couples system components including, but not limited to, the system memory 1806 to the processing unit 1804.
  • the processing unit 1804 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1804.
  • the system bus 1808 can be any of several types of bus structure (s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA) , Micro-Channel Architecture (MSA) , Extended ISA (EISA) , Intelligent Drive Electronics (IDE) , VESA Local Bus (VLB) , Peripheral Component Interconnect (PCI) , Card Bus, Universal Serial Bus (USB) , Advanced Graphics Port (AGP) , Personal Computer Memory Card International Association bus (PCMCIA) , Firewire (IEEE 1394) , and Small Computer Systems Interface (SCSI) .
  • the system memory 1806 includes volatile memory 1810 and non-volatile memory 1812.
  • the basic input/output system (BIOS) containing the basic routines to transfer information between elements within the computer 1802, such as during start-up, is stored in non-volatile memory 1812.
  • codec 1805 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 1805 is depicted as a separate component, codec 1805 may be contained within non-volatile memory 1812.
  • non-volatile memory 1812 can include read only memory (ROM) , programmable ROM (PROM) , electrically programmable ROM (EPROM) , electrically erasable programmable ROM (EEPROM) , or flash memory.
  • Volatile memory 1810 includes random access memory (RAM) , which acts as external cache memory. According to present aspects, the volatile memory may store the write operation retry logic (not shown in FIG. 18) and the like.
  • RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM).
  • Computer 1802 may also include removable/non-removable, volatile/non-volatile computer storage medium.
  • FIG. 18 illustrates, for example, disk storage 1814.
  • Disk storage 1814 includes, but is not limited to, devices like a magnetic disk drive, solid state disk (SSD), floppy disk drive, tape drive, Jaz drive, Zip drive, LS-70 drive, flash memory card, or memory stick.
  • disk storage 1814 can include storage medium separately or in combination with other storage medium including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM) , CD recordable drive (CD-R Drive) , CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM) .
  • a removable or non-removable interface is typically used, such as interface 1816.
  • FIG. 18 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 1800.
  • Such software includes an operating system 1818.
  • Operating system 1818 which can be stored on disk storage 1814, acts to control and allocate resources of the computer system 1802.
  • Applications 1820 take advantage of the management of resources by the operating system through program modules 1824, and program data 1826, such as the boot/shutdown transaction table and the like, stored either in system memory 1806 or on disk storage 1814. It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems.
  • Input devices 1828 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like.
  • These and other input devices connect to the processing unit 1804 through the system bus 1808 via interface port (s) 1830.
  • Interface port (s) 1830 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB) .
  • Output device (s) 1836 use some of the same type of ports as input device (s) 1828.
  • a USB port may be used to provide input to computer 1802, and to output information from computer 1802 to an output device 1836.
  • Output adapter 1834 is provided to illustrate that there are some output devices 1836 like monitors, speakers, and printers, among other output devices 1836, which require special adapters.
  • the output adapters 1834 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1836 and the system bus 1808. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer (s) 1838.
  • Computer 1802 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer (s) 1838.
  • the remote computer (s) 1838 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and typically includes many of the elements described relative to computer 1802. For purposes of brevity, only a memory storage device 1840 is illustrated with remote computer (s) 1838.
  • Remote computer (s) 1838 is logically connected to computer 1802 through a network interface 1842 and then connected via communication connection (s) 1844.
  • Network interface 1842 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN) and cellular networks.
  • LAN technologies include Fiber Distributed Data Interface (FDDI) , Copper Distributed Data Interface (CDDI) , Ethernet, Token Ring and the like.
  • WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL) .
  • Communication connection (s) 1844 refers to the hardware/software employed to connect the network interface 1842 to the bus 1808. While communication connection 1844 is shown for illustrative clarity inside computer 1802, it can also be external to computer 1802.
  • the hardware/software necessary for connection to the network interface 1842 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers.
  • the system 1900 includes one or more client (s) 1902 (e.g., laptops, smart phones, PDAs, media players, computers, portable electronic devices, tablets, and the like) .
  • the client (s) 1902 can be hardware and/or software (e.g., threads, processes, computing devices) .
  • the system 1900 also includes one or more server (s) 1904.
  • the server (s) 1904 can also be hardware or hardware in combination with software (e.g., threads, processes, computing devices) .
  • the servers 1904 can house threads to perform transformations by employing aspects of this disclosure, for example.
  • One possible communication between a client 1902 and a server 1904 can be in the form of a data packet transmitted between two or more computer processes wherein the data packet may include video data.
  • the data packet can include a metadata, such as associated contextual information for example.
  • the system 1900 includes a communication framework 1906 (e.g., a global communication network such as the Internet, or mobile network (s) ) that can be employed to facilitate communications between the client (s) 1902 and the server (s) 1904.
  • the client (s) 1902 include or are operatively connected to one or more client data store (s) 1908 that can be employed to store information local to the client (s) 1902 (e.g., associated contextual information) .
  • the server (s) 1904 include or are operatively connected to one or more server data store (s) 1910 that can be employed to store information local to the servers 1904.
  • a client 1902 can transfer an encoded file, in accordance with the disclosed subject matter, to server 1904.
  • Server 1904 can store the file, decode the file, or transmit the file to another client 1902. It is to be appreciated that a client 1902 can also transfer an uncompressed file to a server 1904 and server 1904 can compress the file in accordance with the disclosed subject matter.
  • server 1904 can encode video information and transmit the information via communication framework 1906 to one or more clients 1902.
  • the illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network.
  • program modules can be located in both local and remote memory storage devices.
  • various components described in this description can include electrical circuit (s) that can include components and circuitry elements of suitable value in order to implement the various embodiments.
  • many of the various components can be implemented on one or more integrated circuit (IC) chips.
  • a set of components can be implemented in a single IC chip.
  • one or more of respective components are fabricated or implemented on separate IC chips.
  • the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter.
  • the various embodiments include a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
  • a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor) , a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a controller and the controller can be a component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • a “device” can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform specific function; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof.
  • the words “example” or “exemplary” are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion.
  • the term “or” is intended to mean an inclusive “or” rather than an exclusive “or” . That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations.
  • Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media.
  • Computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data.
  • Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information.
  • Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
  • communications media typically embody computer-readable instructions, data structures, program modules or other structured or unstructured data in a data signal that can be transitory such as a modulated data signal, e.g., a carrier wave or other transport mechanism, and includes any information delivery or transport media.
  • modulated data signal or signals refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals.
  • communication media include wired media, such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Computer Security & Cryptography (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)

Abstract

Disclosed herein are systems and methods for embedding, encoding, decoding, communicating and transmitting data embedded within an image or video. In an aspect, the data embedded image or video cannot be differentiated from the original image or video by the human eye. Also, unlike watermarking, the systems and methods disclosed herein allow for the recovery of hidden data within an image or video from a camera captured image. Furthermore, in an aspect, the systems and methods disclosed can utilize waveform modulation techniques and pattern recognition-based demodulation techniques to overcome challenges such as perspective distortion associated with a camera captured image.

Description

UNOBTRUSIVE DATA EMBEDDING IN INFORMATION DISPLAYS AND EXTRACTING UNOBTRUSIVE DATA FROM CAMERA CAPTURED IMAGES OR VIDEOS TECHNICAL FIELD
This disclosure relates to hiding data within an image or video, coding hidden data, and decoding hidden data.
BACKGROUND
As a result of the growing popularity and widespread use of smartphones and other mobile devices by consumers, there have arisen numerous technological advancements relating to such devices. One such technological advancement is the capability of a smartphone to be utilized as a barcode scanner. For instance, the camera of a smartphone can be used to capture an image of a bar code embedded within an advertisement and decode the barcode to provide information to a user. Currently, the practice of embedding a message, image or file within an image is useful for various purposes such as marketing, advertising or providing information to a consumer. Existing technologies that facilitate embedding a message include Quick Response (QR) codes, digital watermarking, and steganography.
While each of the listed technologies facilitates the embedding of a message within an image, they each contain various limitations. For instance, QR codes and other 2-dimensional bar codes do not embed within an image in an aesthetically appealing manner. Instead, QR codes are presented on an image as an obtrusive cluster of black and white dots arranged in a square pattern. As such, the QR code, when displayed in the middle of an advertisement, obfuscates and often distorts the design and symmetry of the entire image. Furthermore, existing steganographic methods of embedding messages in images preclude the retrieval of embedded data using a smartphone or camera device. Also, the current technologies focus on rigid image processing techniques rather than dynamic coding and data communication processes. Given the issues related to embedding data within images, conventional embedded message technologies remain unsatisfactory.
SUMMARY
The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. This summary is not an extensive overview of the disclosure. It is intended to neither identify key or critical elements of the disclosure nor delineate any scope of particular embodiments of the disclosure, or any scope of the claims. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
In accordance with one or more embodiments and corresponding disclosure, various non-limiting aspects are described in connection with systems and methods for embedding hidden data within an image or video such that the hidden data is invisible to a user. In an embodiment, a system is provided comprising a memory that stores executable components; and a processor, coupled to the memory, that executes the executable components to perform operations of the system, the executable components comprising: an image enhancement component, a conversion component, and a determination component.
In an aspect, an image enhancement component of the system, is configured to at least one of eliminate a texture distortion of an image comprising a set of pixels, equalize a contrast value corresponding to the image, adjust a signal to noise ratio associated with the image, adjust a focus associated with the image, or remove a blurred image segment of the image determined to be blurry according to a defined blur criterion. Furthermore, a conversion component is configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels. In another aspect, a determination component is configured to determine a position of the watermark within the image based on a solution to a convex problem.
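The conversion component's binarization step can be pictured with a short sketch. This is an illustrative reconstruction rather than part of the disclosure: it assumes the "successive comparison of defined windowed subsets of pixels" takes the form of a per-window local mean threshold, and the function name, window size, and threshold rule are all hypothetical.

```python
import numpy as np

def binarize_windowed(gray, win=16):
    """Assign each pixel a binary value by comparing it to the mean
    intensity of its local window (one plausible reading of the
    'successive comparison of windowed subsets' rule; the exact rule
    is not fixed by the text)."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=np.uint8)
    for y in range(0, h, win):
        for x in range(0, w, win):
            block = gray[y:y + win, x:x + win]
            thresh = block.mean()          # local threshold for this window
            out[y:y + win, x:x + win] = (block > thresh).astype(np.uint8)
    return out
```

A local (rather than global) threshold of this kind is what lets a binarizer tolerate the uneven lighting typical of camera-captured displays.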
In yet another aspect, the system can employ a matching component configured to match a pixel of the set of pixels to a transform matrix wherein the solution to the convex problem is based on a match between the pixel and the transform matrix. Also, the system can employ a modulation component configured to modulate a first subset of values of the set of binary values with a first waveform and a second subset of values of the subset of binary values with a second waveform, wherein the first waveform  and the second waveform are stored within a hidden watermark. In another aspect, the system can employ a demodulation component configured to identify a set of frequency domain information associated with the first waveform and the second waveform using a matching filter.
In another embodiment, a method is provided comprising encoding, by a system comprising a processor, a message comprising a set of data with an error correction code that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data. In an aspect, the method can comprise determining, by the system, a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, and wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
Furthermore, the method can comprise modulating, by the system, the first subset of data with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or another modification of the second image intensity value. In another aspect, the method can comprise storing, by the system, the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the data. Also, the method can comprise embedding, by the system, the hidden watermark within the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user.
The following description and the annexed drawings set forth certain illustrative aspects of the disclosure. These aspects are indicative, however, of but a few of the various ways in which the principles of the disclosure may be employed. Other aspects of the disclosure will become apparent from the following detailed description of the disclosure when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example non-limiting system for embedding and hiding data within an image or video.
FIG. 2 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 3 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 4 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 5 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 6 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 7 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 8 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 9 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 10 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 11 illustrates another example non-limiting system for embedding and hiding data within an image or video.
FIG. 12 illustrates an example non-limiting system for embedding and hiding data within an image or video.
FIG. 13 illustrates an example non-limiting system for embedding and hiding data within an image or video.
FIG. 14 illustrates an example non-limiting system for decoding embedded and hidden data within an image or video.
FIG. 15 illustrates an example non-limiting method for embedding hidden data within an image or video.
FIG. 16 illustrates another example non-limiting method for embedding hidden data within an image or video.
FIG. 17 illustrates another example non-limiting method for embedding hidden data within an image or video.
FIG. 18 illustrates a block diagram representing an exemplary non-limiting computing system or operating environment in which the various embodiments can be implemented.
FIG. 19 is a block diagram representing an exemplary non-limiting networked environment in which the various embodiments may be implemented.
DETAILED DESCRIPTION
OVERVIEW
The various embodiments are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It may be evident, however, that the various embodiments can be practiced without these specific details. In other instances, well-known structures and components are shown in block diagram form in order to facilitate describing the various embodiments.
By way of introduction, this disclosure relates to systems and methods for hiding data in images and videos. In particular, the systems and methods disclosed herein describe hiding data within an image or video in an unobtrusive manner such that the data is hidden to the human eye. Currently, embedding data within an image or video requires the insertion of an obtrusive data code such as a QR code, which distracts the user from viewing the primary message conveyed by the image or video. The disclosed systems and methods allow a user to obtain information corresponding to data embedded within an image simply by capturing a picture of the image. The user will not visually see the hidden data within the image or video; however, the user can rely on accessing the invisible, hidden data and associated information within the image. The information to be accessed can comprise a host of information such as a map, hyperlink, image, video, graphic, or interactive content.
The ability to display an image or video and simultaneously communicate data while not disturbing the image or video opens up several new possibilities for the mobile marketing and commerce fields. The systems and methods disclosed herein facilitate the embedding of unobtrusive data in images and videos while also presenting systems and methods for decoding the unobtrusive data within the images and video via a camera (e.g., mobile device camera) . Furthermore, in an aspect the systems and methods of data communication provided herein allow for a higher capacity of data to be embedded within an image or video than traditional embedding methods. Also, unlike other technologies, the disclosed technology does not require the availability of a wireless broadband connection to communicate the data.
The data embedding systems and methods disclosed herein describe the capability of hiding data in an image or video such that a camera device can extract the data. The data is stored in an image unobtrusively such that the hidden data can facilitate potential applications such as mobile advertisement and mobile payments. In an aspect, the systems and methods utilize data communication and coding techniques to achieve the data hiding. For instance, the systems and methods can employ unobtrusive 2-dimensional (2D) waveform modulation communication techniques to achieve data embedding within an image.
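As a rough picture of 2D waveform modulation, consider one bit per image block, with the bit value selecting the spatial frequency of a low-amplitude sinusoidal carrier added to the block. This sketch is hypothetical and much simplified relative to the disclosed embedding component: the function name, carrier frequencies, and amplitude are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def embed_bit(block, bit, amplitude=2.0):
    """Modulate one grayscale image block with a low-amplitude 2-D
    sinusoid.  A '0' bit uses one spatial frequency and a '1' bit
    another; the specific frequencies and amplitude are illustrative."""
    h, w = block.shape
    fy, fx = (2, 2) if bit == 0 else (4, 4)   # carrier cycles per block
    yy, xx = np.mgrid[0:h, 0:w]
    carrier = np.cos(2 * np.pi * (fy * yy / h + fx * xx / w))
    return np.clip(block + amplitude * carrier, 0, 255)
```

Because the carrier amplitude is a small fraction of the 0-255 intensity range, the modulation perturbs each pixel only slightly, which is what keeps the embedded pattern visually unobtrusive.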
Furthermore, the systems and methods can utilize 2D Fast Fourier Transform (FFT) techniques to allow for efficient demodulation of data even when an image has been shifted, rotated, or geometrically distorted. In another aspect, the disclosed systems and methods (e.g., channel encoding, image preprocessing, corner detection, providing distributed training symbols and iterative synchronization, demodulation, estimation, decoding, decoding with multi-frame combining, etc.) can be implemented in a mobile device. The systems and methods also describe the process of coding and decoding data to allow for the recovery of data from randomly located image frames. Thus, in general, disclosed herein are systems and methods to provide simultaneous display and communication of data in an unobtrusive manner, which provides commercial benefits to many industry sectors.
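One reason FFT-based demodulation tolerates image shifting is that the magnitude of the 2D FFT is unchanged by circular translation of the block (a shift alters only the phase). The following sketch, which is illustrative and not from the disclosure, decides between two hypothetical candidate carrier frequencies by comparing FFT magnitudes at their bins.

```python
import numpy as np

def detect_bit(block):
    """Decide which of two candidate carrier frequencies dominates a
    block by comparing 2-D FFT magnitudes at bins (2, 2) and (4, 4)
    (hypothetical bins matching an assumed embedder).  The DC term is
    removed first; the FFT magnitude is invariant to circular shifts,
    so a translated block yields the same decision."""
    spec = np.abs(np.fft.fft2(block - block.mean()))
    return 0 if spec[2, 2] > spec[4, 4] else 1
```

Rotation and perspective distortion are harder than pure shifts; the disclosure addresses them with the corner detection, training symbols, and iterative synchronization steps listed above, which this sketch omits.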
Disclosed herein are systems and methods for hiding data within videos and images in an unobtrusive manner, invisible to users. Referring initially to FIG. 1, a data hiding system 100 is illustrated. The system 100 comprises a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a coding component 110 that encodes a set of data based on an error correction code; a generation component 130 that generates a first waveform comprising a first frequency and a second waveform comprising a second frequency, wherein the first waveform and second waveform represent a first subset of data of the set of data and a second subset of data of the set of data respectively; a storage component 140 that stores the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the set of data; and an embedding component 150 that embeds the hidden watermark into the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user.
In an aspect, coding component 110 can encode a set of data based on an error correction code. The encoding component 110 can encode data input into source code via ASCII or other such source code encoding methods. In an aspect, the source code can be configured to be used as a bit stream comprising a series of bits. The encoding (e.g. using coding component 110) can comprise encoding the data with an error correction code, by making use of an error correction encoder such as a low-density parity-check (LDPC) code to generate a LDPC code word. A code word can comprise a series of bits that convey a message and numerous code words can be presented in matrix form.
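The first step the paragraph describes, turning message text into a bit stream via its ASCII byte values, can be sketched as follows; the function name is hypothetical and the MSB-first bit order is an assumption for illustration.

```python
def to_bitstream(message):
    """Turn a text message into a list of bits via its ASCII byte
    values (MSB first), ready to be fed to a channel encoder."""
    bits = []
    for byte in message.encode("ascii"):
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    return bits
```

A channel encoder such as the LDPC encoder described below would then map this information bit stream into a longer codeword containing redundancy.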
In an aspect, the LDPC code can be an irregular LDPC code optimized to a particular network node comprising irregular degrees and a wide range of coding rates. An irregular LDPC code can correspond to a set of nodes according to a customized distribution. Each node type (e.g., variable node, constraint node, or check node) performs a decoding operation via a message passing algorithm operated between the  nodes such that the LDPC code is capable of being iteratively decoded. As a graphical representation the irregular LDPC code can be illustrated with half the variable nodes having degree W and the other half having degree X, while half the constraint nodes have degree Y and the other half have degree Z, where W, X, Y, and Z are integers.
In an aspect, the irregular degree is related to the coding rate, where the coding rate can be controlled at different error correction levels (which can also represent users). The coding rate describes the proportion of a data-stream that is useful rather than redundant. The coding rate can be described as k/n where for every k bits of useful information, coding component 110 generates n bits of total data, such that n-k are the redundant bits. In an aspect, three error correction levels (e.g., represented as a coding rate) can exist, including a low, medium, and high correction level. In an instance, the low error correction level can provide a 3/4 coding rate, the medium error correction level a 1/2 coding rate, and the high error correction level a 1/4 coding rate. In an aspect, the error correction code (e.g., employed by coding component 110) can detect and also correct bit errors in the code; as such, the low error correction level can correct around 3-6% bit errors, the medium error correction level can correct around 9-12% bit errors, and the high error correction level can correct around 15-20% bit errors.
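The k/n arithmetic above is simple but worth making concrete; this small helper (hypothetical, for illustration only) computes the total and redundant bit counts for the three stated levels.

```python
def total_bits(k, rate):
    """For k information bits at coding rate k/n, return the total
    transmitted bits n and the n - k redundant bits."""
    n = int(k / rate)
    return n, n - k

# For example, 96 information bits at the three stated levels:
#   low    3/4 -> n = 128 ( 32 redundant bits)
#   medium 1/2 -> n = 192 ( 96 redundant bits)
#   high   1/4 -> n = 384 (288 redundant bits)
```

The trade-off is visible directly: the high-correction level quadruples the transmitted payload for the same message, buying its larger correctable error fraction with redundancy.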
There are several benefits to using LDPC code, including the ability to decode LDPC codes in parallel, which can accomplish decoding at high speeds compared to other codes. Another benefit of using LDPC code is that decoders (e.g., decoders of a smartphone camera) of varying simplicities or complexities can be utilized to decode the LDPC code. Also, as compared to Reed Solomon (RS) code, LDPC code can detect and correct random errors and can perform well under low light intensity or unfocused image conditions. For instance, at a coding rate of 1/2, RS code can correct around 3-4% random bit errors, while LDPC code can correct around 9-12% random bit errors. In another aspect, LDPC code is difficult to hack into (as compared to RS code) in that the LDPC code utilizes a complicated matrix H for code parity checks and the code is stored in a compiled C++ library, which provides relatively complex decoding algorithms. In yet another aspect, the LDPC code can make use of soft information and thus facilitate the use of soft decision-making to decode the code rather than the hard decision code used by QR, DataMatrix and other codes.
In an aspect, encoding component 110 can encode the data and corresponding information based on the intensity of pixels within an image in order to generate a symbol, such as a waveform pattern comprising frequency domain information, that represents the data and that can be hidden within the image. In an aspect, the frequency domain information provides the data and corresponding information as an encoded wave and comprises information related to a frequency domain of a signal. The frequency domain demonstrates how much of a signal lies within each given frequency band over a range of frequencies. The frequency domain information can comprise the frequency of a wave as expressed in speed units, a magnitude of a signal, a phase of a signal, or a set of frequency domain signals capable of being transformed into the set of data. In an aspect, encoding component 110 can encode the data in waveform-like patterns comprising frequency domain information. The frequency domain information can also comprise information related to the phase shift to be applied to each sinusoid in order to recombine the frequency components and recover the original time signal (which is absent from the frequency domain) during decoding of the encoded data.
In another aspect, system 100 can employ a modification component 120 that modifies a set of image data corresponding to an image resulting in a set of modified image data. The set of image data can represent pixels of the image and characteristics of the pixels such as intensity. In an aspect the modification component 120 can modify the image by increasing or decreasing the intensity of the pixels. The modification activities can be useful in generating waveforms to represent the data, such that the waveforms can blend into the image becoming virtually invisible to the naked eye of a user viewing the image. In an aspect, modification component 120 can modify the intensity of a pixel within an intensity modification limit. The intensity modification limit is established in order to increase performance of decoding tasks when the hidden data (e.g., hidden within the waveform pattern) is accessed and to protect the original image by limiting the degree by which it is distorted.
In an aspect, the intensity modification limit can be determined by applying a Gaussian average mask to the image I to generate a new image matrix I’. The Gaussian average mask can take the form of a range of matrices corresponding to a range of variances. For instance, the Gaussian average mask can be a 3*3, 5*5, or 7*7 matrix with variances ranging from 0.5 to 1.5. The intensity modification limits can also be determined based on other mask techniques, such as a median filter or Butterworth filter. In an aspect, modification component 120 can modify the pixel intensity based on the determined intensity modification limits. In an aspect, the image can be divided into blocks where each block is analyzed based on pixel intensity. In an aspect, modification component 120 can modify a high intensity block by subtracting an intensity value from such block. Similarly, modification component 120 can modify a low intensity block by adding an intensity value to such block. The maximum intensity value to be added or subtracted is referred to as the intensity modification limit. As described above, the intensity modification limit can be determined based on the Gaussian averaged image I’ and the original image I.
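A minimal sketch of the Gaussian average mask, assuming the per-pixel modification limit is taken as the absolute difference between the smoothed image I’ and the original I (one plausible reading of the text; function names and default parameters are illustrative):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized size*size Gaussian averaging mask."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def intensity_modification_limit(I, size=3, sigma=1.0):
    """Smooth I with the Gaussian mask to get I', then take |I' - I| as the
    per-pixel modification limit (an assumption; the patent only says the limit
    is determined from the Gaussian averaged image and the original)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    Ip = np.pad(I.astype(float), pad, mode="edge")
    Iprime = np.zeros_like(I, dtype=float)
    for dy in range(size):
        for dx in range(size):
            # Accumulate the weighted, shifted copies of the padded image
            Iprime += k[dy, dx] * Ip[dy:dy + I.shape[0], dx:dx + I.shape[1]]
    return np.abs(Iprime - I)
```

A flat image is unchanged by averaging, so its limit is zero everywhere; detail-rich regions tolerate larger modifications.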
In another aspect, system 100 can employ a generation component 130 that generates a first waveform comprising a first frequency and a second waveform comprising a second frequency, wherein the first waveform and the second waveform represent a first subset of data of the set of data and a second subset of data of the set of data, respectively. In an aspect, generation component 130 can generate two different waveforms, each waveform comprising a different and unique frequency spectrum. Each waveform can comprise a high-frequency property, such that the waveform is undetectable to the human eye. Furthermore, in an aspect, generation component 130 can generate a first waveform and a second waveform both comprising a large enough difference in frequency spectra to facilitate proper decoding. By generating a large enough difference in frequency spectra, a decoder can detect such frequency spectrum differences to determine the various differences in code between each respective waveform pattern.
In an aspect, generation component 130 in connection with coding component 110 can encode the image with the waveform patterns on the vertical surface of the image. Each waveform represents an encoded bit, represented by the number 0 or 1. Each waveform is different and is oriented in a different direction, pi/4 and -pi/4, respectively. The waveforms are oriented in non-horizontal and non-vertical directions because the human eye is not sensitive to non-horizontal and non-vertical waveforms; cameras, however, can detect such waveforms. In another aspect, the waveforms can be oriented in a variety of directions, such as pi/6 or -5*pi/6. In an aspect, the difference between the two directions of the waveforms can be pi/2. In an aspect, the direction of the waveforms can promote efficacious demodulation of the waveforms in various devices.
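A toy illustration of two oblique waveform blocks at pi/4 and -pi/4 representing the bits 0 and 1 (the block size and spatial frequency are illustrative values, not taken from the patent):

```python
import numpy as np

def waveform_block(bit, size=8, freq=0.5):
    """Sinusoidal pattern oriented at +pi/4 for bit 0 and -pi/4 for bit 1.
    freq is in cycles per pixel; both parameters are illustrative."""
    y, x = np.mgrid[0:size, 0:size]
    theta = np.pi / 4 if bit == 0 else -np.pi / 4
    # Project pixel coordinates onto the chosen oblique direction
    u = x * np.cos(theta) + y * np.sin(theta)
    return np.sin(2 * np.pi * freq * u)

b0 = waveform_block(0)
b1 = waveform_block(1)
# The two patterns differ only in orientation, which is what a decoder's
# matched filter exploits while the oblique direction keeps them unobtrusive.
```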
In an aspect, as the waveforms are generated (e.g., using generation component 130) and modulated on the original image, the intensity modification is applied to the original image. In another aspect, the waveforms can be generated on a modified image where the intensity modification (e.g., using modification component 120) occurs. In an aspect, modification can be achieved by modifying the fewest points on the image to maintain the visual quality of the image, such that the modification occurs within limits (e.g., a tolerable modification range) in order to prevent the image from being distorted.
In another aspect, system 100 can employ a storage component 140 that stores the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the set of data. In an aspect, a hidden watermark is a marker that can store the waveform patterns and associated data within the marker. The watermark can carry various forms of data, such as data corresponding to a video, image, map, link, web data, or audio file. All such data can be accessed via the hidden watermark with a device comprising a camera and a decoder.
Furthermore, in an aspect, system 100 can employ an embedding component 150 that embeds the hidden watermark into the image subject to an intensity modification limit based on the set of image data such that the hidden watermark is invisible to a user. In an aspect, the hidden watermark can be embedded (e.g., using embedding component 150) within an image or video comprising a set of images such that the watermark is invisible to the human eye. The benefit of the invisible feature of the digital watermark is to facilitate the communication of data within an image or video in the absence of disturbing the aesthetic qualities of the image or video.
Turning now to FIG. 2, illustrated is a non-limiting example of system 200 comprising the components of system 100 and further comprising a verification component 210. In an aspect, verification component 210 verifies that the set of image data is capable of demodulation. In an aspect, after modification component 120 modifies the intensity value of the image data, verification component 210 verifies or checks the result to determine whether “bad blocks” of image pixels still exist. A bad  block is identified as one or more image pixels that cannot be reliably modulated because the difference in frequency between waveforms is too small or inverted.
In an aspect, verification component 210 can check whether a bad block continues to exist within the image. In the event a bad block still exists, modification component 120 can modify the intensity value of the pixels associated with the bad block to ensure that the respective block can be reliably demodulated. In the event increasing or decreasing the intensity modification value of the pixels fails to remedy the problems with the bad block, then the modification component 120 can modify more points within the bad block area to improve its performance. In an instance, modification component 120 can increase the waveform width, however such increase can lower the visual quality of the image. In an aspect, verification component 210 continues to verify whether or not bad blocks exist and modification component 120 can modify the properties of the pixels, based on the verification, associated with the bad blocks until there are no longer bad blocks.
Referring to FIG. 3, illustrated is a non-limiting example system 300 for decoding an image with hidden data. The system 300 comprises: a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations, the executable components comprising: an image enhancement component 310 configured to at least one of eliminate a texture distortion of an image comprising a set of pixels, equalize a contrast value corresponding to the image, adjust a signal to noise ratio (SNR) associated with the image, adjust a focus associated with the image, or remove a blurred image segment of the image determined to be blurry according to a defined blur criterion; a conversion component 320 configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels; and a determination component 330 configured to determine a position of the watermark within the image based on a solution to a convex problem.
In an aspect, image enhancement component 310 performs various tasks to pre-process an image and improve the image quality prior to decoding the image. In an aspect, the image pre-processing (e.g., using image enhancement component 310) is intended to diminish the Moiré effect on the display screen, which can negatively affect the decoding outcome of system 300. The Moiré effect is a visual occurrence of a set of lines or dots superimposed on another set of lines or dots, where the two sets differ in size, angle, or spacing. The visual occurrence can be caused by an overlap of the grid structure of the display screen pixels and the sensor pixels of a camera used to capture the image. The potential for the Moiré effect can be estimated by system 300 by analyzing the frequency response of the image. In an aspect, image enhancement component 310 can estimate the frequency range of the Moiré effect by detecting the focus distance (e.g., the angle between the mobile device and the vertical direction of the image) using the mobile device sensor, as well as the camera angle viewpoint and camera resolution. In another aspect, image enhancement component 310 can remove the Moiré effect while minimizing distortion of other image information using a Butterworth filter.
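One way to realize a frequency-domain Butterworth band-reject filter for suppressing an estimated Moiré band is sketched below with illustrative cut-off parameters (the patent does not specify the filter design, so this is a standard textbook formulation, not the patent's implementation):

```python
import numpy as np

def butterworth_bandreject(shape, d0, w, order=2):
    """Frequency-domain Butterworth band-reject transfer function.
    d0: centre radius of the band to suppress (the estimated Moire band),
    w: band width, order: filter order. All values are illustrative."""
    rows, cols = shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u**2 + v**2)
    d = np.where(d == 0, 1e-9, d)  # avoid division by zero at DC
    return 1.0 / (1.0 + ((d * w) / (d**2 - d0**2 + 1e-12))**(2 * order))

def remove_moire(image, d0=0.3, w=0.05, order=2):
    """Suppress an estimated Moire frequency band while keeping other content."""
    F = np.fft.fft2(image.astype(float))
    H = butterworth_bandreject(image.shape, d0, w, order)
    return np.real(np.fft.ifft2(F * H))
```

Frequencies far from the rejected band pass through nearly unattenuated, so a flat image (energy only at DC) is essentially unchanged.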
In another aspect, system 300 employs conversion component 320 configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels. In an aspect, conversion component 320 can perform a histogram equalization technique to distribute the contrast intensities of image blocks and remove the effect of background intensity shifts. In an aspect, conversion component 320 can binarize a point of the image using nearby points based on the histogram equalization. For instance, conversion component 320 can consider the intensity of nearby points and determine a difference between the maximum intensity value and the minimum intensity value. Furthermore, conversion component 320 can assign a 0 or 1 to a point based on the determined maximum intensity value or minimum intensity value. The binarization can also be based on other image parameters that can vary given a background intensity characteristic.
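A minimal sketch of window-based binarization, assuming each pixel is compared against the midpoint of the minimum and maximum intensities in its surrounding window (one plausible reading of the description; the window size is illustrative):

```python
import numpy as np

def binarize(image, window=3):
    """Assign 0/1 per pixel by comparing it against the midpoint of the min and
    max intensities in a small window around it. The midpoint rule is an
    assumption; the patent only describes using nearby min/max intensities."""
    pad = window // 2
    Ip = np.pad(image.astype(float), pad, mode="edge")
    out = np.zeros(image.shape, dtype=np.uint8)
    H, W = image.shape
    for i in range(H):
        for j in range(W):
            win = Ip[i:i + window, j:j + window]
            mid = (win.max() + win.min()) / 2.0
            out[i, j] = 1 if image[i, j] >= mid else 0
    return out
```

Because the threshold is local, a slow background intensity shift moves the min and max together and does not disturb the assigned bits.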
Also, system 300 can employ a determination component 330 configured to determine a position of the watermark within the image based on a solution to a convex problem. In an aspect, determination component 330 can detect the entire image frame from the binarized image using an optimization method. In an aspect, a camera (e.g., mobile device camera) captures a picture of the watermarked image, wherein the watermarked image is a perspective transformed version of the original non-distorted image. The determination component 330 can facilitate determining (e.g., using matching component 410) the best matched perspective transform function of the image based on a convex optimization method. The convex optimization method can make use of an interior point method to determine the best matched perspective transform function and, accordingly, the position of the watermark within the original image.
Referring now to FIG. 4, system 400 can comprise the components of system 300 and further comprise a matching component 410 configured to match a pixel of the set of pixels to a transform matrix, wherein the solution to the convex problem is based on a match between the pixel and the transform matrix. As disclosed above, matching component 410 can match a perspective transform function to the binarized image. In an aspect, the matching can be a block matching to find an initial transformation using a normalized correlation. For instance, points (x, y) of a grid (e.g., 3x3) within an image can be matched to a point (x’, y’), which maximizes the normalized correlation using a restricted window (e.g., 7x7, 11x11). The matching point pairs of (x, y) and (x’, y’) can be used to calculate a translation between the reference image I(x, y) and the target image I’(x’, y’). In an aspect, matching component 410 can choose a translation that maximizes a normalized correlation based on a block matching of the image.
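The block matching step above can be sketched as follows, with a zero-mean normalized correlation and a restricted search window (function names and the search radius are illustrative):

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(ref_block, target, cy, cx, radius=2):
    """Search a restricted window around (cy, cx) in the target image for the
    displacement maximizing normalized correlation (block matching sketch)."""
    h, w = ref_block.shape
    best, best_pos = -2.0, (cy, cx)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and 0 <= x and y + h <= target.shape[0] and x + w <= target.shape[1]:
                c = normalized_correlation(ref_block, target[y:y + h, x:x + w])
                if c > best:
                    best, best_pos = c, (y, x)
    return best_pos, best
```

The matched point pairs found this way would seed the perspective transform estimation described above.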
Referring now to FIG. 5, system 500 can comprise the components of system 400 and further comprise modulation component 510 configured to modulate a first subset of values of the set of binary values with a first waveform and a second subset of values of the set of binary values with a second waveform, and wherein the first waveform and the second waveform are stored within a hidden watermark. In an aspect, modulation component 510 can convert image data into a waveform suitable for transmission over a communication channel. Modulation component 510 can convert image data into waveforms that are unique in frequency and other wave properties. For instance, the waveforms can differ in either frequency or amplitude. Furthermore, in an aspect, the waveform patterns can be designed to be invisible to the human eye where the waveform comprises a high frequency or oblique structure. Furthermore, the image can have a low contrast and still work with the waveforms to provide an invisible waveform pattern within the image.
Referring now to FIG. 6, system 600 can comprise the components of system 500 and further comprise demodulation component 610 configured to identify a set of frequency domain information associated with the first waveform and the second  waveform using a matching filter. In an aspect, demodulation component 610 can receive a waveform pattern from an image via a device camera and convert the waveform into a code to be accessed by a decoder. In an aspect, demodulation component 610 in connection with determination component 330 can identify or detect the entire image frame from the binarized image using an optimization method and a synchronization code extracted from points (e.g., points detected by determination component 330) within the image. In an aspect, a matched filter can be used to facilitate the identification or detection of the binarized image, where the matched filter can maximize the signal to noise ratio resulting in a correct interpretation of a binary message within the image.
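A matched-filter demodulation step can be sketched as correlating a received block against candidate waveform templates and keeping the scores as soft hints (a generic matched filter, not the patent's specific implementation):

```python
import numpy as np

def matched_filter_demod(block, templates):
    """Correlate a received block against each candidate waveform template and
    return the index of the best match plus all scores (soft information)."""
    scores = []
    for t in templates:
        tb = (block - block.mean()).ravel()
        tt = (t - t.mean()).ravel()
        denom = np.linalg.norm(tb) * np.linalg.norm(tt)
        # Normalized correlation maximizes SNR for the known template
        scores.append(float(tb @ tt / denom) if denom > 0 else 0.0)
    best = int(np.argmax(scores))
    return best, scores
```

The gap between the two scores indicates how reliable the decision is, which feeds the soft decoding described later.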
Furthermore, demodulation component 610 can demodulate sample points within the image based on an estimation of the synchronization quality between code words of a symbol stream (e.g., waveform pattern) . In another aspect, the demodulation can be based on a calculated frequency difference between detected waveform patterns within the image. Also, in an aspect, the position of sample points within the image can be adjusted and provide a new basis for a re-estimate of the synchronization quality. For instance, a region of the image can be identified for synchronization based on a detected sample point (e.g., using determination component 330) within the image and corresponding blocks of the image are determined to be in a particular position relative to the detected sample point as estimated using synchronization codes from the detected point. Thus, the synchronization quality can be re-estimated based on another sample detected point. The synchronization quality can be increased, decreased or oscillated based on the re-estimates. The re-estimation process can be terminated at the time the synchronization quality is observed to decrease or oscillate.
In another aspect, points nearby one another tend to have similar or same synchronization quality results. A message passing algorithm can be utilized to build a message passing network between nearby points, which acts as a neural network to facilitate implementation of a global synchronization matching process of image blocks. The synchronization and demodulation activities can be iteratively performed until they converge on a target point, after which the captured image is demodulated (e.g., using demodulation component 610) based on an error correction code decoder.
In another aspect, the demodulation process (e.g., performed by demodulation component 610) also comprises dividing the captured image into numerous blocks within a vicinity of the spatial domain based on the synchronization results described above. Furthermore, each block individually undergoes a Fourier transform analysis, which is a technique to decompose each waveform signal into sinusoids. In an aspect, system 600 can utilize a discrete Fourier transform (DFT) technique, which can decompose digitized signals where the DFT uses numbers to represent input and output signals. In an aspect, the DFT technique facilitates decomposition of each image block into its sine and cosine components in order to obtain a frequency domain representation corresponding to each waveform of a block. The image can then be represented by corresponding frequency domains, each point of the image representing a particular frequency contained in the spatial domain image. The DFT technique can be used to describe a frequency response of the waveforms, where the frequency response is a description of how a waveform changes in amplitude and phase as the waveform passes through a point of the image. In another aspect, the waveforms are capable of being decoded using DFT techniques.
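The per-block DFT analysis can be illustrated with a short sketch: a pure sinusoid in a block concentrates its energy in a single frequency bin, which is the kind of frequency response the demodulator looks for (block size and frequency are illustrative values):

```python
import numpy as np

def block_dft_peak(block):
    """2-D DFT of one image block; return the (row, col) frequency bin with the
    largest non-DC magnitude, i.e. the dominant embedded waveform frequency."""
    F = np.abs(np.fft.fft2(block - block.mean()))  # remove DC before peaking
    idx = np.unravel_index(np.argmax(F), F.shape)
    return idx, F[idx]

# A sinusoid along the rows concentrates its energy in one vertical-frequency bin.
n = 16
y = np.arange(n)
block = np.outer(np.sin(2 * np.pi * 2 * y / n), np.ones(n))  # 2 cycles vertically
peak, mag = block_dft_peak(block)
```

For a real-valued block the spectrum is conjugate-symmetric, so the peak appears at bin 2 or its mirror bin n-2.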
Referring now to FIG. 7, system 700 can comprise the components of system 600 and further comprise an association component 710 configured to associate a first soft value with the first waveform and a second soft value with the second waveform based on a match of the first waveform and the second waveform to the set of template waveforms. As discussed above, the DFT uses numbers to represent input and output signals. In an aspect, association component 710 can associate a first soft value with the first waveform and a second soft value with the second waveform based on the matching (e.g., using matching component 410) described above. The first soft value and second soft value can take on a range of values greater than or equal to 0 and less than or equal to 1. Unlike hard information, which takes on values of only 0 or 1, the soft information can take on a range of values in between 0 and 1, where such extra information indicates the reliability of each input data point, thus facilitating better estimates of the original data corresponding to the original image. Thus, association component 710, via the association of soft information with the waveforms, allows system 700 to perform better in the presence of corrupted data than hard information would.
In an aspect, each waveform (e.g., a first waveform and a second waveform) can comprise a high DFT value on its frequency response point; however, each waveform can comprise the high DFT value at a different point position along the waveform because of each waveform’s different frequency direction. For instance, a first waveform can possess a synchronized horizontal frequency and vertical frequency while the second waveform can possess unsynchronized horizontal and vertical frequencies. Furthermore, each waveform comprises a unique distribution on the frequency domain, such that a distribution of frequency information can be determined using matched filters employed by matching component 410. The matched filters are utilized to determine which distribution best suits the DFT result by determining a difference between matched factors (e.g., a positive difference or negative difference) associated with a frequency domain of each waveform. The result of the difference between matched factors can be used by demodulation component 610 to determine a demodulation result via a demodulation process. The demodulation result can contain soft information corresponding to the waveforms.
Referring now to FIG. 8, system 800 can comprise the components of system 700 and further comprise a calculation component 810 configured to determine a set of statistics based on the set of frequency domain information, and wherein the set of statistics comprise a variance, a mean, a log ratio, a Gaussian approximation, and a standard deviation of the set of frequency domain information. In an aspect, calculation component 810 can employ a channel estimator that uses the demodulation result described above to generate a histogram to represent the tonal distribution of pixels (each corresponding to a tonal value) within the image. In an aspect, the histogram of the image can graphically illustrate the tonal distribution of pixels within the image.
In another aspect, calculation component 810 can determine whether the histogram represents a bi-Gaussian distribution of the two Gaussian distributions associated with the first waveform and the second waveform of an image block. In the event the histogram is not bi-Gaussian, the results of the histogram are disregarded and the corresponding image frame is not decoded. Furthermore, the histogram of the next image frame is determined (e.g., using calculation component 810) to be bi-Gaussian-like or not, in order to decide whether to decode such image frame. In an aspect, the absence of a bi-Gaussian-like histogram indicates that a demodulation failure (e.g., using demodulation component 610) occurred.
Regarding a histogram determined to be bi-Gaussian-like, calculation component 810 can estimate the variance and the mean of the information presented by such histogram. The mean and variance of the histogram can be used to build an additive white Gaussian noise (AWGN) channel model comprising a channel that produces AWGN. In an aspect, the channel uses a matched filter to correlate an incoming signal with a reference copy of a transmit signal such that the matched filter maximizes the signal-to-noise ratio for a known signal (e.g., a first waveform or second waveform). An estimation of the AWGN channel can be used to calculate (e.g., using calculation component 810) a Log Likelihood Ratio according to various properties of the AWGN channel. The Log Likelihood Ratio can be used to compare the fit of various channel models to determine a fit for respective strings of encoded code for decoding. In an aspect, the Log Likelihood Ratio information can be passed to an LDPC decoder (e.g., decoding component 910); however, prior to decoding, calculation component 810 can quantize the Log Likelihood Ratio to simplify the decoding process. For instance, calculation component 810 can compute a p-value or comparative critical value to decide whether to reject one AWGN channel model in favor of an alternative AWGN channel model. Furthermore, calculation component 810 can use an iterative method to optimize the quantization of the Log Likelihood Ratios.
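A sketch of the Log Likelihood Ratio calculation and its quantization under the bi-Gaussian AWGN model described above (the means, common standard deviation, and quantization step are illustrative assumptions):

```python
import numpy as np

def awgn_llr(y, mu0, mu1, sigma):
    """Log-likelihood ratio log p(y|bit=0)/p(y|bit=1) for a matched-filter
    output y under a bi-Gaussian model with means mu0/mu1 (estimated from the
    histogram) and common std sigma. Positive LLR favours bit 0."""
    return ((y - mu1)**2 - (y - mu0)**2) / (2 * sigma**2)

def quantize_llr(llr, step=0.5, max_mag=4.0):
    """Uniform quantization and clipping of LLRs before the LDPC decoder,
    simplifying the decoding as described in the text."""
    return np.clip(np.round(np.asarray(llr) / step) * step, -max_mag, max_mag)
```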
Referring now to FIG. 9, system 900 can comprise the components of system 800 and further comprise a decoding component 910 configured to decode the hidden watermark based on an error correction code that uses a low density parity check code or a set of message passing rules wherein the low density parity check code is defined to facilitate cross checking a decoding of the hidden watermark for decoding errors and wherein the set of message passing rules define code constraints to inhibit decoding errors.
In an aspect, as mentioned above, an advantage of the disclosed system is that the LDPC code can make use of soft information to facilitate decoding the encoded code with greater accuracy. In an aspect, decoding component 910 can decode the LDPC code using a simplified min-sum (MS) decoding algorithm, which is a message passing decoding algorithm used for LDPC code. The MS decoding algorithm facilitates faster decoding (e.g., using decoding component 910) performance over other algorithms such as a sum-product algorithm. Furthermore, the MS decoding algorithm can reduce block  error rates rather than bit error rates, where the block error rates provide meaningful and useful information. In an aspect, decoding component 910 can decode the LDPC code by employing the decoding algorithms in many instances over many iterations. In another aspect, system 900 can establish a limit to the number of instances decoding component 910 employs the decoding algorithm.
After each iteration (e.g., performed using decoding component 910), decoding component 910 can validate whether a valid code word has been decoded. In the event the code word is valid, the decoding ceases and moves to the next encoded string; in the event the code word is invalid, the decoding component can perform another iteration of the decoding algorithm. In an aspect, decoding component 910 can employ a source decoder to receive a corrected bit stream of data, which is then presented to a user. A source decoder or source encoder can utilize ASCII or other such compression methods to compress the decoded or coded information.
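The iterate-and-validate loop can be sketched with a generic textbook min-sum decoder on a toy parity-check matrix (this is not the patent's implementation; the matrix, LLR values, and iteration limit are illustrative):

```python
import numpy as np

def min_sum_decode(H, llr, max_iters=20):
    """Simplified min-sum message-passing decoder for an LDPC parity matrix H.
    llr: channel LLRs (positive favours bit 0). Stops as soon as a valid
    code word (H @ c mod 2 == 0) is found, as described in the text."""
    m, n = H.shape
    M = np.zeros((m, n))                      # check-to-variable messages
    for _ in range(max_iters):
        total = llr + M.sum(axis=0)           # posterior LLR per bit
        hard = (total < 0).astype(int)
        if not np.any(H @ hard % 2):          # all parity checks satisfied
            return hard, True
        V = H * (total - M)                   # variable-to-check messages
        for i in range(m):
            idx = np.flatnonzero(H[i])
            v = V[i, idx]
            sgn = np.prod(np.sign(v + 1e-12))
            absv = np.abs(v)
            for k, j in enumerate(idx):
                s = sgn * np.sign(v[k] + 1e-12)   # sign product of the others
                M[i, j] = s * np.delete(absv, k).min()
    return (llr + M.sum(axis=0) < 0).astype(int), False
```

Here a mildly negative LLR on one bit (a likely channel error) is corrected within a couple of iterations, and the validity check terminates the loop early.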
Referring now to FIG. 10, system 1000 can comprise the components of system 900 and further comprise an encoding component 1010 configured to encode the first waveform and the second waveform with a first code word and a second code word respectively, and wherein the first waveform and the second waveform are encoded based on error correcting code rules defined to facilitate detection of a code error during transmission of the first code word or the second code word via a channel. In an aspect, system 1000 can employ encoding component 1010 for encoding the waveforms, comprising hidden data, with code words using LDPC coding methods. The LDPC codes comprise code words, which comprise message bits and parity bits. A code word, the bits of which are also referred to as code word bits, can comprise a number of bits. For instance, a code word can comprise three message bits and three parity-check bits. A parity bit is used in error detecting code, where the parity bit indicates whether the number of bits with the value of one in a string of bits is even or odd.
In an aspect, encoding component 1010 can impose code word constraints that define how to encode a message (e.g., using parity bits) . Furthermore, numerous code word constraints can be communicated in matrix form. Accordingly, the code word constraints can indicate whether an error has occurred during transmission of code words. For instance, a code word bit may have been flipped after transmission of the code words (e.g., using a camera device) . A code word error can be detected in that every code word  in the code must satisfy a constraint rule and any such code word determined to not satisfy such constraint rules can be deemed to be an error.
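The constraint-checking idea can be sketched with a toy parity-check matrix: a valid code word satisfies every row of H modulo 2, and a single flipped bit violates at least one constraint (the matrix and code word are illustrative):

```python
import numpy as np

# Toy parity-check matrix: each row is one constraint a code word must satisfy.
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1]])

def satisfies_constraints(codeword):
    """A word is a valid code word iff every parity constraint holds (mod 2)."""
    return not np.any(H @ codeword % 2)

valid = np.array([1, 1, 0, 0, 1])       # both rows sum to an even number
flipped = valid.copy()
flipped[0] ^= 1                         # a bit flipped during transmission
```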
Referring now to FIG. 11, system 1100 can comprise the components of system 1000 and further comprise a weighting component 1110 configured to associate a weight to the first code word and the second code word respectively based on first channel state information and second channel state information corresponding to the first code word and the second code word respectively. In an aspect, system 1100 can utilize useful information from previous frames that could not be decoded, combining it in a maximum ratio sense so that a diverse array of information is passed to the next frame. For instance, system 1100 can employ weighting component 1110 that associates a weight with a combination of Log Likelihood Ratios of two or more frames, such that system 1100 can utilize previously decoded Log Likelihood Ratios to determine channel information of the current frame. In an aspect, channel information can comprise channel properties such as how a signal propagates from a transmitter to a receiver in terms of scattering, fading and decay of power.
In another aspect, the weight of each frame can be determined (e.g., by weighting component 1110) based on corresponding channel estimation information. In an aspect, the channel estimation weight is proportional to its channel signal to noise ratio (SNR). The weighting method employed by weighting component 1110 can be quite useful when the SNR of a channel is low relative to other channels. In another aspect, system 1100 can make use of other cooperative diversity techniques to facilitate decoding based on a diversity of information. In an aspect, system 1100 can employ a cooperative maximum-ratio combining (C-MRC) method and λ–MRC method to achieve decoding based on a diversity of information. The C-MRC method can comprise adding a diversity of signals from each channel together in order to achieve a single improved signal.
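The SNR-proportional weighting can be sketched as a maximum-ratio-style combination of per-frame Log Likelihood Ratios (the frame values and SNRs are illustrative; the patent's exact combining rule is not specified here):

```python
import numpy as np

def combine_llrs(frame_llrs, frame_snrs):
    """Maximum-ratio-style combining: weight each frame's LLRs in proportion to
    its estimated channel SNR, then sum, so that frames received over poor
    channels contribute less to the decision."""
    frame_llrs = np.asarray(frame_llrs, dtype=float)
    w = np.asarray(frame_snrs, dtype=float)
    w = w / w.sum()                       # normalize the SNR-proportional weights
    return (w[:, None] * frame_llrs).sum(axis=0)
```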
Turning now to FIG. 12, illustrated is a system 1200 comprising a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a receiver component 1202 that receives a data file and a video, wherein the data file comprises a set of input data; a converter component 1204 that converts the data file and the video to a data bitstream and a set of image frames, respectively; an encoding component 1206 that generates a set of data packets based on an encoding of the data bitstream, wherein the set of data packets represent the set of input data, and wherein a subset of data packets of the set of data packets represent a subset of input data of the set of input data; an embedding component 1208 that embeds, as an imperceptible transmission object, a first subset of input data into a first image frame of the set of image frames and a second subset of input data into a second image frame of the set of image frames, wherein an embedding of the first subset of input data and the second subset of input data is based on a waveform pattern modulation; and an integration component 1210 that generates an encoded video based on an integration of the first image frame and the second image frame.
In an aspect, system 1200 employs a receiver component 1202 that receives an input data file and an input video from a user. The input video is converted, using converter component 1204 employed by system 1200, to a series of image frames. In another aspect, a source encoding can be implemented (e.g., using encoding component 1206) on the input data file to obtain the bitstream of the entire data file. Also, encoding component 1206 can execute inter-frame network code encoding on the data bitstream and generate a series of data packets where each packet carries only partial information of the input data file. Furthermore, in an aspect, intra-frame error correcting code encoding can be used to encode the data packets and distribute each data packet to a particular image frame of a series of image frames.
In another aspect, each data packet is embedded (e.g., using embedding component 1208) into a corresponding image frame as an imperceptible watermark using a waveform modulation technique. Furthermore, in an aspect, various training symbols can be embedded (e.g., using embedding component 1208) in the image frames to facilitate a decoding of the embedded watermark and corresponding data. In another aspect, an encoded video is generated (e.g., using integration component 1210) by integrating all the encoded image frames. The encoded video generated by system 1200 provides hidden data that is virtually imperceptible to the human eye. Furthermore, the encoded video generated by system 1200 has a high degree of visual quality and can be decoded by mobile devices at a high transmission rate.
In an aspect, system 1200 can be implemented as a display-to-camera form of communication where a camera can capture the video or images comprising embedded hidden data. In another aspect, system 1200 can be implemented as a free-space optical data communication framework where the transmitter of the hidden data or the imperceptible transmission object can be any of a watermark, a printed code label, a light-emitting diode (LED) array, or an electronic display. In another aspect, the receiver of the embedded data or the device that receives the embedded data can be any of a sensor, a mobile device, a camera, a tablet, or a closed circuit television. Furthermore, in an aspect, the information coding described throughout can be spatial coding (e.g., coding over pixels in an image) as well as temporal coding (e.g., coding over frames of images). Also, in an aspect, the channel coding employed by system 1200 can utilize any error control code including, but not limited to, Reed-Solomon codes, low-density parity check (LDPC) codes, rateless codes, and other such codes.
In an aspect, system 1200 can employ inter-frame network code encoding techniques. In an aspect, a transmitter of system 1200 can emit data through the encoded video, and the video consists of many image frames, each of which contains only partial information of the data. If the transmitter broadcasts data and there is no feedback from the receiver, system 1200 cannot determine which frames have been successfully decoded by the user. Thus, system 1200 cannot rearrange the image frame sequence according to an Automatic Repeat-reQuest (ARQ) scheme. In this situation, the purpose of the inter-frame encoding is not to correct errors, but to reconstruct the data from the partial information in those image frames. This is because the intra-frame decoding passes only successfully decoded frames onward, which means the partial information reaching the inter-frame decoder is already error-free. The optimal inter-frame encoding scheme is one in which the receiver never receives redundant frames.
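The disclosure does not fix a particular inter-frame code; the following non-limiting sketch illustrates the idea in the spirit of random linear network coding over GF(2), where each image frame carries a random XOR combination of the K source packets together with its coefficient vector, so that any K linearly independent frames suffice to reconstruct the data and no ARQ feedback is needed. The construction and names here are assumptions for the example:

```python
import random

# Sketch in the spirit of random linear network coding over GF(2): each frame
# carries a random XOR combination of the K source packets plus its
# coefficient vector. This specific code is an assumption, not the disclosed one.

def encode_frame(source_packets, rng):
    """Return (coefficient vector, payload) for one image frame."""
    k = len(source_packets)
    coeffs = [rng.randint(0, 1) for _ in range(k)]
    if not any(coeffs):
        coeffs[rng.randrange(k)] = 1  # avoid the useless all-zero combination
    payload = [0] * len(source_packets[0])
    for coeff, packet in zip(coeffs, source_packets):
        if coeff:
            payload = [p ^ b for p, b in zip(payload, packet)]
    return coeffs, payload

rng = random.Random(1)
source = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]
coeffs, payload = encode_frame(source, rng)  # one frame's partial information
```

Because every frame is an independent random combination, a receiver that joins late or misses frames simply keeps collecting until it has K independent ones.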
In another aspect, as mentioned above, various training symbols can be embedded (e.g., using embedding component 1208) in the image frames to facilitate later decoding of the embedded watermark and corresponding data. In various situations, a camera may receive two display frames in one exposure time and output a linear combination of the display frames. The combined frame can cause the demodulation and the later intra-frame decoding to fail. Thus, training symbols are uniformly distributed over the encoded image. However, each training symbol must differ from the training symbol at the same position in the adjacent frame; that is, training symbols at the same position in two adjacent frames carry two different waveforms. Thus, it can be determined whether or not two adjacent frames have been combined. Also, in an aspect, the training symbols can assist later time-domain interfered frame decoding processes by allowing extraction of the weights of the two combined frames.
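The weight-extraction idea can be sketched as follows. Assuming, for illustration only, that the two training waveforms are an orthogonal sine/cosine pair, correlating a captured training block against each known waveform both reveals that two frames were combined and recovers the mixing weight; the waveform choice and block length are assumptions for the example:

```python
import math

# Illustrative sketch: training symbols at the same position in two adjacent
# frames carry different known waveforms w0 and w1. If a captured block is
# a*w0 + (1-a)*w1, correlation against each waveform recovers the weight a.
# The orthogonal sine/cosine pair and block length N are assumptions.

N = 64
w0 = [math.sin(2 * math.pi * 2 * t / N) for t in range(N)]  # training waveform, frame i
w1 = [math.cos(2 * math.pi * 2 * t / N) for t in range(N)]  # training waveform, frame i+1

def correlate(x, w):
    return sum(a * b for a, b in zip(x, w))

def estimate_weight(block):
    """Estimate a for a captured block modeled as a*w0 + (1-a)*w1."""
    c0 = correlate(block, w0) / correlate(w0, w0)
    c1 = correlate(block, w1) / correlate(w1, w1)
    return c0 / (c0 + c1)

a_true = 0.3
mixed = [a_true * x + (1 - a_true) * y for x, y in zip(w0, w1)]
a_hat = estimate_weight(mixed)  # recovers the mixing weight
```

A weight near 0 or 1 indicates a cleanly captured single frame; an intermediate weight flags an interfered frame for the time-domain decoding path.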
Referring now to FIG. 13, illustrated is a system 1300 comprising the components of system 1200 and further comprising a distribution component 1310 that distributes a first LDPC encoded data packet and a second LDPC encoded data packet to the first image frame and the second image frame, respectively. In an aspect, the LDPC encoded data packets can be distributed to each image frame. In another aspect, each data packet is embedded (e.g., using embedding component 1208) into a corresponding image frame as an imperceptible watermark using a waveform modulation technique.
Turning now to FIG. 14, illustrated is a system 1400 comprising a memory 102 that stores executable components; and a processor 104, coupled to the memory 102, that executes or facilitates execution of the executable components to perform operations of the system, the executable components comprising: a receiving component 1402 that receives a set of image frames of a video from a device; an implementation component 1404 that corrects a perspective distortion of the set of image frames, corrects a noise distortion of the set of image frames, outputs a set of corrected image frames, and retrieves module information from the set of image frames based on a spatial domain synchronization and a demodulation of the set of image frames; a decoding component 1406 that retrieves a set of embedded data packets from the set of corrected image frames based on an intra-frame low-density parity check (LDPC) decoding of the set of corrected image frames; and a reconstruction component 1408 that reconstructs a data file from the set of embedded data packets based on a set of inter-frame network code decoding rules.
In an aspect, system 1400 employs a receiving component 1402 that receives a camera-captured video comprising a set of image frames from a device such as a mobile phone. System 1400 can employ a decoder to extract the image frames from the video. In an aspect, the encoded video can be seriously distorted during the camera capturing process; thus, a spatial-domain synchronization and demodulation technique is implemented (e.g., using implementation component 1404) on each frame to correct perspective distortion and noise distortion and to retrieve the module information in a log likelihood ratio (LLR) format. Sometimes, the camera capturing speed may not be synchronized with the display refreshing speed; thus, two adjacent frames, which carry different data packets on the display, can be received in a single exposure time of the camera. This can cause an interfered frame, which is a combination of two adjacent frames on the display. By utilizing properties of the network code, a time-domain interfered frame decoding method (e.g., using decoding component 1406) can be used to solve the problem. In another aspect, normally received frames can be passed to an intra-frame LDPC decoder to retrieve the embedded data packets.
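The per-module soft output can be sketched as follows. Assuming, purely for illustration, that each module carries one bit as a small intensity offset of +A (bit 0) or -A (bit 1) observed in additive Gaussian noise of variance sigma^2, the standard LLR expression applies; the mapping and noise model are assumptions, not fixed by the disclosure:

```python
# Illustrative sketch of module information in log likelihood ratio (LLR)
# form under a +/-A offset and Gaussian noise assumption:
#   LLR = log P(y | bit 0) - log P(y | bit 1) = 2*A*y / sigma^2

def module_llr(y: float, amplitude: float, sigma: float) -> float:
    """Soft value for one module; positive favors bit 0, negative favors bit 1."""
    return 2.0 * amplitude * y / (sigma ** 2)

llr = module_llr(0.8, amplitude=1.0, sigma=1.0)  # positive, so bit 0 is more likely
```

Passing such soft values, rather than hard bit decisions, is what lets the downstream LDPC decoder exploit reliability information from the distorted capture.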
Also, in an aspect, if multiple samples of a single display frame are captured, then a Tanner graph-based decoding diversity algorithm can be used to decode the data packets. Furthermore, in an aspect, if multiple users are decoding the same video, a collaborative decoding and sharing scheme based on network coding can be utilized to improve the overall decoding speed. Subsequent to decoding and receiving enough data packets, system 1400 employs reconstruction component 1408, which reconstructs the original data file by implementing the inter-frame network code decoding algorithm.
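The inter-frame reconstruction step can be sketched as Gaussian elimination over GF(2): each successfully decoded frame contributes a coefficient vector and a payload, and the K original packets are recovered once K linearly independent frames are available. The (coeffs, payload) row format is an assumption for the example:

```python
# Non-limiting sketch of inter-frame network code decoding by Gaussian
# elimination over GF(2). Row format (coeffs, payload) is an assumption.

def gf2_solve(rows, k):
    """rows: list of (coeffs, payload) pairs from decoded frames; returns the
    k source packets, or None if the collected frames are not yet full rank."""
    rows = [(list(c), list(p)) for c, p in rows]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # wait for more frames
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[r][1], rows[col][1])])
    return [rows[i][1] for i in range(k)]
```

Because the intra-frame decoder passes along only error-free payloads, no error correction is needed at this stage, only rank completion.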
In an aspect, during the process of video watermark communication, the camera capturing speed can sometimes be faster than the display refreshing speed, in which case the decoder can receive several similar samples that originate from the same frame in the encoded video. This is an oversampling situation, and an extended Tanner graph-based decoding algorithm can take advantage of the multiple samples. Traditional decoding techniques decode every image sample received and discard those samples that fail to be decoded. Here, if there are two image samples and both fail to be decoded individually, the two samples can still be decoded jointly as an integral whole. A network can be built by integrating the previously undecodable code word with a newly received undecoded code word. A modified belief propagation (BP) algorithm can then be used to decode the joint code word.
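A heavily simplified sketch of the diversity gain follows. Two captures of the same displayed frame each fail alone, but because they are independent noisy observations of the same code word, their per-bit LLRs can be combined before re-running the parity checks. A full extended-Tanner-graph BP decoder is omitted; a toy single-parity-check code stands in for the intra-frame LDPC code, and all values are fabricated for illustration:

```python
# Simplified, non-limiting sketch of decoding diversity: sum per-bit LLRs from
# two captures of the same frame and re-check parity. A toy single-parity-check
# code replaces the real LDPC code; values are illustrative only.

def hard_decide(llrs):
    return [0 if llr >= 0 else 1 for llr in llrs]

def parity_ok(bits):
    return sum(bits) % 2 == 0  # toy code: overall parity must be even

sample1 = [0.9, 0.3, 1.1, -1.0]    # hard decision [0, 0, 0, 1]: parity fails
sample2 = [0.8, -1.2, -0.3, -0.9]  # hard decision [0, 1, 1, 1]: parity fails
combined = [a + b for a, b in zip(sample1, sample2)]
# hard decision on the combined LLRs is [0, 1, 0, 1], which passes the check
```

The extended Tanner graph approach generalizes this idea by connecting the variable nodes of all samples to the shared check nodes and running BP jointly.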
Also, in an aspect, a situation can occur where many users decode the same video simultaneously. For example, many users may attempt at the same time to decode a video presented on a large LCD display screen at a shopping mall. In such a scenario, users decoding the same video can exchange their information to speed up each other's decoding by building a local area network (LAN). The LAN can be established via Bluetooth or Wi-Fi, and such an exchange process can shorten the average waiting time to decode a video.
Turning now to FIGS. 15-17, illustrated are methods or flow diagrams in accordance with certain aspects of this disclosure. While, for purposes of simplicity of explanation, the disclosed methods are shown and described as a series of acts, the disclosed subject matter is not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from those shown and described herein. For example, those skilled in the art will understand and appreciate that a method can alternatively be represented as a series of interrelated states or events, such as in a state diagram. Moreover, not all illustrated acts may be required to implement a method in accordance with the disclosed subject matter.
Referring now to FIG. 15, presented is a flow diagram of a non-limiting example of a method 1500 to encode hidden data within an image. At 1502, a message comprising a set of data is encoded with an error correction code (e.g., using encoding component 1010) that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data. At 1504, method 1500 determines (e.g., using determination component 330) a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
At 1506, the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or a modification of the second modifiable image intensity value. At 1508, method 1500 stores (e.g., using storage component 140) the first waveform and the second waveform within a hidden watermark. At 1510, method 1500 embeds the hidden watermark within the image.
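The modulation act at 1506 can be sketched as follows, under illustrative assumptions: each data bit selects one of two sinusoidal patterns of distinct frequencies, which is added row-wise to an 8x8 intensity block at an amplitude capped by that block's modifiable-intensity upper limit so the watermark stays imperceptible. Block size, frequencies, and the row-wise 1-D pattern are assumptions for the example, not parameters fixed by the disclosure:

```python
import math

# Illustrative sketch of waveform pattern modulation over image blocks.
# Block size, frequencies F0/F1, and the row-wise 1-D pattern are assumptions.

BLOCK = 8
F0, F1 = 1, 3  # cycles per block width; distinct, mutually orthogonal tones

def waveform(freq):
    return [math.sin(2 * math.pi * freq * x / BLOCK) for x in range(BLOCK)]

def embed_bit(block_rows, bit, amplitude_limit):
    """Add the selected bit's waveform to every row of an intensity block."""
    w = waveform(F1 if bit else F0)
    return [[pixel + amplitude_limit * w[x] for x, pixel in enumerate(row)]
            for row in block_rows]

flat = [[128.0] * BLOCK for _ in range(BLOCK)]
marked = embed_bit(flat, bit=1, amplitude_limit=2.0)  # low-amplitude pattern
```

The per-block amplitude limit corresponds to the upper limits determined at 1504, which keep the modification below the threshold of visibility.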
Referring now to FIG. 16, presented is a flow diagram of a non-limiting example of a method 1600 to encode hidden data within an image. At 1602, a message comprising a set of data is encoded with an error correction code (e.g., using encoding component 1010) that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data. At 1604, method 1600 determines (e.g., using determination component 330) a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable.
At 1606, the first waveform in the image and the second waveform in the image are identified based on a first magnitude of the first waveform and a second magnitude of the second waveform. At 1608, the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first  modifiable image intensity value, or another modification of the second image intensity value.
Referring now to FIG. 17, presented is a flow diagram of a non-limiting example of a method 1700 to encode hidden data within an image. At 1702, the first subset of data is modulated (e.g., using modulation component 510) with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user and capable of maintaining the first set of waveform characteristics and the second set of waveform characteristics upon occurrence of an image distortion event to the image, modification of the first modifiable image intensity value, or modification of the second image intensity value.
At 1704, the image and a corresponding image signal associated with the hidden watermark embedded within the image are captured by a lens of the system. At 1706, the corresponding image signal associated with the hidden watermark is decoded, wherein the decoding results in a determination of the first subset of data and the second subset of data. At 1708, the first subset of data and the second subset of data corresponding to the first waveform and the second waveform respectively are accessed.
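The identification of the embedded waveform based on magnitude, as in the decoding acts above, can be sketched as follows: a captured block row is correlated against both candidate frequencies, and the frequency with the larger correlation magnitude determines the decoded bit. The block size and frequencies are illustrative assumptions, not values fixed by the disclosure:

```python
import math

# Illustrative sketch of identifying the embedded waveform by its correlation
# magnitude. Block size and candidate frequencies are assumptions.

BLOCK = 8
F0, F1 = 1, 3

def waveform(freq):
    return [math.sin(2 * math.pi * freq * x / BLOCK) for x in range(BLOCK)]

def demodulate_row(row):
    mean = sum(row) / len(row)
    centered = [p - mean for p in row]  # remove the image's DC component
    magnitudes = [abs(sum(c * s for c, s in zip(centered, waveform(f))))
                  for f in (F0, F1)]
    return 0 if magnitudes[0] >= magnitudes[1] else 1

row = [128.0 + 2.0 * math.sin(2 * math.pi * F1 * x / BLOCK) for x in range(BLOCK)]
bit = demodulate_row(row)  # identifies the F1 waveform, i.e. bit 1
```

Because the decision compares relative magnitudes after removing the local mean, it is robust to the brightness and contrast changes the underlying image content introduces.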
EXAMPLE OPERATING ENVIRONMENTS
The systems and processes described below can be embodied within hardware, such as a single integrated circuit (IC) chip, multiple ICs, an application specific integrated circuit (ASIC) , or the like. Further, the order in which some or all of the process blocks appear in each process should not be deemed limiting. Rather, it should be understood that some of the process blocks can be executed in a variety of orders, not all of which may be explicitly illustrated in this disclosure.
With reference to FIG. 18, a suitable environment 1800 for implementing various aspects of the claimed subject matter includes a computer 1802. The computer 1802 includes a processing unit 1804, a system memory 1806, a codec 1805, and a system bus 1808. The system bus 1808 couples system components including, but not limited to, the system memory 1806 to the processing unit 1804. The processing unit  1804 can be any of various available processors. Dual microprocessors and other multiprocessor architectures also can be employed as the processing unit 1804.
The system bus 1808 can be any of several types of bus structure (s) including the memory bus or memory controller, a peripheral bus or external bus, and/or a local bus using any variety of available bus architectures including, but not limited to, Industrial Standard Architecture (ISA) , Micro-Channel Architecture (MSA) , Extended ISA (EISA) , Intelligent Drive Electronics (IDE) , VESA Local Bus (VLB) , Peripheral Component Interconnect (PCI) , Card Bus, Universal Serial Bus (USB) , Advanced Graphics Port (AGP) , Personal Computer Memory Card International Association bus (PCMCIA) , Firewire (IEEE 1394) , and Small Computer Systems Interface (SCSI) .
The system memory 1806 includes volatile memory 1810 and non-volatile memory 1812. The basic input/output system (BIOS), containing the basic routines to transfer information between elements within the computer 1802, such as during start-up, is stored in non-volatile memory 1812. In addition, according to various embodiments, codec 1805 may include at least one of an encoder or decoder, wherein the at least one of an encoder or decoder may consist of hardware, a combination of hardware and software, or software. Although codec 1805 is depicted as a separate component, codec 1805 may be contained within non-volatile memory 1812. By way of illustration, and not limitation, non-volatile memory 1812 can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory 1810 includes random access memory (RAM), which acts as external cache memory. According to present aspects, the volatile memory may store the write operation retry logic (not shown in FIG. 18) and the like. By way of illustration and not limitation, RAM is available in many forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), and enhanced SDRAM (ESDRAM).
Computer 1802 may also include removable/non-removable, volatile/non-volatile computer storage media. FIG. 18 illustrates, for example, disk storage 1814. Disk storage 1814 includes, but is not limited to, devices like a magnetic disk drive, solid state disk (SSD), floppy disk drive, tape drive, Jaz drive, Zip drive, LS-70 drive, flash memory card, or memory stick. In addition, disk storage 1814 can include storage media separately or in combination with other storage media including, but not limited to, an optical disk drive such as a compact disk ROM device (CD-ROM), CD recordable drive (CD-R Drive), CD rewritable drive (CD-RW Drive) or a digital versatile disk ROM drive (DVD-ROM). To facilitate connection of the disk storage devices 1814 to the system bus 1808, a removable or non-removable interface is typically used, such as interface 1816.
It is to be appreciated that FIG. 18 describes software that acts as an intermediary between users and the basic computer resources described in the suitable operating environment 1800. Such software includes an operating system 1818. Operating system 1818, which can be stored on disk storage 1814, acts to control and allocate resources of the computer system 1802. Applications 1820 take advantage of the management of resources by the operating system through program modules 1824, and program data 1826, such as the boot/shutdown transaction table and the like, stored either in system memory 1806 or on disk storage 1814. It is to be appreciated that the claimed subject matter can be implemented with various operating systems or combinations of operating systems.
A user enters commands or information into the computer 1802 through input device (s) 1828. Input devices 1828 include, but are not limited to, a pointing device such as a mouse, trackball, stylus, touch pad, keyboard, microphone, joystick, game pad, satellite dish, scanner, TV tuner card, digital camera, digital video camera, web camera, and the like. These and other input devices connect to the processing unit 1804 through the system bus 1808 via interface port (s) 1830. Interface port (s) 1830 include, for example, a serial port, a parallel port, a game port, and a universal serial bus (USB) . Output device (s) 1836 use some of the same type of ports as input device (s) 1828. Thus, for example, a USB port may be used to provide input to computer 1802, and to output information from computer 1802 to an output device 1836. Output adapter 1834 is provided to illustrate that there are some output devices 1836 like monitors, speakers, and printers, among other output devices 1836, which require special adapters. The output adapters 1834 include, by way of illustration and not limitation, video and sound cards that provide a means of connection between the output device 1836 and the system bus 1808. It should be noted that other devices and/or systems of devices provide both input and output capabilities such as remote computer (s) 1838.
Computer 1802 can operate in a networked environment using logical connections to one or more remote computers, such as remote computer (s) 1838. The remote computer (s) 1838 can be a personal computer, a server, a router, a network PC, a workstation, a microprocessor based appliance, a peer device, a smart phone, a tablet, or other network node, and typically includes many of the elements described relative to computer 1802. For purposes of brevity, only a memory storage device 1840 is illustrated with remote computer (s) 1838. Remote computer (s) 1838 is logically connected to computer 1802 through a network interface 1842 and then connected via communication connection (s) 1844. Network interface 1842 encompasses wire and/or wireless communication networks such as local-area networks (LAN) and wide-area networks (WAN) and cellular networks. LAN technologies include Fiber Distributed Data Interface (FDDI) , Copper Distributed Data Interface (CDDI) , Ethernet, Token Ring and the like. WAN technologies include, but are not limited to, point-to-point links, circuit switching networks like Integrated Services Digital Networks (ISDN) and variations thereon, packet switching networks, and Digital Subscriber Lines (DSL) .
Communication connection (s) 1844 refers to the hardware/software employed to connect the network interface 1842 to the bus 1808. While communication connection 1844 is shown for illustrative clarity inside computer 1802, it can also be external to computer 1802. The hardware/software necessary for connection to the network interface 1842 includes, for exemplary purposes only, internal and external technologies such as, modems including regular telephone grade modems, cable modems and DSL modems, ISDN adapters, and wired and wireless Ethernet cards, hubs, and routers.
Referring now to FIG. 19, there is illustrated a schematic block diagram of a computing environment 1900 in accordance with this disclosure. The system 1900 includes one or more client (s) 1902 (e.g., laptops, smart phones, PDAs, media players, computers, portable electronic devices, tablets, and the like) . The client (s) 1902 can be hardware and/or software (e.g., threads, processes, computing devices) . The system 1900 also includes one or more server (s) 1904. The server (s) 1904 can also be hardware or hardware in combination with software (e.g., threads, processes, computing devices) . The servers 1904 can house threads to perform transformations by employing aspects of this disclosure, for example. One possible communication between a client 1902 and a  server 1904 can be in the form of a data packet transmitted between two or more computer processes wherein the data packet may include video data. The data packet can include a metadata, such as associated contextual information for example. The system 1900 includes a communication framework 1906 (e.g., a global communication network such as the Internet, or mobile network (s) ) that can be employed to facilitate communications between the client (s) 1902 and the server (s) 1904.
Communications can be facilitated via a wired (including optical fiber) and/or wireless technology. The client(s) 1902 include or are operatively connected to one or more client data store(s) 1908 that can be employed to store information local to the client(s) 1902 (e.g., associated contextual information). Similarly, the server(s) 1904 include or are operatively connected to one or more server data store(s) 1910 that can be employed to store information local to the servers 1904.
In one embodiment, a client 1902 can transfer an encoded file, in accordance with the disclosed subject matter, to server 1904. Server 1904 can store the file, decode the file, or transmit the file to another client 1902. It is to be appreciated that a client 1902 can also transfer an uncompressed file to a server 1904, and server 1904 can compress the file in accordance with the disclosed subject matter. Likewise, server 1904 can encode video information and transmit the information via communication framework 1906 to one or more clients 1902.
The illustrated aspects of the disclosure may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
Moreover, it is to be appreciated that various components described in this description can include electrical circuit (s) that can include components and circuitry elements of suitable value in order to implement the various embodiments. Furthermore, it can be appreciated that many of the various components can be implemented on one or more integrated circuit (IC) chips. For example, in one embodiment, a set of components can be implemented in a single IC chip. In other embodiments, one or more of respective components are fabricated or implemented on separate IC chips.
What has been described above includes examples of the embodiments of the present invention. It is, of course, not possible to describe every conceivable combination of components or methods for purposes of describing the claimed subject matter, but it is to be appreciated that many further combinations and permutations of the various embodiments are possible. Accordingly, the claimed subject matter is intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Moreover, the above description of illustrated embodiments of the subject disclosure, including what is described in the Abstract, is not intended to be exhaustive or to limit the disclosed embodiments to the precise forms disclosed. While specific embodiments and examples are described in this disclosure for illustrative purposes, various modifications are possible that are considered within the scope of such embodiments and examples, as those skilled in the relevant art can recognize.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component (e.g., a functional equivalent), even though not structurally equivalent to the disclosed structure, which performs the function in the exemplary aspects of the claimed subject matter illustrated in this disclosure. In this regard, it will also be recognized that the various embodiments include a system as well as a computer-readable storage medium having computer-executable instructions for performing the acts and/or events of the various methods of the claimed subject matter.
The aforementioned systems/circuits/modules have been described with respect to interaction between several components/blocks. It can be appreciated that such systems/circuits and components/blocks can include those components or specified sub-components, some of the specified components or sub-components, and/or additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical). Additionally, it should be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described in this disclosure may also interact with one or more other components not specifically described in this disclosure but known by those of skill in the art.
In addition, while a particular feature of the various embodiments may have been disclosed with respect to only one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes, ” “including, ” “has, ” “contains, ” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.
As used in this application, the terms “component, ” “module, ” “system, ” or the like are generally intended to refer to a computer-related entity, either hardware (e.g., a circuit) , a combination of hardware and software, software, or an entity related to an operational machine with one or more specific functionalities. For example, a component may be, but is not limited to being, a process running on a processor (e.g., digital signal processor) , a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a controller and the controller can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers. Further, a “device” can come in the form of specially designed hardware; generalized hardware made specialized by the execution of software thereon that enables the hardware to perform specific function; software stored on a computer readable storage medium; software transmitted on a computer readable transmission medium; or a combination thereof.
Moreover, the words “example” or “exemplary” are used in this disclosure to mean serving as an example, instance, or illustration. Any aspect or design described in this disclosure as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this  application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or” . That is, unless specified otherwise, or clear from context, “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then “X employs A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. 
Computing devices typically include a variety of media, which can include computer-readable storage media and/or communications media, which two terms are used differently from one another in this description, as follows. Computer-readable storage media can be any available storage media that can be accessed by the computer, is typically of a non-transitory nature, and can include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable storage media can be implemented in connection with any method or technology for storage of information such as computer-readable instructions, program modules, structured data, or unstructured data. Computer-readable storage media can include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible and/or non-transitory media which can be used to store desired information. Computer-readable storage media can be accessed by one or more local or remote computing devices, e.g., via access requests, queries or other data retrieval protocols, for a variety of operations with respect to the information stored by the medium.
On the other hand, communications media typically embody computer-readable instructions, data structures, program modules, or other structured or unstructured data in a data signal that can be transitory, such as a modulated data signal (e.g., a carrier wave or other transport mechanism), and include any information delivery or transport media. The term “modulated data signal” refers to a signal that has one or more of its characteristics set or changed in such a manner as to encode information in one or more signals. By way of example, and not limitation, communication media include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media.
In view of the exemplary systems described above, methods that may be implemented in accordance with the described subject matter will be better appreciated with reference to the flowcharts of the various figures. For simplicity of explanation, the methods are depicted and described as a series of acts. However, acts in accordance with this disclosure can occur in various orders and/or concurrently, and with other acts not presented and described in this disclosure. Furthermore, not all illustrated acts may be required to implement the methods in accordance with certain aspects of this disclosure. In addition, those skilled in the art will understand and appreciate that the methods could alternatively be represented as a series of interrelated states via a state diagram or events. Additionally, it should be appreciated that the methods disclosed in this disclosure are capable of being stored on an article of manufacture to facilitate transporting and transferring such methods to computing devices. The term article of manufacture, as used in this disclosure, is intended to encompass a computer program accessible from any computer-readable device or storage media.

Claims (36)

  1. A system, comprising:
    a memory that stores executable components; and
    a processor, coupled to the memory, that executes the executable components to perform operations of the system, the executable components comprising:
    a coding component that encodes a set of data based on an error correction code;
    a generation component that generates a first waveform comprising a first frequency and a second waveform comprising a second frequency, wherein the first waveform and the second waveform represent a first subset of data of the set of data and a second subset of data of the set of data, respectively;
    a storage component that stores the first waveform and the second waveform within a hidden watermark comprising a set of modules that represent the set of data; and
    an embedding component that embeds the hidden watermark into an image subject to an intensity modification limit based on a set of image data associated with the image, such that the hidden watermark is invisible to a user.
  2. The system of claim 1, wherein the first waveform and the second waveform comprise frequency domain information comprising a set of frequency domain signal responses capable of being transformed into the set of data.
  3. The system of claim 1, wherein the executable components further comprise a verification component that verifies that the set of image data is capable of demodulation.
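By way of illustration only (this sketch is not part of the claims, and all function names, block sizes, and waveform choices below are assumptions), the waveform generation and intensity-limited embedding of claims 1 to 3 might be realized by modulating image blocks with two sinusoidal patterns of distinct frequencies and orientations, clipped to an intensity modification limit:

```python
import numpy as np

def embed_waveforms(image, bits, amplitude_limit=2.0, block=8):
    # Hypothetical sketch: one data bit per block, carried by one of two
    # sinusoidal waveform patterns with distinct frequencies and orientations.
    y, x = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    wave0 = np.sin(2 * np.pi * 2 * x / block)   # horizontal pattern, frequency 2
    wave1 = np.sin(2 * np.pi * 3 * y / block)   # vertical pattern, frequency 3
    out = image.astype(np.float64).copy()
    blocks_per_row = image.shape[1] // block
    for i, bit in enumerate(bits):
        r, c = divmod(i, blocks_per_row)
        patch = (wave1 if bit else wave0) * amplitude_limit
        # clip the modification so the watermark stays below visibility
        out[r*block:(r+1)*block, c*block:(c+1)*block] += np.clip(
            patch, -amplitude_limit, amplitude_limit)
    return np.clip(out, 0.0, 255.0)
```

Because the per-pixel modification is clipped to a small amplitude, the embedded pattern remains imperceptible against typical display content.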
  4. A system, comprising:
    a memory that stores executable components; and
    a processor, coupled to the memory, that executes or facilitates execution of the executable components to perform operations, the executable components comprising:
    an image enhancement component configured to at least one of eliminate a texture distortion of an image comprising a set of pixels, equalize a contrast value corresponding to the image, adjust a signal to noise ratio associated with the image, adjust a focus associated with the image, or remove a blurred image segment of the image determined to be blurry according to a defined blur criterion;
    a conversion component configured to create a binarized image from the image by assigning a set of binary values to the set of pixels based on a successive comparison of defined windowed subsets of pixels of the set of pixels; and
    a determination component configured to determine a position of a watermark within the image based on a solution to an optimization problem.
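A minimal sketch of the window-based binarization recited in claim 4, assuming a mean-of-local-window comparison; the window size and the integral-image implementation are illustrative choices, not drawn from the claims:

```python
import numpy as np

def binarize_windowed(image, window=15):
    # Compare each pixel against the mean of its surrounding window,
    # approximating a successive comparison of windowed subsets of pixels.
    pad = window // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    # summed-area (integral-image) table for fast local means
    integral = np.pad(padded, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = image.shape
    s = integral
    local_sum = (s[window:window+h, window:window+w] - s[:h, window:window+w]
                 - s[window:window+h, :w] + s[:h, :w])
    local_mean = local_sum / (window * window)
    return (image > local_mean).astype(np.uint8)
```

The integral-image trick keeps the comparison O(1) per pixel regardless of window size, which matters when decoding camera-captured frames in real time.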
  5. The system of claim 4, wherein the executable components further comprise a matching component configured to match a pixel of the set of pixels to a transform matrix, wherein the solution to the optimization problem is based on a match between the pixel and the transform matrix.
  6. The system of claim 5, wherein the determination component is further configured to determine the position of the watermark within the image based on a match of the pixel to the transform matrix.
  7. The system of claim 4, wherein the executable components further comprise a modulation component configured to modulate a first subset of values of the set of binary values with a first waveform and a second subset of values of the set of binary values with a second waveform, and wherein the first waveform and the second waveform are stored within a hidden watermark comprising a set of modules that represent the set of binary values.
  8. The system of claim 7, wherein the executable components further comprise a demodulation component configured to identify a set of frequency domain information associated with the first waveform and the second waveform using a matching filter.
  9. The system of claim 8, wherein the executable components further comprise a matching component configured to employ the matching filter to match the first waveform and the second waveform to a set of template waveforms.
  10. The system of claim 9, wherein the executable components further comprise an association component configured to associate a first soft value with the first waveform and a second soft value with the second waveform based on a match of the first waveform and the second waveform to the set of template waveforms.
  11. The system of claim 8, wherein the executable components further comprise a calculation component configured to determine a set of statistics based on the set of frequency domain information, and wherein the set of statistics comprises a variance, a mean, a log ratio, a Gaussian approximation, and a standard deviation of the set of frequency domain information.
  12. The system of claim 11, wherein the executable components further comprise a decoding component configured to decode the hidden watermark based on an error correcting code, comprising binary and non-binary low density parity check codes, that uses a parity check matrix description or a set of message passing rules, wherein the parity check matrix is defined to facilitate cross checking a decoding of the hidden watermark for decoding errors, and wherein the set of message passing rules define code constraints to be observed by the decoding component to inhibit decoding errors.
  13. The system of claim 7, wherein the executable components further comprise an encoding component configured to encode the first waveform and the second waveform with a first code word and a second code word respectively, and wherein the first waveform and the second waveform are encoded based on error correcting code rules defined to facilitate detection of a code error during transmission of the first code word or the second code word via a channel.
  14. The system of claim 13, wherein the executable components further comprise a weighting component configured to associate a weight with the first code word and the second code word, respectively, based on first channel state information and second channel state information corresponding to the first code word and the second code word, respectively.
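As a non-normative illustration of the matched-filter demodulation and soft-value association of claims 8 to 10 (the template waveforms and the logistic mapping below are assumptions, not the claimed implementation), a received block can be correlated against each template and the score gap converted to a soft value in (0, 1):

```python
import numpy as np

def soft_demodulate(block, templates):
    # Matched filtering: inner product of the received block with each
    # candidate template waveform; a larger score indicates a better match.
    scores = [float(np.sum(block * t)) for t in templates]
    # Illustrative soft value: logistic map of the score gap,
    # interpretable as confidence that template 1 was embedded.
    return 1.0 / (1.0 + np.exp(-(scores[1] - scores[0])))
```

Values near 1 indicate a confident match to the second template, values near 0.5 indicate an ambiguous module, and the soft output can feed directly into a soft-input error-correction decoder.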
  15. A system, comprising:
    a memory that stores executable components; and
    a processor, coupled to the memory, that executes the executable components to perform operations of the system, the executable components comprising:
    a receiver component that receives a data file and a video, wherein the data file comprises a set of input data;
    a converter component that converts the data file and the video to a data bitstream and a set of image frames, respectively;
    an encoding component that generates a set of data packets based on an encoding of the data bitstream, wherein the set of data packets represents the set of input data, and wherein a subset of data packets of the set of data packets represents a subset of input data of the set of input data;
    an embedding component that embeds as an imperceptible transmission object, a first subset of input data into a first image frame of the set of image frames and a second subset of input data into a second image frame of the set of image frames, wherein an embedding of the first subset of input data and the second subset of input data is based on a waveform pattern modulation; and
    an integration component that generates an encoded video based on an integration of the first image frame and the second image frame.
  16. The system of claim 15, wherein the encoding component further encodes the set of data packets based on any of an intra-frame error correcting code encoding, a spatial coding, or a temporal coding, wherein the spatial coding comprises a coding over pixels in a two-dimensional image, and wherein the temporal coding comprises a coding over frames of images.
  17. The system of claim 16, wherein the executable components further comprise a distribution component that distributes a first intra-frame error correcting code encoded data packet and a second intra-frame error correcting code encoded data packet to the first image frame and the second image frame, respectively.
  18. The system of claim 15, wherein the embedding component further embeds a first set of training symbols into the first image frame and a second set of training symbols into the second image frame, wherein the first set of training symbols and the second set of training symbols are capable of being decoded by a device.
  19. The system of claim 18, wherein the imperceptible transmission object is any of a watermark, a printed code label, a light-emitting diode array, or an electronic display, and wherein the device is any of a sensor, a mobile device, a camera, a tablet, or a closed circuit television.
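Purely as an illustration of the packet distribution of claims 15 to 17 (round-robin assignment is one possible policy, chosen here for simplicity rather than taken from the claims), encoded packets can be spread across image frames so that each frame carries a decodable subset:

```python
def distribute_packets(packets, num_frames):
    # Round-robin: packet i goes to frame i mod num_frames, so losing a
    # single frame drops an interleaved, recoverable subset of packets.
    frames = [[] for _ in range(num_frames)]
    for i, packet in enumerate(packets):
        frames[i % num_frames].append(packet)
    return frames
```

Interleaving across frames complements inter-frame erasure coding: a dropped or blurred frame removes a spread-out subset rather than a contiguous run of the bitstream.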
  20. A system, comprising:
    a memory that stores executable components; and
    a processor, coupled to the memory, that executes the executable components to perform operations of the system, the executable components comprising:
    a receiving component that receives a set of image frames of a video from a device;
    an implementation component that corrects a perspective distortion of the set of image frames, corrects a noise distortion of the set of image frames, outputs a set of corrected image frames, and retrieves module information from the set of image frames based on a spatial domain synchronization and a demodulation of the set of image frames;
    a decoding component that retrieves a set of embedded data packets from the set of corrected image frames based on an intra-frame error correcting code decoding of the set of corrected image frames; and
    a reconstruction component that reconstructs a data file from the set of embedded data packets based on a set of inter-frame maximum-distance-separable code or network code decoding rules.
  21. The system of claim 20, wherein the decoding component further decodes the set of corrected image frames based on a time-domain interfered frame decoding.
  22. The system of claim 20, wherein the module information is represented in a log likelihood ratio format.
  23. The system of claim 20, wherein the decoding component further decodes a set of multiple captured snapshots of an image frame of the set of image frames based on a set of Tanner graph-based decoding diversity rules.
  24. The system of claim 20, wherein the decoding component further decodes the set of corrected image frames based on a collaborative network code decoding.
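The log-likelihood-ratio representation of module information mentioned in claim 22 can be sketched as follows, assuming a Gaussian channel model; the means and variance are illustrative parameters, not values taken from the disclosure:

```python
def llr(sample, mean0, mean1, sigma):
    # LLR = log p(sample | bit 0) - log p(sample | bit 1) under Gaussian
    # noise; positive values favor bit 0, negative values favor bit 1.
    return ((sample - mean1) ** 2 - (sample - mean0) ** 2) / (2.0 * sigma ** 2)
```

Representing modules as LLRs lets multiple captured snapshots of the same frame be combined by simply summing their LLRs before the hard decision.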
  25. A method, comprising:
    encoding, by a system comprising a processor, a message comprising a set of data with an error correction code that facilitates detection and reconstruction of the set of data based on a set of soft value inputs comprising a range of values equal to or greater than zero and less than or equal to one, wherein the range of values facilitates an accurate encoding of the set of data and wherein the soft value inputs are associated with the set of data;
    determining, by the system, a first upper limit of a first modifiable image intensity value of a first subset of data of the set of data and a second upper limit of a second modifiable image intensity value of a second subset of data of the set of data, wherein the first subset of data represents a first image block corresponding to an image and the second subset of data represents a second image block corresponding to the image, wherein the first modifiable image intensity value and the second modifiable image intensity value represent image characteristics comprising an image contrast, an image color, or an image brightness, and wherein the image characteristics are adjustable;
    modulating, by the system, the first subset of data with a first waveform comprising a first frequency and a first set of waveform characteristics and the second subset of data with a second waveform comprising a second frequency and a second set of waveform characteristics, wherein the first waveform and the second waveform are invisible to a user's eye and maintain the first set of waveform characteristics and the second set of waveform characteristics during an image distortion event to the image, a modification of the first modifiable image intensity value, or a modification of the second modifiable image intensity value;
    storing, by the system, the first waveform and the second waveform within a hidden watermark; and
    embedding, by the system, the hidden watermark within the image.
  26. The method of claim 25, wherein the first waveform is characterized by a set of first characteristics comprising a first position of the first waveform at an angle orthogonal to the second waveform as measured via an amplitude spectrum between the first waveform and the second waveform or a second position of the first waveform at a distance from the second waveform, and wherein the distance inhibits a misidentification between the first waveform and the second waveform.
  27. The method of claim 26, further comprising identifying, by the system, the first waveform in the image and the second waveform in the image based on a first magnitude of the first waveform and a second magnitude of the second waveform.
  28. The method of claim 27, further comprising increasing, by the system, the first magnitude corresponding to the first waveform or the second magnitude corresponding to the second waveform based on the identifying, wherein the increasing promotes distinguishing between the first waveform and the second waveform.
  29. The method of claim 25, wherein the error correction code is a low density parity check code.
  30. The method of claim 25, further comprising:
    capturing, by the system, the image and a corresponding image signal associated with the hidden watermark embedded within the image by a lens of the system;
    decoding, by the system, the corresponding image signal associated with the hidden watermark, wherein the decoding results in a determination of the first subset of data and the second subset of data; and
    accessing, by the system, the first subset of data and the second subset of data corresponding to the first waveform and the second waveform respectively.
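As a toy illustration of the soft-input error-correction decoding in the spirit of claims 25 and 29 to 30 (a repetition code stands in here for the claimed low-density parity check code; that substitution is this sketch's assumption), soft values in [0, 1] can be averaged per code word before a hard decision:

```python
def decode_repetition_soft(soft_values, reps=3):
    # Each group of `reps` soft values carries one bit; averaging the
    # confidences before thresholding uses the full soft information,
    # unlike majority voting on pre-thresholded hard bits.
    bits = []
    for i in range(0, len(soft_values), reps):
        chunk = soft_values[i:i + reps]
        bits.append(1 if sum(chunk) / len(chunk) > 0.5 else 0)
    return bits
```

The same principle, soft inputs in [0, 1] feeding the decoder, is what makes belief-propagation decoding of LDPC codes outperform hard-decision decoding on camera-captured watermarks.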
  31. The system of claim 7, wherein the executable components further comprise a classification component that classifies the set of modules resulting in a set of classified modules based on a message passing technique that iteratively passes information between modules of the set of modules.
  32. The system of claim 31, wherein a subset of classified modules of the set of classified modules is represented by a log-likelihood ratio based on the message passing technique.
  33. The system of claim 32, wherein the executable components further comprise a synchronization component that performs an iterative synchronization and matching of a first subset of classified modules of the set of classified modules and a second subset of classified modules of the set of classified modules, and wherein the iterative synchronization and matching converges on a target module of the set of classified modules.
  34. The system of claim 8, wherein the demodulation component further demodulates sample points of the image based on a synchronization quality between the first waveform and the second waveform, a difference in frequency between the first waveform and the second waveform, or an error correcting code.
  35. The system of claim 1, wherein a difference in frequency between the first frequency and the second frequency is greater than or equal to a threshold frequency value.
  36. The system of claim 1, wherein the first waveform is oriented in a first direction within the hidden watermark and the second waveform is oriented in a second direction within the hidden watermark, and wherein the second direction is different from the first direction.
PCT/CN2015/000025 2014-01-15 2015-01-15 Unobtrusive data embedding in information displays and extracting unobtrusive data from camera captured images or videos WO2015106635A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461964794P 2014-01-15 2014-01-15
US61/964,794 2014-01-15

Publications (1)

Publication Number Publication Date
WO2015106635A1 true WO2015106635A1 (en) 2015-07-23

Family

ID=53542389

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/000025 WO2015106635A1 (en) 2014-01-15 2015-01-15 Unobtrusive data embedding in information displays and extracting unobtrusive data from camera captured images or videos

Country Status (1)

Country Link
WO (1) WO2015106635A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1462548A (en) * 2001-04-24 2003-12-17 株式会社东芝 Digital watermark burying method and device, and digital watermark detecting method and device
US6778678B1 (en) * 1998-10-02 2004-08-17 Lucent Technologies, Inc. High-capacity digital image watermarking based on waveform modulation of image components
CN101238731A (en) * 2005-08-08 2008-08-06 皇家飞利浦电子股份有限公司 Method and system for video copyright protection

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107770404A (en) * 2016-08-19 2018-03-06 中国人民解放军信息工程大学 Implicit imaging communication method and apparatus based on alpha channel parameters
CN107770404B (en) * 2016-08-19 2019-12-03 中国人民解放军信息工程大学 Implicit imaging communication method and apparatus based on alpha channel parameters
US11044017B2 (en) 2017-03-14 2021-06-22 Kookmin University Industry Academy Cooperation Foundation MIMO-OFDM of optical wireless system using screen
WO2018169170A1 (en) * 2017-03-14 2018-09-20 국민대학교 산학협력단 Mimo-ofdm of optical wireless system using screen
CN108933602A (en) * 2017-05-26 2018-12-04 爱思开海力士有限公司 Deep learning for ldpc decoding
CN108933602B (en) * 2017-05-26 2021-09-10 爱思开海力士有限公司 Deep learning for low density parity check decoding
WO2019028386A1 (en) * 2017-08-03 2019-02-07 Shouty, LLC Method and system for aggregating content streams based on sensor data
US10574715B2 (en) 2017-08-03 2020-02-25 Streaming Global, Inc. Method and system for aggregating content streams based on sensor data
US11196788B2 (en) 2017-08-03 2021-12-07 Streaming Global, Inc. Method and system for aggregating content streams based on sensor data
US11356493B2 (en) 2017-10-12 2022-06-07 Streaming Global, Inc. Systems and methods for cloud storage direct streaming
CN112565779A (en) * 2020-12-12 2021-03-26 四川大学 Video steganography method based on distortion drift
CN112565779B (en) * 2020-12-12 2021-10-29 四川大学 Video steganography method based on distortion drift
CN112767236A (en) * 2020-12-30 2021-05-07 稿定(厦门)科技有限公司 Image distortion effect rendering method and device
CN113709455A (en) * 2021-09-27 2021-11-26 北京交通大学 Multilevel image compression method using Transformer
CN113709455B (en) * 2021-09-27 2023-10-24 北京交通大学 Multi-level image compression method using transducer

Similar Documents

Publication Publication Date Title
WO2015106635A1 (en) Unobtrusive data embedding in information displays and extracting unobtrusive data from camera captured images or videos
Fang et al. Deep template-based watermarking
US11410261B2 (en) Differential modulation for robust signaling and synchronization
Jiang et al. Wireless semantic communications for video conferencing
Zhong et al. An automated and robust image watermarking scheme based on deep neural networks
Mandal et al. Digital image steganography: A literature survey
JP5539348B2 (en) Structured multi-pattern watermark generation apparatus and method, watermark embedding apparatus and method using the same, and watermark detection apparatus and method
US20110128353A1 (en) Robust image alignment for distributed multi-view imaging systems
CN104992207B (en) A kind of mobile phone two-dimensional bar code decoding method
Merhav et al. Optimal watermark embedding and detection strategies under limited detection resources
JP2022549031A (en) Transformation of data samples into normal data
Yin et al. Robust adaptive steganography based on dither modulation and modification with re-compression
Chen et al. Robust and unobtrusive display-to-camera communications via blue channel embedding
Su et al. Fast and secure steganography based on J-UNIWARD
Fang et al. An optimization model for aesthetic two-dimensional barcodes
US9922263B2 (en) System and method for detection and segmentation of touching characters for OCR
Duan et al. Robust image steganography against lossy JPEG compression based on embedding domain selection and adaptive error correction
JP7212543B2 (en) Decoding device, hologram reproducing device, and decoding method
Hu et al. A Transfer Attack to Image Watermarks
Abdullatif et al. Robust image watermarking scheme by discrete wavelet transform
Kaur et al. A secure and high payload digital audio watermarking using features from iris image
US20240153146A1 (en) Encoding data matrices into color channels of images using neural networks and deep learning
Kabir et al. Intensity gradient based edge detection for pixelated communication systems
Lv et al. Using QRCode to Enhance Extraction Efficiency of Video Watermark Algorithm
Li et al. QR Code Data Hiding Algorithm Based on Cell Splitting Fusion with Contrast Stretching

Legal Events

121 (Ep): the epo has been informed by wipo that ep was designated in this application; Ref document number: 15736945; Country of ref document: EP; Kind code of ref document: A1

NENP: Non-entry into the national phase; Ref country code: DE

122 (Ep): pct application non-entry in european phase; Ref document number: 15736945; Country of ref document: EP; Kind code of ref document: A1