WO2014131448A1 - Method and apparatus for embedding data in video - Google Patents

Method and apparatus for embedding data in video

Info

Publication number
WO2014131448A1
Authority
WO
Grant status
Application
Prior art keywords
video, colour, data, area, modified
Application number
PCT/EP2013/053978
Other languages
French (fr)
Inventor
Michael Huber
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Priority date
2013-02-27
Filing date
2013-02-27

Classifications

    • G06T 1/0085 - Time domain based watermarking, e.g. watermarks spread over several images
    • G06T 1/0028 - Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
    • H04N 19/467 - Embedding additional information in the video signal during the compression process, characterised by the embedded information being invisible, e.g. watermarking
    • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data, processing content descriptors
    • H04N 21/4223 - Cameras
    • H04N 21/8358 - Generation of protective data, e.g. certificates, involving watermark
    • G06T 1/0021 - Image watermarking
    • G06T 2201/0051 - Embedding of the watermark in the spatial domain

Abstract

Accordingly, there is provided a method of embedding data in a video. The method comprises receiving a video and receiving data to be embedded in the video. The method further comprises encoding the data in a colour modification filter, wherein the colour modification filter is arranged to modify at least one colour component of the video separately from the other colour components. The method further comprises selecting at least one area of the video, and applying the colour modification filter to the at least one area to create modified video.

Description

METHOD AND APPARATUS FOR EMBEDDING DATA IN VIDEO

Technical field

The present application relates to: a method of embedding data in a video; an apparatus for embedding data in a video; a method of analysing video; a device for analysing video; a method of processing a video with embedded data; and a computer-readable medium.

Background

It is becoming increasingly common for users to watch television while using a smartphone or tablet, sometimes referred to as a "second screen". The popularity of this practice is driven by at least two trends. The first trend is the popularity of online social networks and the use of these to share comments with others while viewing a program. The second trend is the tendency to search for online information related to the television program while watching it. The online information may be current sports results, details of a product that has just been advertised, or the name of an actress in the current program or film.

Existing methods for delivering additional information to a user's smartphone or tablet from the television or set top box require some form of local communication such as Bluetooth™ or WiFi™. This requires compatible communication apparatus to be present on both the television and smartphone, and so this may not be possible between all devices. This may also be difficult for some users to set up. Further, this is less feasible when a user is watching television away from home, say in a hotel or a public place such as a sports bar, where the user may not have appropriate access to local communication networks.

US 2013/0028465, assigned to Fujitsu Ltd, describes a digital watermarking system for watermarking video. The system modifies the brightness of certain areas of the video image to convey data to a reading apparatus, which may be a smartphone. The document describes an anti-flicker mechanism to reduce the user perception of the brightness changes.

The Fujitsu system presents a solution to the above problems by using the video image itself to deliver embedded watermark data to a smartphone. However, the data rate is limited and special care is required to reduce the perceptibility of the introduced flicker.

A problem with such systems is that a trade-off must be made between the amount of data that can be delivered and how significantly the video is altered. The addition of data to the video image can be thought of as the addition of noise to the picture. Content creators, network operators, and users all have a low tolerance for any such noise perceived in the video image. As such, the bandwidth of the additional data signal is limited by the constraint that the introduced noise must be imperceptible to the viewer.

Summary

A system for high bandwidth delivery of data from a video image to a smart device having a camera is provided herein. The system operates by modifying each colour component of a video signal independently, and using advanced signal processing in the smart device to decode data embedded in the video. This system allows for high bandwidth delivery of data to a smart device having a camera, and does so with minimal perceptible impact on the video.

High bandwidth delivery of data allows for faster cycling of a set amount of data, giving shorter lags on data capture and presentation in the smart device. High bandwidth delivery in this context also allows different types of data to be delivered, and can also reduce reliance on a smart device's internet connection; prior art systems have limited bandwidth, suitable primarily for sending URLs. Alternatively, the method described herein can be used to send a low bandwidth signal with a minimal alteration of the video image.

Accordingly, there is provided a method of embedding data in a video. The method comprises receiving a video and receiving data to be embedded in the video. The method further comprises encoding the data in a colour modification filter, wherein the colour modification filter is arranged to modify at least one colour component of the video separately from the other colour components. The method further comprises selecting at least one area of the video, and applying the colour modification filter to the at least one area to create modified video. Because the embedded data is carried in the video image itself, no extension to standardized transport streams, receivers, decoders, or displays is needed to enable the delivery of the embedded data.

By encoding data in the video image using small changes to at least one colour signal of a video, the small changes having spatial and temporal frequency dissimilar to that of the video, the data can be encoded in a way that is undetectable by the naked eye, but which a device such as a smart phone can detect, decode, and present to the user. The method may further comprise encoding the modified video.
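
By way of illustration only, the following Python/NumPy sketch shows one way such a colour modification filter could shift a single colour component inside a selected area; the function name, the two-code-value offset, and the sign-per-bit mapping are assumptions rather than details taken from the description:

```python
import numpy as np

def apply_colour_modification(frame, bit, mask, channel=1, offset=2):
    """Shift one colour component (here green, channel 1 of an RGB frame)
    by a small offset inside the masked area, leaving the other components
    untouched; the sign of the shift carries the data bit."""
    out = frame.astype(np.int16)                  # avoid uint8 wrap-around
    out[..., channel][mask] += offset if bit else -offset
    return np.clip(out, 0, 255).astype(np.uint8)
```

A shift of one or two 8-bit code values spread over a large area is invisible to the naked eye, yet remains measurable once averaged over many pixels.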

The colour modification filter may modify at least two colour components, each modified separately from the other colour components. Separate but related modifications may be made to each colour component, the related modifications being used to jointly encode a data value (such as a particular binary word).

The data to be encoded in the video may comprise two types of data, and respective colour modification filters may be encoded for each type of data, each colour modification filter arranged to apply to a different colour component. The data to be encoded in the video may be split into a plurality of parts, and respective colour modification filters may be encoded for each part of the data, each colour modification filter arranged to apply to a different colour component.

The data to be encoded in the video may be split into a plurality of parts, and respective colour modification filters may be encoded for each part of the data, each colour modification filter arranged to apply to a different area of the video.

No colour modification filter may be applied to at least one further area of the video.

The colour modified areas and the non-modified areas of the video may be interspersed. The colour modified areas from each of a plurality of colour modification filters may be interspersed. The interspersion may be a chequered pattern.

The set of colour components may comprise red, green and blue. The set of colour components may comprise a luminance and two chrominance components. The two chrominance components may comprise colour difference components such as blue-luminance and red-luminance differences. At least one area of the video to which the colour modification filter is applied may comprise a first area, wherein the first area is adjacent to a second area of video. The colour components modified by the colour modification filter in the first area may be unmodified in the second area.

There is further provided an apparatus for embedding data in a video, the apparatus comprising at least one input, a data encoder, and a video processor. The at least one input is arranged to receive a video and to receive data to be embedded in the video. The data encoder is arranged to encode the data in a colour modification filter, wherein the colour modification filter is arranged to modify at least one colour component of the video separately from the other colour components. The video processor is arranged to select at least one area of the video, and to apply the colour modification filter to the at least one area.

The colour modification filter may modify at least two colour components, each modified separately from the other colour components.

No colour modification filter may be applied to at least one further area of the video. The colour modified areas and the non-modified areas of the video may be interspersed. The colour modified areas from each of a plurality of colour modification filters may be interspersed. The interspersion may be a chequered pattern. The set of colour components may comprise red, green and blue.

The apparatus may further comprise a video encoder arranged to encode the modified video.

There is further provided a method of analysing video, the method comprising: capturing a video image with a camera; and decomposing the captured image into at least one spatial scale. The method further comprises extracting temporal variations at a predetermined frequency range; and decoding embedded data from the temporal variations. To differentiate the embedded data from any noise in the video signal, spatial pooling is used. By looking at the average colour change over numerous pixels in a particular area of the image, a very small shift applied to all the pixels in that area can be distinguished from any noise, and also from the video image.
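
A minimal sketch of that pooling step follows (hypothetical Python/NumPy helper; it assumes the detector knows the masks of a modified area and of an adjacent unmodified reference area):

```python
import numpy as np

def pooled_shift(frame, modified_mask, reference_mask, channel=1):
    """Spatial pooling: average one colour component over all pixels of the
    modified area and of an adjacent unmodified area; the difference of the
    means exposes a shift far smaller than the per-pixel noise."""
    modified = frame[..., channel][modified_mask].mean()
    reference = frame[..., channel][reference_mask].mean()
    return modified - reference     # sign and magnitude carry the data
```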

The predetermined frequency may be the frame rate of the video being analysed. Other than at a scene change, each video frame is substantially similar to the next; this principle is the basis for modern video encoding standards. In most video there are therefore few spatially significant changes between frames, and it is in this empty spatio-temporal territory of the video signal that the additional data is embedded, and from which it is detected.

The method may further comprise detecting a relative size of the video image compared to the total captured image area. If the relative size of the video image is determined to be less than a threshold value, an error message is created.

The method may further comprise decomposing the captured image into at least one spatial scale, the spatial scale chosen as a fraction of the relative size of the video image compared to the captured image area.

The size of the video image is determined using an edge detection algorithm. If the video image is determined to be larger than the captured image area, at least one predetermined spatial scale is used. The at least one spatial scale is chosen to correspond to an expected spatial scale of the screen areas to which a colour modification filter has been applied. The spatial scale may be chosen to be the shortest of the expected horizontal and expected vertical scale of the colour modification filter.
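
As a sketch of that scale selection (the four-by-four grid is an assumption matching Figures 4 and 5, and the one-eighth fallback echoes the best-guess value used later in the description):

```python
def choose_spatial_scale(video_w, video_h, grid_cols=4, grid_rows=4):
    """Choose the pooling scale as a fraction of the detected video image:
    the shorter of the expected horizontal and vertical extent of one
    colour-modified sub-area."""
    return min(video_w // grid_cols, video_h // grid_rows)

def fallback_spatial_scale(captured_h, fraction=8):
    """Predetermined scale for when the video image exceeds the captured
    area, e.g. one eighth of the captured image height."""
    return captured_h // fraction
```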

The method may further comprise outputting the decoded embedded data to a buffer.

There is further provided a device for analysing video, the device comprising a camera and a processor. The camera is arranged to capture a video image. The processor is arranged to: decompose the captured image into at least one spatial scale; extract temporal variations at a predetermined frequency range; and decode embedded data from the temporal variations.

To differentiate the embedded data from any noise in the video signal, spatial pooling is used. By looking at the average colour change over numerous pixels in a particular area of the image, a very small shift applied to all the pixels in that area can be distinguished from any noise, and also from the video image. The predetermined frequency may be the frame rate of the video being analysed. Other than at a scene change, each video frame is substantially similar to the next; this principle is the basis for modern video encoding standards. In most video there are therefore few spatially significant changes between frames, and it is in this empty spatio-temporal territory of the video signal that the additional data is embedded, and from which it is detected.

The processor may be further arranged to detect a relative size of the video image compared to the total captured image area. If the relative size of the video image is determined to be less than a threshold value, an error message is created.

The processor may be further arranged to decompose the captured image into at least one spatial scale, the spatial scale chosen as a fraction of the relative size of the video image compared to the captured image area. The size of the video image is determined using an edge detection algorithm. If the video image is determined to be larger than the captured image area, at least one predetermined spatial scale is used. The at least one spatial scale is chosen to correspond to an expected spatial scale of the screen areas to which a colour modification filter has been applied.

The device may further comprise a display for outputting the decoded embedded data.

There is further provided a method of processing a video with embedded data. The method comprises receiving a video with first embedded data, and detecting the first embedded data within the video, and identifying which colour signal has been modified to carry the first embedded data. The method further comprises receiving additional data to be embedded in the video, and encoding the additional data in a colour modification filter, wherein the colour modification filter is arranged to modify a different colour component than the one modified to embed the first embedded data. The method further comprises selecting at least one area of the video, and applying the colour modification filter to the at least one area.

The at least one area modified to carry the additional data may be the same as or different to the at least one area modified to carry the first embedded data.

There is further provided a computer-readable medium, carrying instructions, which, when executed by computer logic, causes said computer logic to carry out any of the methods defined herein.

There is further provided a computer-readable storage medium, storing instructions, which, when executed by computer logic, causes said computer logic to carry out any of the methods defined herein. The computer program product may be in the form of a non-volatile memory or volatile memory, e.g. an EEPROM (Electrically Erasable Programmable Read-only Memory), a flash memory, a disk drive or a RAM (Random-access memory).

Brief description of the drawings

A method and apparatus for embedding data in video will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 illustrates an overview of the system described herein;

Figure 2 illustrates a method for embedding data in a video;

Figure 3 illustrates one example of how a video image may be modified to embed data therein;

Figure 4 illustrates an alternative example of an arrangement for embedding data into a video;

Figure 5 illustrates an alternative video modification pattern;

Figure 6 illustrates a further alternative arrangement for embedding data in a video;

Figure 7 illustrates a method by which a device decodes embedded data from a displayed modified video image;

Figure 8 illustrates an alternative method for deriving embedded video data from a video;

Figure 9 illustrates an apparatus for performing the above decoding methods;

Figure 10 illustrates an example of a captured video image comprising a video display having a screen area; and

Figure 11 illustrates an additional method whereby a modified video encoded with a first set of embedded data is processed and a second set of data is additionally embedded therein.

Detailed description

Figure 1 illustrates an overview of the system described herein. The video 110 and data 115 to be transmitted with the video 110 are input to the system.

At 117 a colour modification filter is created using data 115. The colour modification filter is applied to video 110 at 119 to create modified video 120.

Modified video 120 incorporates video 110 with data 115 embedded therein.

The modified video 120 should appear substantially the same as the original video 110 as observed by the naked eye.

The modified video 120 is encoded 130 and transmitted 140 to a decoder 150, which outputs the modified video on a display 190. It should be noted that the encoding 130, transmission 140, and decoding 150 are optional processes.

The modified video 120 as displayed by display 190 is captured by a camera 210. Device 200 comprises a camera 210 and a processor 220. The modified video image captured by camera 210 is sent to the processor 220, which analyses the captured image of the modified video 120 and deciphers the data 115 embedded therein. Processor 220 then outputs data 115.

In this way, additional data 115 is embedded in a video, and the additional data 115 can be decoded from the video by an appropriate device 200 comprising a camera 210 and a processor 220.

Figure 2 illustrates a method for embedding data in a video. Video is received at 251 and data is received at 252. The received data is used to create a colour modification filter 260, the colour modification filter being arranged to modify at least one colour component of the video separately from the other colour components and to encode the received data on the at least one modified colour component. At 270 the colour modification filter is applied to a selected area of the video; in this way the video received at 251 is modified so as to embed the data received at 252 into a modified version of the original video.
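
The patent does not prescribe how payload bits map onto frames; purely as a hedged sketch, the following Python/NumPy function cycles a payload over successive frames, holding each bit for two frames with alternating polarity so that the applied shift changes once per frame:

```python
import numpy as np

def embed_stream(frames, bits, mask, channel=1, offset=2):
    """Sketch of the Figure 2 flow: encode the payload bits in a colour
    modification filter and apply it to the selected area (mask) of each
    frame.  Each bit is held for two frames with alternating polarity
    (an assumed modulation scheme, not one fixed by the patent)."""
    out = []
    for i, frame in enumerate(frames):
        bit = bits[(i // 2) % len(bits)]            # cycle the payload
        polarity = 1 if ((i % 2 == 0) == bool(bit)) else -1
        f = frame.astype(np.int16)                  # avoid uint8 wrap-around
        f[..., channel][mask] += polarity * offset  # touch one component only
        out.append(np.clip(f, 0, 255).astype(np.uint8))
    return out
```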

Figure 3 illustrates one example of how a video image may be modified to embed data therein. In this example the colour modification filter acts upon only the green colour component of the video signal. The colour modification filter is applied to only half of the video area, in the selected areas labelled 301 and 304 in Figure 3. The remaining areas of the video, 302 and 303, are unmodified. The boundaries between the video areas horizontally and vertically bisect the video area, and video areas 301 and 304 are diagonally opposite. Similarly, video areas 302 and 303 are diagonally opposite. As such, each modified area has a boundary with at least one unmodified area.

By applying the colour modification filter to an area of the video that spans many pixels, a relatively minor colour shift to the green component of the video signal can be identified. Furthermore, the fact that certain areas of the video are unmodified also improves the detectability of the colour shift to the green component in the modified areas. These factors allow a very minor change to the video signal to be made that is nevertheless still detectable by a camera 210 of device 200. In this way, data may be embedded in the video without the viewer being able to recognise any difference between the original and the modified video.

In an alternative arrangement to that of Figure 3, the video area is divided into rows, to give a plurality of horizontal strips of the image. Each sub-area of the video is thus a strip. The first strip has a colour signal modified, the second strip is unmodified, the third strip also has a colour signal modified, and the fourth strip is unmodified. The expected spatial frequency of the pattern is the height of each strip. In a further alternative, the horizontal strip pattern is rotated through 90 degrees to create a vertical stripe pattern.

Figure 4 illustrates an alternative example of an arrangement for embedding data into a video. Here the video area is separated into four rows of equal height and four columns of equal width to create sixteen equal areas numbered 401 to 416. In this embodiment half of the sixteen video areas have data embedded on the red component of the video signal and the other half of the video areas are unchanged. Alternating areas of the video image are modified by the filter to generate a chequered pattern of modified and unmodified areas. Both the increased number of areas and the increased distribution of the modified areas make it harder for a user to perceive any change in the video for a given magnitude of signal modification.
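
One way to construct such a chequered selection mask is sketched below (hypothetical Python/NumPy helper; the four-by-four grid follows the layout of Figure 4):

```python
import numpy as np

def chequered_mask(height, width, rows=4, cols=4):
    """Boolean mask selecting alternating cells of a rows x cols grid,
    i.e. the 'black squares' of a chequered pattern of modified areas."""
    row_idx = (np.arange(height) * rows // height)[:, None]  # cell row per pixel
    col_idx = (np.arange(width) * cols // width)[None, :]    # cell column per pixel
    return (row_idx + col_idx) % 2 == 0
```

Passing such a mask to an embedding routine modifies alternate cells only, so every modified area shares a border with unmodified video.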

Figure 5 illustrates an alternative video modification pattern. Video area 500 is partitioned into four rows of equal height and four columns of equal width to generate sixteen areas of equal size. The sixteen areas are divided between two sets: a first set 510 in which the red and green colour signals of the video are modified, and a second set of areas 520 where the colour signals are unmodified. The set of first areas 510 and the set of second areas 520 are distributed to form a chequered pattern, as in the black and white squares of a chess board or checkers board. Each area having modified colour signals is adjacent to at least one other area where the colour signals are not modified, such that a device 200 seeking to detect the embedded data may make a comparison between the different areas in order to detect an average colour signal shift.

In an alternative embodiment, similar to Figure 5 but not illustrated, a different arrangement of the modified colour signals is provided. The same arrangement of a first set of areas and a second set of areas is used as in Figure 5, but in this arrangement only the red colour signal of the first set of video areas is modified and only the green colour signal of the second set of video areas is modified. In this way, each video area having the red signal modified is adjacent to another area which does not have the red signal modified, and similarly each area having the green signal modified is adjacent to at least one area which does not have the green signal modified. As such, a detecting device 200 can detect a minor colour shift over the pixels of an area.

In the embodiments described above the first set of video areas have the same colour modification filter applied thereto. However, in an alternative embodiment the set of first video areas is split into a plurality of subsets, wherein each subset of video areas has a different colour modification filter applied. The different colour modification filters may modify either the same or different combinations of video colour signals. For example, referring back to Figure 4, the set of video areas in which the red colour signal is modified may be separated into four subsets comprising video areas 402 and 405, 404 and 407, 410 and 413, and 412 and 415. The greater the number of colour modification filters applied to the video, the greater the amount of data that may be carried. However, because each individual colour modification filter is applied to a smaller area of the video, the minimum magnitude of colour signal shift that is detectable by a device 200 is increased.

Figure 6 illustrates a further alternative arrangement for embedding data in a video. The video area 600 is split up into six columns of equal width and six rows of equal height to create 36 sub-areas. Eighteen of these sub-areas, labelled "U" in the drawing, have the colour signals of the video unmodified. Six of the sub-areas, labelled "R", have the red colour component of the video modified; six sub-areas, labelled "G", have the green colour component of the video modified; and six sub-areas, labelled "B", have the blue colour component of the video signal modified. The distribution of the U, R, G and B sub-areas is such that each sub-area having a colour component modified is adjacent at least one other sub-area which does not have that colour component modified. The distribution of sub-areas is also arranged such that sub-areas having the same colour component modified are spread out within the video area.

Figure 7 illustrates a method by which a device 200 decodes embedded data from a displayed modified video image. At 710 an image of the displayed video is captured via a camera. The camera captures a video of a scene which includes the display upon which the modified video is being shown. The device may then perform a basic video processing operation to isolate the displayed video from the rest of the scene. This is relatively easy to achieve because the area to be isolated has four straight sides and in most cases will be much brighter than the rest of the scene. At 720 the captured video is decomposed into a spatial scale. The spatial scale into which the video is decomposed corresponds to the expected spatial scale of the sub-areas of the video image to which the colour modification filters are applied. With reference to Figures 4 and 5, the expected spatial scale is one quarter of the display height or one quarter of the display width. If the spatial scale of the sub-areas in which the data is embedded is not known, a best-fit or best-guess spatial scale is used, for example one eighth of the screen height.
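
The decomposition at 720 can be pictured as block-averaging at the expected scale. The sketch below is illustrative only and assumes the captured frame has already been cropped to the display area:

```python
import numpy as np

def decompose(frame, scale):
    """Decompose the captured image into one spatial scale by averaging
    scale x scale pixel blocks; each block yields one pooled colour sample
    per channel."""
    h = (frame.shape[0] // scale) * scale   # trim to a whole number of blocks
    w = (frame.shape[1] // scale) * scale
    blocks = frame[:h, :w].astype(np.float32)
    blocks = blocks.reshape(h // scale, scale, w // scale, scale, -1)
    return blocks.mean(axis=(1, 3))         # shape: (rows, cols, channels)
```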

At 730 temporal variations at the spatial scale are extracted. The temporal variations are extracted using an expected frequency at which the colour signal of a sub-area of the video is modulated. It is expected that the colour modification signal will be updated once per frame. Broadcast television typically comprises 25 frames per second, giving a frequency of 25 Hertz. As such, the expected frequency range of the temporal variations of the modified colour signal will be between 10 and 100 Hertz. By looking for a colour variation across a particular spatial scale and a particular temporal scale, the colour variation signal can be identified and separated from what is, in the context of the embedded data, the noise of the video. In this way a subtle signal change can stand out over the original video signal, and thus, at 740, the embedded data is decoded from the video image by way of the identified variation in the modified colour signal.
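
The extraction at 730 amounts to band-pass filtering the pooled samples over time. A hedged sketch follows, assuming a camera capturing at 60 frames per second and a pass band around the expected 25 Hz modulation (SciPy's Butterworth filter is one possible choice, not one named by the patent):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def temporal_variations(pooled_series, capture_fps=60.0, band=(20.0, 29.0)):
    """Band-pass the pooled colour samples over time to isolate variations
    near the expected modulation frequency and reject the slowly varying
    video content.  pooled_series: (num_samples, rows, cols, channels);
    a second or more of samples is needed for the filter to settle."""
    nyquist = capture_fps / 2.0
    b, a = butter(2, [band[0] / nyquist, band[1] / nyquist], btype="bandpass")
    return filtfilt(b, a, pooled_series, axis=0)
```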

Figure 8 illustrates an alternative method for deriving embedded video data from a video. At 810 captured video of a scene, which includes a display upon which modified video is being displayed, is received. Optionally, and not illustrated, the area of the video display is isolated from the rest of the captured scene. This may be done using a brightness-based filtering algorithm, for example. At 820 a spatial average of the captured video is taken. The spatial average coincides with an expected spatial distribution of the sub-areas to which different colour modification filters are applied within the displayed video. Subsequently, at 830, temporal filtering is applied to the spatial averages. The temporal filtering is performed at the expected frequency of changes in the embedded data within the displayed video. The embedded data pattern would be expected to update once per frame and therefore have an update frequency of 25 Hertz, and so the temporal filtering may be applied over a range between 20 and 80 Hertz.

At 840 the colour shifts applied to different sub-areas within the displayed video are identified. If the colour modification pattern applied to different sub-areas within the displayed video is not known, then at this stage a correlation process may be performed to identify which sub-areas, if any, have had a common colour modification filter applied thereto. If any correlations are identified then the signal may be further refined by taking an average of the correlated sub-areas, the average providing increased accuracy of signal detection. Where appropriate, a weighted average may be applied.
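
The correlation step at 840 might look like the following sketch (hypothetical helper; `signals` holds one temporal colour trace per sub-area):

```python
import numpy as np

def correlated_groups(signals, threshold=0.8):
    """Greedily group sub-areas whose temporal colour traces correlate
    strongly, suggesting a common colour modification filter; averaging a
    group's members then improves detection.  signals: (areas, time)."""
    corr = np.corrcoef(signals)
    groups, assigned = [], set()
    for i in range(signals.shape[0]):
        if i in assigned:
            continue
        group = [j for j in range(signals.shape[0])
                 if j not in assigned and corr[i, j] >= threshold]
        assigned.update(group)
        groups.append(group)
    return groups
```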

At 850 the identified colour shift signals derived from at least one sub-area of the displayed video are translated into bit values in order to obtain the data that was embedded in the modified video.
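
The translation at 850 can be as simple as thresholding the sign of each recovered shift; the positive-is-one convention below is an assumption, as the patent does not fix the bit encoding:

```python
def shifts_to_bits(mean_shifts):
    """Translate the average colour shift recovered from each modified
    sub-area into a bit value: positive shift -> 1, negative -> 0."""
    return [1 if shift > 0 else 0 for shift in mean_shifts]
```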

Figure 9 illustrates an apparatus for performing the above decoding methods. The apparatus comprises a camera 910, a processor 920, a memory 930, a buffer 940, a display 950 and a user interface 960. The user interface 960 may be a touch sensitive surface overlaid on the display 950. The processor 920 interacts with each of the other components of device 900. In operation, camera 910 captures a video of a scene which includes a display showing modified video. The captured video is passed to processor 920. Processor 920 is arranged to receive instructions which, when executed, cause the processor 920 to carry out the above described method. The instructions may be stored in the memory 930. Processor 920 is arranged to receive an instruction from user interface 960 to capture video using the camera 910. The data that is extracted from the modified video is temporarily stored in buffer 940. Once sufficient data has been decoded, the processor 920 outputs the decoded data on display 950.

Figure 10 illustrates an example of a captured video image 1000 comprising a video display 1010 having a screen area 1020. For reference, sixteen sub-areas 1030 are illustrated on the screen area to represent the boundaries between areas having different colour modification filters applied thereto. It should be noted that sub-areas 1030 would not be visible in this way. As mentioned above, when processing the captured video 1000 the receiving device 200 (or 900) may isolate the display area 1020 before beginning the spatial and temporal processing, or it may simply process the whole received image. In the latter case, it should be noted that the temporal variation of the scene outside the display area 1020 can be expected to be minimal, and so this causes little disturbance to the decoding process. Where the spatial processing is applied to the whole captured scene, as opposed to some otherwise identified screen area, the frequency of the spatial analysis is increased from that expected on the video image alone. In one embodiment the spatial frequency is doubled.

Figure 11 illustrates an additional method whereby a modified video encoded with a first set of embedded data is processed and a second set of data is additionally embedded therein. At 1110 video with first embedded data is received. At 1120 the embedded data within the received video is detected, and the modified areas and colour signals are identified. At 1130 additional data is received for embedding in the received video; this will comprise the second embedded data. At 1140 a colour modification filter is created corresponding to the received additional data. The colour modification filter is created so as to not conflict with the modified colour signals embedding the first embedded data. As such, the colour modification filter created at 1140 is preferably created on a colour signal different to the colour signal upon which the first embedded data is encoded. Where the first embedded data has been encoded across all of the colour signals of the received video, the created colour modification filter is applied to unmodified colour signals in particular sub-areas of the received video. If at 1140 it is determined that no further colour modification may be applied to the received video without preventing the first embedded data from being decoded, or without creating a perceptible change in the received video, then an error message is output. However, where an appropriate colour modification filter is created at 1140, then at 1150 this colour modification filter is applied to the received video and the further modified video is output, comprising both the first and second embedded data.
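
The channel selection at 1140, including its error path, might be sketched as follows (hypothetical helper; an RGB colour space is assumed):

```python
def pick_free_channel(used_channels, num_channels=3):
    """Choose a colour component not already carrying embedded data; if
    every component is taken, raise an error rather than risk making the
    first watermark undecodable (mirroring the error output at 1140)."""
    for channel in range(num_channels):
        if channel not in used_channels:
            return channel
    raise ValueError("no unmodified colour component available")
```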

It will be apparent to the skilled person that the exact order and content of the actions carried out in the method described herein may be altered according to the requirements of a particular set of execution parameters. Accordingly, the order in which actions are described and/or claimed is not to be construed as a strict limitation on the order in which actions are to be performed.

It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. The word "comprising" does not exclude the presence of elements or steps other than those listed in a claim, "a" or "an" does not exclude a plurality, and a single processor or other unit may fulfil the functions of several units recited in the claims. Any reference signs in the claims shall not be construed so as to limit their scope.

Claims

1. A method of embedding data in a video, the method comprising:
receiving a video;
receiving data to be embedded in the video;
encoding the data in a colour modification filter, wherein the colour modification filter is arranged to modify at least one colour component of the video separately from the other colour components; and
selecting at least one area of the video, and applying the colour modification filter to the at least one area to create modified video.
2. The method of claim 1, wherein the colour modification filter modifies at least two colour components, each modified separately from the other colour components.
3. The method of claim 1 or 2, wherein no colour modification filter is applied to at least one further area of the video.
4. The method of any preceding claim, wherein the set of colour components comprises red, green and blue.
5. The method of any preceding claim, wherein the at least one area of the video to which the colour modification filter is applied comprises a first area, wherein the first area is adjacent to a second area of video, and wherein the colour components modified by the colour modification filter in the first area are not modified in the second area.
6. An apparatus for embedding data in a video, the apparatus comprising: at least one input arranged to receive a video and to receive data to be embedded in the video;
a data encoder arranged to encode the data in a colour modification filter, wherein the colour modification filter is arranged to modify at least one colour component of the video separately from the other colour components; and a video processor arranged to select at least one area of the video, and to apply the colour modification filter to the at least one area.
7. The apparatus of claim 6, wherein the colour modification filter modifies at least two colour components, each modified separately from the other colour components.
8. The apparatus of claim 6 or 7, further comprising a video encoder arranged to encode the modified video.
9. A method of analysing video, the method comprising:
capturing a video image with a camera;
decomposing the captured image into at least one spatial scale;
extracting temporal variations at a predetermined frequency range; and decoding embedded data from the temporal variations.
10. The method of claim 9, wherein the predetermined frequency is the frame rate of the video being analysed.
11. The method of claim 9 or 10, further comprising detecting a relative size of the video image compared to the total captured image area.
12. The method of any of claims 9 to 11, further comprising decomposing the captured image into at least one spatial scale, the spatial scale chosen as a fraction of the relative size of the video image compared to the captured image area.
13. The method of any of claims 9 to 12, further comprising outputting the decoded embedded data to a buffer.
14. A device for analysing video, the device comprising:
a camera arranged to capture a video image; and
a processor arranged to: decompose the captured image into at least one spatial scale; extract temporal variations at a predetermined frequency range; and decode embedded data from the temporal variations.
15. The device of claim 14, wherein the predetermined frequency is the frame rate of the video being analysed.
16. The device of claim 14 or 15, wherein the processor is further arranged to detect a relative size of the video image compared to the total captured image area.
17. The device of any of claims 14 to 16, wherein the processor is further arranged to decompose the captured image into at least one spatial scale, the spatial scale chosen as a fraction of the relative size of the video image compared to the captured image area.
18. The device of any of claims 14 to 17, the device further comprising a display for outputting the decoded embedded data.
19. A method of processing a video with embedded data, the method comprising:
receiving a video with first embedded data;
detecting the first embedded data within the video, and identifying which colour signal has been modified to carry the first embedded data;
receiving additional data to be embedded in the video;
encoding the additional data in a colour modification filter, wherein the colour modification filter is arranged to modify a different colour component than the one modified to embed the first embedded data; and
selecting at least one area of the video, and applying the colour modification filter to the at least one area.
20. A computer-readable medium, carrying instructions, which, when executed by computer logic, causes said computer logic to carry out any of the methods defined by claims 1 to 6, 9 to 13, and 19.
PCT/EP2013/053978 2013-02-27 2013-02-27 Method and apparatus for embedding data in video WO2014131448A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/053978 WO2014131448A1 (en) 2013-02-27 2013-02-27 Method and apparatus for embedding data in video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2013/053978 WO2014131448A1 (en) 2013-02-27 2013-02-27 Method and apparatus for embedding data in video

Publications (1)

Publication Number Publication Date
WO2014131448A1 2014-09-04

Family

ID=47827182

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2013/053978 WO2014131448A1 (en) 2013-02-27 2013-02-27 Method and apparatus for embedding data in video

Country Status (1)

Country Link
WO (1) WO2014131448A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6141441A (en) * 1998-09-28 2000-10-31 Xerox Corporation Decoding data from patterned color modulated image regions in a color image
US20020027612A1 (en) * 2000-09-07 2002-03-07 Brill Michael H. Spatio-temporal channel for images
WO2007011889A2 (en) * 2005-07-19 2007-01-25 Etv Corporation Methods and apparatus for providing content and services coordinated with television content
US20130028465A1 (en) 2011-07-28 2013-01-31 Fujitsu Limited Digital watermark embedding apparatus and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIAOQIANG LI ET AL: "Multi-channel data hiding scheme for color images", INFORMATION TECHNOLOGY: CODING AND COMPUTING [COMPUTERS AND COMMUNICATIONS], 2003. PROCEEDINGS. ITCC 2003. INTERNATIONAL CONFERENCE ON, APRIL 28-30, 2003, PISCATAWAY, NJ, USA, IEEE, 28 April 2003 (2003-04-28), pages 569-573, XP010638682, ISBN: 978-0-7695-1916-6 *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13707608

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct app. not ent. europ. phase

Ref document number: 13707608

Country of ref document: EP

Kind code of ref document: A1