GB2508242A - Instructions to modify video for the inclusion of product placement objects - Google Patents


Info

Publication number
GB2508242A
GB2508242A (application GB1221327.8A / GB201221327A)
Authority
GB
United Kingdom
Prior art keywords
video data
additional
objects
instructions
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1221327.8A
Other versions
GB201221327D0 (en)
GB2508242B (en)
Inventor
Julien Fauqueur
Simon Cuff
Philip Mclauchlan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mirriad Ltd
Original Assignee
Mirriad Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mirriad Ltd filed Critical Mirriad Ltd
Priority to GB1221327.8A priority Critical patent/GB2508242B/en
Publication of GB201221327D0 publication Critical patent/GB201221327D0/en
Priority to US14/091,294 priority patent/US9402096B2/en
Publication of GB2508242A publication Critical patent/GB2508242A/en
Application granted granted Critical
Publication of GB2508242B publication Critical patent/GB2508242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/435Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/2723Insertion of virtual advertisement; Replacing advertisements physical present in the scene by virtual advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H20/00Arrangements for broadcast or for distribution combined with broadcast
    • H04H20/10Arrangements for replacing or switching information during the broadcast or the distribution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04HBROADCAST COMMUNICATION
    • H04H60/00Arrangements for broadcast applications with a direct linking to broadcast information or broadcast space-time; Broadcast-related systems
    • H04H60/02Arrangements for generating broadcast information; Arrangements for generating broadcast-related information with a direct linking to broadcast information or to broadcast space-time; Arrangements for simultaneous generation of broadcast information and broadcast-related information
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/812Monomedia components thereof involving advertisement data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications

Abstract

A system and method for incorporating additional video objects into input video data. The method includes steps of retrieving the input video data A-E, analysing the input video data and creating instructions for generating additional video data comprising the one or more additional video objects having desired visual attributes identified by the video analysis. The instructions are transmitted to a first remote system that has already received part of the input video data. The algorithm or instructions are used to generate S604 the additional video data B, at the remote system, which may be incorporated into the input video data to produce an output video S606. The instructions allow a remote hub to generate the additional video content locally, rendering object artwork to suit the source video. The video objects may be representations of products which are inserted into the video to create product placement advertising.

Description

Producing Video Data
Technical Field
The present invention relates to producing video data. In particular, but not exclusively, the present invention relates to methods for, and for use in, incorporating one or more additional video objects into source video data to produce output video data, to computer programs and computer program products arranged for performing such methods, and to systems comprising apparatus adapted for performing such methods.
Background
The television broadcast industry has changed significantly in recent years.
Prior to these changes, television programmes were often recorded on video tape, either in a television studio or on location. With videotape there is no file structure; just linear picture information. The availability of digital technologies has resulted in media which are structured with directories and files. The number of processes between raw captured material and the final material is constantly increasing as, in the file-based domain, it is possible to create workflows by concatenating several processes.
Up until recently, branded products could be incorporated into video material by physical or prop placement at the time the video material was recorded, to generate revenue for the content producer and content provider via product placement. If it were desired to include a product in a given scene, the physical product, or a very good facsimile, would have to be placed in the scene when it was recorded. Whilst this was very simple, it was highly inflexible.
With digital file processing, many new processes become possible that can be used to embed a branded product within a scene retrospectively. This may involve digitally post-processing a captured scene to add a representation of, for example, a branded drinks container on a table or shelf. US-A1-2009/026573? describes a system that allows key frames of a video content item to be identified, such as low value frames, high value frames and product placement frames. Advertisements such as advert frames may be inserted into a sequence of key frames of the video content item, for example before or after one or more high value frames or by modifying one or more of the high value frames to include an advertisement. The key frame sequence including the advertisements can then be published for viewing by others.
It would be desirable to provide improved arrangements for producing video data.
Summary
In one embodiment, the method may comprise incorporating one or more additional video objects into input video data to produce output video data, the method comprising: retrieving the input video data; analysing the input video data to identify one or more desired visual attributes for the one or more additional video objects to possess when incorporated into the input video data; creating instructions for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes; and transmitting at least the instructions to a first remote system that has already received at least part of the input video data, the instructions being useable by the remote system to generate the additional video data for incorporation into the input video data.
By transmitting the instructions to a remote system that has already received at least part of the input data in preference to generating and transmitting the additional video data, the bandwidth requirements and transfer time are both reduced. As the input video data is analysed to identify one or more desired visual attributes, and the instructions comprise the data needed for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes, the additional video objects will have the desired appearance within output video data.
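Purely as an illustration of the bandwidth argument above (this sketch is not part of the patent disclosure), the instructions might be serialised as a compact structured payload; every field name below is an assumed example:

```python
import json

# Hypothetical sketch of an instruction payload of the kind described
# above. Every field name is an assumed example, not taken from the patent.
def build_instructions(object_id, x, y, scale, effects, mask_frames):
    return {
        "object_id": object_id,        # which additional video object to render
        "position": {"x": x, "y": y},  # horizontal and vertical placement
        "scale": scale,                # size relative to the supplied artwork
        "effects": effects,            # appearance effects, e.g. blur or grain
        "mask_frames": mask_frames,    # frames where foreground occludes the object
    }

instructions = build_instructions("drink_can_01", 412, 280, 0.6,
                                  ["blur", "grain"], [12, 13, 14])
payload = json.dumps(instructions)
# The payload is a few hundred bytes, versus megabytes for pre-rendered
# video data -- the bandwidth saving referred to in the text.
print(len(payload) < 1024)  # → True
```

A remote system that already holds the input video needs only this small payload, plus any previously transferred artwork, to render the additional video data itself.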
In one embodiment, the method may comprise receiving the input video data from a second remote system, the second remote system being different from the first remote system.
An advantage of receiving the input video data from a different remote system to the one to which the instructions were transmitted is that this allows increased flexibility in the process. There may be different resources at each remote system and a more secure and/or more efficient data transfer connection from the second remote system.
In one embodiment, the method may comprise transmitting additional media data associated with the one or more additional video objects for use in generating the additional video data to the first or second remote system. The media data may comprise high quality artwork to represent the product. By transmitting the additional media data associated with the additional video objects, it can be utilised by the instructions for generating the additional video data, giving a more realistic video object.
In one embodiment, generating the additional video data comprises generating one or more virtual products corresponding to the one or more additional video objects and applying the additional media data to the one or more virtual products.
Generating one or more virtual products to which the additional media data is applied allows more flexibility in generating the additional video data, as the virtual products may have different media data applied for different circumstances.
In one embodiment, the method may comprise transmitting at least part of the additional media data prior to transmitting the instructions. This allows for overall transfer time to be reduced as the instructions may take less time to transfer compared with the additional media data.
In one embodiment, the one or more desired visual attributes include the position of the one or more additional video objects in the input video data, the method comprising: transmitting instructions specifying the horizontal and vertical position at which the one or more additional video objects are to be positioned in the additional video data. This allows the generation of the additional video data to become a more automated process, requiring less user interaction.
In one embodiment, the one or more desired visual attributes include obscuring at least part of the one or more additional video objects, the method comprising: transmitting instructions specifying that at least part of the one or more additional video objects is to be masked such that one or more foreground objects appear in front of the at least part of the one or more additional video objects in the output video data. By transmitting instructions for this, generating the additional video data becomes a more automated process, requiring less user interaction.
In one embodiment the one or more desired visual attributes include the visual appearance of the one or more additional video objects, the method comprising: transmitting instructions specifying one or more appearance effects to be used in relation to the one or more additional video objects when generating the additional video data. By transmitting instructions for this, generating the additional video data becomes a more automated process, requiring less user interaction.
In one embodiment, the first remote system has already received all of the input video data. This allows for overall transfer time to be reduced as the instructions may take less time to transfer compared with the input video data.
In one embodiment, the method may comprise transmitting data identifying one or more locations in a video processing system at which the output video data is to be stored when produced. This provides an automated process for the output data to be stored in the correct locations.
In one embodiment, the instructions comprise overlay generation instructions for generating a video overlay comprising the one or more additional video objects.
As the video overlay comprises one or more additional video objects, it can be viewed in conjunction with the input video data to provide an in-context video of the additional video object, allowing for in-context checks of the additional video objects.
This also allows for a plurality of additional video objects to be added to input video data separately, allowing for multiple combinations of the additional video objects in the output video data.
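To illustrate how a video overlay of this kind could be combined with input video, here is a minimal per-pixel alpha-blending sketch; the frame and alpha representations are assumptions for illustration only:

```python
# Minimal compositing sketch: frames are rows of (R, G, B) tuples and the
# overlay carries a per-pixel alpha value (1.0 = overlay fully visible).
# Illustrative only; real rendering pipelines work on image buffers.
def composite(background, overlay, alpha):
    """Blend one overlay frame onto one background frame."""
    out = []
    for bg_row, ov_row, a_row in zip(background, overlay, alpha):
        row = []
        for bg, ov, a in zip(bg_row, ov_row, a_row):
            row.append(tuple(round(a * o + (1 - a) * b) for b, o in zip(bg, ov)))
        out.append(row)
    return out

bg = [[(10, 10, 10), (10, 10, 10)]]
ov = [[(200, 0, 0), (200, 0, 0)]]
alpha = [[1.0, 0.0]]  # left pixel fully overlay, right pixel fully background
print(composite(bg, ov, alpha))  # → [[(200, 0, 0), (10, 10, 10)]]
```

Because each overlay is blended independently, several overlays can be applied to the same input video in any combination, as the text notes.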
In one embodiment, the input video data comprises an intermediate working version of source video data, the intermediate working version including at least video material corresponding to one or more selected segments within the source video data, the selected one or more segments having been selected for the inclusion of the one or more additional video objects. As a distributed network is used for incorporating additional video objects with source video data, the process is afforded greater flexibility and security. The intermediate working version is not necessarily in the same order as the source video data, allowing for similar segments to be grouped together, simplifying the process of adding in additional video data, and reducing the risk of the video content being exposed prior to airing.
In one embodiment the method may comprise receiving metadata associated with the intermediate working version, the metadata identifying at least one frame within the source video data which corresponds to the selected one or more frames.
An advantage of this is that the intermediate working version can allow for similar segments to be grouped together, benefiting the process of adding in additional video data.
Further features and advantages of the invention will become apparent from the following description of preferred embodiments of the invention, given by way of example only, which is made with reference to the accompanying drawings.
Brief Description of the Drawings
Figure 1 is a schematic diagram showing a system in accordance with some embodiments.
Figure 2 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
Figure 3 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into input video data to produce additional video data in accordance with some embodiments.
Figure 4 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into input video data to produce additional video data in accordance with some embodiments.
Figure 5 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
Figure 6 is a diagram that illustrates schematically a method for incorporating one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
Figure 7 is a schematic diagram showing a system in accordance with some embodiments.
Figure 8 is a schematic diagram showing a system in accordance with some embodiments.
Figure 9 is a schematic diagram showing a system in accordance with some embodiments.
Figure 10 is a schematic diagram showing a system in accordance with some embodiments.
Detailed Description
Figure 1 is a schematic diagram showing a video processing system 100 in accordance with some embodiments.
The video processing system 100 includes four sub-systems 102, 104, 106, 108 (referred to herein as "hubs"). Each hub performs one or more video processing tasks or functions within the video processing system 100. Each hub 102, 104, 106, 108 is situated in one or more geographical locations. In some embodiments, each of the hubs 102, 104, 106, 108 comprises computer hardware which has access to a local data storage system and preferably a cluster of Graphics Processing Unit (GPU)-enabled computers for video processing. It is known that video processing can be carried out on alternatives to GPUs and embodiments of the invention should not be limited to carrying out the video processing on GPUs only.
Each hub 102, 104, 106, 108 is connected to one or more other of the hubs 102, 104, 106, 108 via one or more data communication networks 110. In some embodiments, the hubs 102, 104, 106, 108 are connected to each other via the Internet. The hubs 102, 104, 106, 108 may each be located on a different Local Area Network (LAN). The LANs may be interconnected by a Virtual Private Network (VPN); a private network that uses the one or more data communication networks 110 to connect the hubs 102, 104, 106, 108 together securely over a potentially insecure network such as the Internet. Alternatively, some or all of the hubs 102, 104, 106, 108 may be interconnected using leased lines or other private network connections.
Hub 102, which is referred to herein as the "source" hub, performs, amongst other things, video data capture and video data analysis in the video processing system 100.
The source hub 102 may retrieve source video data as one or more digital files, supplied, for example, on video or data tape, on digital versatile disc (DVD), over a high-speed computer network, via the network 110, on one or more removable disc drives or in other ways.
The source hub 102 may be located on the same LAN as a media asset management server system 112 associated with a video content provider. This allows data transfer between the media asset management server system 112 and the source hub 102 to benefit from the speed and security of a LAN-based connection, rather than potentially suffer the limited bandwidth and access latency common with Internet data transfers.
In some embodiments, the source hub 102 comprises a video data analysis module 102a, which performs pre-analysis in relation to source video data. Such analysis may be performed using appropriate software which allows products to be placed digitally into existing video material.
The pre-analysis may be fully automated in that it does not involve any human intervention.
In some embodiments, the video data analysis module 102a is used to perform a pre-analysis pass in relation to the source video data to identify one or more segments in the source video data. This may involve using shot detection and/or continuity detection, which will now be described in more detail.
Pre-analysis may comprise using a video format detection algorithm to identify the format of the source video data, and if necessary, convert the source video data into a format capable of receiving one or more additional video objects.
Pre-analysis may comprise using a shot detection function to identify the boundaries between different shots in video data. For example, the video data analysis module 102a automatically detects "hard" and "soft" cuts between different shots, which correspond to hard and soft transitions respectively. Hard cuts correspond to an abrupt change in visual similarity between two consecutive frames in the video data. Soft cuts correspond to the beginning or the end of a soft transition (for example wipe and cross fading transitions), which is characterised by a significant but gradual change in visual appearance across several frames.
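A hard-cut detector of the kind described can be sketched as a simple frame-difference test; the greyscale frame representation and threshold value are illustrative assumptions, not details from the patent:

```python
def detect_hard_cuts(frames, threshold=50.0):
    """Flag a hard cut where the mean absolute pixel difference between
    consecutive frames exceeds the threshold. Frames are flat lists of
    grey values; the threshold is an illustrative assumption."""
    cuts = []
    for i in range(1, len(frames)):
        diff = sum(abs(a - b) for a, b in zip(frames[i - 1], frames[i]))
        if diff / len(frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Two near-identical frames, then an abrupt change: one hard cut.
frames = [[20, 20, 20], [22, 21, 20], [200, 210, 205]]
print(detect_hard_cuts(frames))  # → [2]
```

Soft cuts would need a more elaborate test spanning several frames, since the change there is gradual rather than abrupt.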
Pre-analysis may comprise using a continuity detection function to identify similar shots (once detected) in video data. This can be used to maximise the likelihood that each (similar) shot in a given scene is identified; this may be a benefit in the context of digital product placement. For each detected shot, a shot similarity algorithm automatically detects visually similar shots within the source video data.
The similarity detection is based on matching between frames, which captures an overall global similarity of background and lighting. It may be used to identify shots which are part of a given scene in order to speed up the process of selecting shots that should be grouped together on the basis that they are similar to each other.
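As a toy illustration of frame-matching similarity (the patent does not specify the measure used, so the histogram approach and distance bound here are assumptions), shots can be compared with coarse grey-level histograms:

```python
def grey_histogram(frame, bins=4):
    """Coarse grey-level histogram of a frame (a flat list of grey values)."""
    hist = [0] * bins
    for v in frame:
        hist[min(v * bins // 256, bins - 1)] += 1
    return hist

def similar(shot_a, shot_b, max_dist=2):
    """Treat two shots as similar when their histograms nearly agree.
    The distance bound is an illustrative assumption."""
    ha, hb = grey_histogram(shot_a), grey_histogram(shot_b)
    return sum(abs(a - b) for a, b in zip(ha, hb)) <= max_dist

# Two shots of the same dimly lit locale match; a bright shot does not.
dark1, dark2, bright = [10, 20, 30, 40], [15, 25, 35, 30], [220, 230, 240, 250]
print(similar(dark1, dark2), similar(dark1, bright))  # → True False
```

Grouping shots that pass such a test is what lets a given scene's segments be processed together, as described below for the segment sorting module.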
Pre-analysis may comprise using an object and/or locale template recognition function and/or a face detection and recognition function. Object template recognition involves identifying objects which reappear across, for example, multiple episodes of a television program, and which are appropriate for digital product placement, so that they can automatically be found in other episodes of the program.
Locale template recognition allows a template to be built for a certain locale in a television program and automatically detect the appearance of the locale in subsequent episodes of the program. A locale is a location (e.g. a room) which appears regularly in the program across multiple episodes. Face detection and recognition involve identifying characters which, for example, reappear across multiple episodes of a television programme. This allows for characters to be associated with a particular digital product placement.
Pre-analysis may comprise using a tracking (such as 2D point tracking) function to detect and track multiple point features in video data. This involves using a tracking algorithm to detect and track feature points between consecutive frames.
Feature points correspond to locations within an image which are characteristic in visual appearance; in other words they exhibit a strong contrast (such as a dark corner on a bright background). A feature is tracked by finding its location in the next frame by comparing the similarity of its neighbouring pixels.
Pre-analysis may comprise using a planar tracking function to follow image regions over time and determine their motion under the assumption that the surface is a plane. This may involve tracking 2D regions defined by splines, calculating their 2D translation, rotation, scale, shear and foreshortening through the video data. This process creates motion information that can be exploited by other video analysis functions.
Pre-analysis may comprise using a motion-from-features detection function which involves using the tracked 2D points to determine 2D motion in the video data.
Given a set of tracked feature points, motion-from-features detection involves detecting which points move together according to the same rigid motion.
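A crude sketch of grouping points that move together follows; full rigid-motion estimation is considerably more involved, and the track format and tolerance here are assumptions for illustration:

```python
def group_by_motion(tracks, tol=1):
    """Group tracked points whose frame-to-frame displacement agrees
    within a tolerance -- a crude stand-in for rigid-motion grouping.
    Each track maps a point name to (position_in_frame_0, position_in_frame_1)."""
    groups = {}
    for name, (p0, p1) in tracks.items():
        key = (round((p1[0] - p0[0]) / tol), round((p1[1] - p0[1]) / tol))
        groups.setdefault(key, []).append(name)
    return sorted(groups.values())

tracks = {
    "a": ((0, 0), (5, 0)),    # background pans right...
    "b": ((10, 4), (15, 4)),  # ...with the same displacement
    "c": ((3, 3), (3, 9)),    # a foreground object moves down
}
print(group_by_motion(tracks))  # → [['a', 'b'], ['c']]
```

Separating point sets that share a rigid motion is what lets background motion (e.g. a camera pan) be distinguished from independently moving foreground objects.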
Pre-analysis may comprise using a 3D tracking function which involves using the tracked 2D points to determine 3D motion in the video data. 3D tracking involves extracting geometric information from a video shot, for example the camera focal distance, position and orientation as it moved. The other information recovered is the 3D shape of the viewed scene, represented as 3D points.
Pre-analysis may comprise using an autokeying function to separate background and foreground areas, allowing products to be digitally placed while respecting any occluding (foreground) objects to provide a natural-looking embedded image. When a foreground object moves in front of the background where it is desired to place a product digitally, the area into which the product is to be placed should stop at the boundary between the foreground and background areas. In general, the digitally placed product should cover the "mask" area of the background data. The correct mask can be especially difficult to create when the edge of the foreground object is very detailed or blurred. The autokey algorithm uses the planar tracker to create motion information so that known background or foreground areas can be propagated forwards and backwards through the video in time.
Pre-analysis may comprise region segmentation which is used to split the video data into regions that span both time and space. Region segmentation involves using an algorithm that detects regions of similar pixels within and across frames of a given video scene, for example to select point features for motion estimation.
Pre-analysis may comprise using a black border detection function, which is used to find the borders around the video image part of video data. This involves using an algorithm that detects the presence of black bars around the frames in a video sequence, which can interfere with various video processing algorithms.
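Detecting letterbox bars can be sketched by scanning for uniformly black rows at the frame edges; the threshold value and frame layout are illustrative assumptions:

```python
def black_bar_rows(frame, threshold=16):
    """Count fully-black rows at the top and bottom of a frame (letterbox
    bars). The frame is a list of rows of grey values; the threshold for
    "black" is an illustrative assumption."""
    def is_black(row):
        return all(v < threshold for v in row)
    top = 0
    while top < len(frame) and is_black(frame[top]):
        top += 1
    bottom = 0
    while bottom < len(frame) - top and is_black(frame[-1 - bottom]):
        bottom += 1
    return top, bottom

frame = [[0, 0, 0], [0, 0, 0], [120, 130, 140], [90, 80, 70], [0, 0, 0]]
print(black_bar_rows(frame))  # → (2, 1)
```

The same scan applied to columns would find pillarbox bars; once located, the bars can be cropped away before the other analysis passes run.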
Pre-analysis may comprise proxy creation, which involves creating a lower resolution and/or compressed version of the source video data.
The source hub 102 also comprises a segment sorting module 102b, which is used to sort the identified segments in the source video data.
As explained above, the video data analysis module 102a may be used to identify the shots in the source video data and to find similar shots once the shots have been identified. The segment sorting module 102b is used to group identified segments together, for example on the basis that they all share one or more common characteristics. The segment sorting module 102b may group identified segments together on the basis that they all correspond to a given scene in the source video data (even if they were dispersed throughout the source video data originally). Other suitable characteristics may include a common object, locale or suchlike.
The source hub 102 also comprises a digital product placement assessment module 102c, which is used to identify and assess opportunities for digital product placement into the source video data. Identifying and assessing opportunities may involve human interaction. Identifying and assessing may comprise one or more of:
* identifying opportunities for digital product placement;
* creating a mock-up of some or all of the source video data with one or more digitally placed products;
* rendering preview imagery for the opportunity for digital product placement, for example with blue boxes indicating where the product could be digitally placed; and
* generating an assessment report.
Hub 104, which is referred to herein as the "creative" hub, is used for creative work in the video processing system 100. The creative hub 104 is provided with appropriate creative software for use in the creative process.
The creative hub 104 comprises a tracking module 104a, which may be part of the creative software. The tracking module 104a may be used to determine how the position of a digitally placed product should vary when added into video material, for example to take into account any movement of the camera that recorded the video material. Tracking may be automated and/or may involve human intervention.
The creative hub 104 also comprises a masking module 104b, which may be part of the creative software. The masking module 104b is used to assess how to handle occlusion (if any) of a product to be digitally placed in video material having regard to other objects that may already be present in the video material. Masking assessment may be automated and/or may involve human intervention.
The creative hub 104 also comprises an appearance modelling module 104c, which may be part of the creative software. The appearance modelling module 104c is used to provide a desired appearance in relation to the digitally placed product, for example using blur, grain, highlight, 3D lighting and other effects. Appearance modelling may be automated and/or may involve human intervention.
Since the creative process uses artistic and image manipulation skills, the creative hub 104 may be located near to a pool of such labour skills. The geographical split between the source hub 102 and the creative hub 104 thus provides an efficiency benefit, whilst still minimising the risk of piracy by controlling what and how video is transmitted outside of the source hub 102.
Hub 106, which is referred to herein as the "quality control" (QC) hub, performs quality control in the video processing system 100. Testing and review of video material or associated data created by the creative hub 104 is performed at the QC hub 106. The QC hub 106 may be geographically located remote from both the source hub 102 and the creative hub 104. The QC hub 106 is provided with appropriate quality control software for use in the quality control process.
The QC hub 106 comprises a rendering module 106a, which is used to render video material. Rendering may be fully automated.
The QC hub 106 also comprises a visual QC module 106b, which is used to play back video material for QC purposes and enables a viewer to approve or reject the video material being viewed from a QC perspective.
Hub 108, which is referred to herein as the "distribution" hub, distributes video content in the video processing system 100. The distribution hub 108 is provided with appropriate software for use in the video distribution process.
The distribution hub 108 comprises a rendering module 108a, which is similar to the rendering module 106a of the QC hub 106.
The distribution hub 108 comprises a reconforming module 108b, which is used to combine video material together and will be described in more detail below.
Reconforming may be fully automated using the reconforming module 108b.
In some embodiments, the distribution hub 108 is provided in the same geographic location(s) as the source hub 102, and in some instances may comprise at least some of the same hardware. This logical coupling of the source hub 102 and the distribution hub 108 is indicated by a dashed box 114 in Figure 1. It will be appreciated, however, that the source hub 102 and the distribution hub 108 could be logically separate entities which are not geographically co-located.
The video processing system 100 also includes an online portal 116 which may comprise one or more cloud-based application servers. Data associated with a project may be uploaded to the online portal 116 to facilitate access to the data, for example by clients. The online portal 116 comprises a portal 116a which provides access to the project data. The project data may comprise, for example, segment selection report data (produced by the segment sorting module 102b), digital product placement assessment report data (produced by the digital product placement assessment module 102c) and a mock-up of video material with a digitally placed product (produced by the digital product placement assessment module 102c).
By providing a set of hubs in this way, different stages of a video processing project can be carried out in a distributed manner across different regions or territories, using high speed Internet connections or other types of connections to communicate relevant data between these regions or territories. The video processing system 100 scales well for the optimal deployment of hardware systems.
The video processing system 100 may include a plurality of source hubs 102, for video data capture and analysis within the video processing system 100. A given source hub 102 may conveniently be located geographically close to a given video data provider or owner. Thus, a source hub 102 could be situated in one geographical area, and another source hub 102 could be located in a different geographical area.
The video processing system 100 may include a plurality of creative hubs 104 for creative functions within the video processing system 100. For example, it may be desired to have a plurality of creative hubs 104, each in different geographical areas.
The video processing system 100 may include a plurality of QC hubs 106 for quality control functions within the video processing system 100. For example, it is possible to have a plurality of QC hubs 106, each in different geographical areas.
The video processing system 100 may include a plurality of distribution hubs 108 for distributing video content within the video processing system 100. A given distribution hub 108 may conveniently be located in a geographical area in which video material will be distributed.
It may also be desirable to have multiple different hubs of the same type (for example multiple creative hubs 104) for different clients, to maintain confidentiality.
Embodiments will now be described in which the video processing system 100 is used for a digital product placement project, wherein one or more additional video objects are added to source video data to produce output video data to which the one or more additional video objects have been added.
In these embodiments, one or more products are digitally placed into a programme, such as a television programme, intended for broadcast to an audience.
The one or more products may serve as advertising components and/or may be used to enhance existing video material for the programme, for example to add a special effect.
There are various different types of digital product placement, for example:
* product placement - a branded product or object can be placed into existing video material as if it were there when the video was originally recorded, as would be the case with true product placement; for example, a box of cereal on a kitchen table;
* indoor and outdoor signage - posters, hoardings and billboards, which typically appear in outdoor and indoor scenes and public areas, can be altered to appear to display a chosen product or brand; and
* video placement - video data can be embedded into existing video material, for example a commercial or animated sequence running on a TV screen which is in the background of a scene; it is also possible to insert screens on which the video placement may be played, should one not be available in the scene already.
It will be appreciated, however, that the source video data need not be a programme and could correspond to a feature length film, a promotional video, broadcast media, online media or video-on-demand services or other video material to which it is desired to add the one or more additional video objects.
Figure 2 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
At step 2a, the source hub 102 retrieves source video data. The source video data may be, for example, media programme material into which it is desired to embed one or more additional video objects, such as one or more virtual products. The video material for the programme contains various different shots. The shots are delineated by cuts, where the camera has stopped recording or where the video material is edited to give such an impression. Source video data retrieval may be performed automatically or manually.
At step 2b, the source hub 102 creates a relatively low resolution version of the source video data, referred to herein as a "source proxy".
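By way of illustration only (not part of the disclosed system), the idea behind a low resolution proxy can be sketched in Python as a simple 2x2 box downsample of a frame; a real source hub would transcode with a video toolchain, and all names and values here are hypothetical:

```python
# Illustrative sketch of producing a low-resolution "source proxy" frame
# by 2x2 box downsampling. Frames are simplified to lists of rows of
# pixel intensities; even dimensions are assumed.

def downsample_2x2(frame):
    """Average each 2x2 block of pixels into one proxy pixel."""
    out = []
    for r in range(0, len(frame), 2):
        row = []
        for c in range(0, len(frame[0]), 2):
            block = (frame[r][c] + frame[r][c + 1] +
                     frame[r + 1][c] + frame[r + 1][c + 1])
            row.append(block // 4)
        out.append(row)
    return out

frame = [[10, 20, 30, 40],
         [10, 20, 30, 40],
         [50, 60, 70, 80],
         [50, 60, 70, 80]]
print(downsample_2x2(frame))  # [[15, 35], [55, 75]]
```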
At step 2c, the source hub 102 synchronises the source proxy to one or more hubs, such as the creative hub 104 and the QC hub 106. The creative hub 104 and the QC hub 106 can use the source proxy to create in-context creative sequences and quality control sequences during the subsequent stages of video processing. The creation and synchronising of the source proxy may be performed automatically.
At step 2d, the source hub 102 analyses the source video data. This may involve conducting a pre-analysis pass in relation to the source video data, for example to identify segments corresponding to separate shots in the source video data.
In some embodiments, the step of analysing the source video data occurs concurrently with or prior to creating the source proxy. Analysing the source video data may be performed automatically.
At step 2e, the source hub 102 groups one or more of the identified segments together, for example on the basis that they all relate to the same scene or the same locale. The grouping of identified segments is performed automatically during the pre-analysis stage or manually.
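One common way to identify segments corresponding to separate shots (a possible approach, not one specified by this disclosure) is to look for large frame-to-frame differences. A minimal Python sketch, with frames simplified to flat lists of pixel intensities and an illustrative threshold:

```python
# Illustrative shot-boundary detection by frame differencing. A real
# pre-analysis pass would decode actual video frames; all names and the
# threshold value are hypothetical.

def frame_difference(a, b):
    """Mean absolute pixel difference between two frames."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def detect_segments(frames, threshold=50.0):
    """Split a frame sequence into segments (shots) at large differences."""
    boundaries = [0]
    for i in range(1, len(frames)):
        if frame_difference(frames[i - 1], frames[i]) > threshold:
            boundaries.append(i)
    boundaries.append(len(frames))
    # Each segment is a (start_frame, end_frame_exclusive) pair.
    return list(zip(boundaries, boundaries[1:]))

# Two "shots": dark frames followed by bright frames.
frames = [[10, 12, 11]] * 4 + [[200, 198, 205]] * 3
print(detect_segments(frames))  # [(0, 4), (4, 7)]
```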
At step 2f, the source hub 102 selects one or more of the identified segments for the inclusion of one or more additional video objects. The one or more segments are selected from one or more groupings made in step 2e. The segments may be selected on the basis that they correspond to video material in which it is likely that products could be digitally placed. This step can be performed automatically during the pre-analysis stage or manually.
At step 2g, the source hub 102 creates an embed project: a project for adding one or more additional video objects to one or more segments identified in step 2f. This may involve creating an embed project file which contains data relating to the embed project. The source hub 102 may create multiple embed projects for the source video data, for example where each embed project relates to a different locale and there are multiple different locales in the source video data. The creation of the embed project may be performed automatically, but with a manual trigger. All automatic processes that are triggered manually may be triggered by a user on any of the hubs with appropriate credentials.
Typically, not all of the identified segments of the source video data are, in fact, suitable for product placement. Thus, not all of the identified segments are selected for digital product placement. Segment selection may be performed automatically and/or manually. A human operator may be able to assess the appropriateness of opportunities for product placement in context. For example, a jar of instant coffee would suit a kitchen scene, but would look out of place in a bathroom scene, or in an outdoor desert scene - a human operator might therefore not select certain segments that may appear to provide a good opportunity for product placement on the basis that they would not be suitable in context. In another example, it may be decided that a kitchen worktop in a scene provides a good opportunity for a grocery product placement. It may be desirable to determine how long the kitchen worktop is in view - this may be performed manually or automatically. For example, if it is only a fleeting shot, the product placement opportunity is likely to be of limited interest. On the other hand, if the scene in the kitchen is long and the location chosen for product placement is in view for this duration, it is likely that this scene will be of more interest for a product placement opportunity.
It may also be desirable to determine how many times a particular scene is featured in a programme. One element of product placement is temporal consistency, also known as continuity. This involves having the same product in the same position every time that scene occurs in the programme.
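The duration and continuity measures described above could be derived mechanically from the selected segments. A hypothetical sketch in Python (the segment data, scene names and frame rate are all illustrative):

```python
# Illustrative placement metrics: total on-screen time for a locale and
# how many scenes (segments) feature it, supporting duration and
# continuity assessment. All values are hypothetical.

FPS = 25  # assumed frame rate

# Selected segments: (scene_id, start_frame, end_frame_exclusive)
segments = [
    ("kitchen", 100, 350),
    ("bathroom", 400, 450),
    ("kitchen", 900, 1100),
]

def placement_metrics(segments, scene_id, fps=FPS):
    frames = sum(end - start for s, start, end in segments if s == scene_id)
    occurrences = sum(1 for s, _, _ in segments if s == scene_id)
    return {"seconds_in_view": frames / fps, "scene_occurrences": occurrences}

print(placement_metrics(segments, "kitchen"))
# {'seconds_in_view': 18.0, 'scene_occurrences': 2}
```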
At step 2h, the source hub 102 combines or concatenates video material associated with the selected segments into one composite video file, one for each embed project. The composite or combined video file is referred to herein as an "embed sequence" or "intermediate working version" of the source video data. The creation of the embed sequence may be performed automatically.
The source hub 102 creates an embed sequence from the selected shots, joining them one after the other into one composite video file. The video material may have been dispersed throughout the source video data so that adjacent video material in the composite scene was not necessarily adjacent in the source video data.
In some embodiments, the embed sequence contains a reduced amount of video material compared to the source video data. For example, the embed sequence may contain video material associated with a subset of the identified segments of the source video data -corresponding to the selected segment(s).
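Because the embed sequence concatenates non-adjacent source material, each embed-sequence frame needs to be mapped back to its source frame so the embedded shots can later be reconformed into the source video data. A minimal, purely illustrative sketch of such a mapping:

```python
# Sketch of building an embed sequence frame mapping from selected
# segments: the index is the embed-sequence frame, the value is the
# source frame it came from. Frame numbers are hypothetical.

selected_segments = [(120, 180), (400, 430)]  # (start, end_exclusive) in source

def build_embed_mapping(segments):
    mapping = []
    for start, end in segments:
        mapping.extend(range(start, end))
    return mapping

mapping = build_embed_mapping(selected_segments)
print(len(mapping))              # 90 embed-sequence frames
print(mapping[0], mapping[60])   # 120 400
```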
In some embodiments, the embed sequence does not include an audio track component. Some embodiments comprise removing an audio track component from the source video data (if present).
It may be desirable, at this or another stage, to create one or more mock-ups of the desired look of the embed sequence. Such mock-ups may be created using the digital product placement assessment module 102c.
In some embodiments, creating the mock-up(s) comprises rendering preview imagery which has a blue box or cylinder in the imagery to represent the (as yet unspecified) product to be placed for an interested party to allow the interested party to assess the product placement opportunity in detail. The preview imagery may be at a lower than final resolution, for example to reduce the amount of data to be transmitted if the preview imagery is being communicated over the data communications network 110 and/or so as not to transmit a final resolution version of the preview imagery outside of the source hub 102.
The source hub 102 may insert a 'brand image container' into the preview imagery to assist the assessment by the interested party. For example, CGI-generated street furniture such as an advertising hoarding or bus shelter could be inserted into the mock-up(s), so that a virtual poster displaying a message from the interested party can be placed on this street furniture. In another example, a CGI-generated television could be inserted into, for example, a living room scene, so that virtual videos could be played on the television set. To promote products, the virtual video could be an advertisement for a product or could merely feature the product in question.
The source hub 102 may also create a report comprising one or more metrics associated with the potential product placement opportunity, for example specifying how much total time and over how many scenes the potential product can be seen.
Much popular television is episodic, which means that the same scenes, locales and characters may reappear in each episode or show in a series. Thus, product placement opportunities may relate to more than one episode of a programme, for example for a space on the kitchen table in the house of a famous character over many episodes, or even over multiple series.
There are many ways in which the product placement opportunity can be brought to the attention of the interested party. In some embodiments, the source hub 102 uploads the low resolution mock up material, optionally with the report on the opportunity, to the online portal 116 to facilitate access by the interested party. This allows the opportunity to be presented to a large audience and, using the scalability of cloud-based application servers, can be used to present the opportunity to a number of interested parties in an online marketplace environment. As such, potential advertisers and/or brokers for such advertisers may be able to access continually updated information on current product placement opportunities.
At step 2i, the source hub 102 creates metadata comprising information concerning the embed project.
In some embodiments, the source hub 102 adds the metadata to the embed sequence video data file and/or in the project file created by the source hub 102 at step 2g and/or in a separate file to the embed sequence. The metadata may be created in XML (Extensible Markup Language) or another format. The creation of the metadata may be performed automatically. The metadata may identify, using one or more data elements for each data type, one or more of the following:
* the source hub 102, QC hub 106 and distribution hub 108 to be used for this embed project - this information is used to identify the particular hubs involved in this particular embed project where there are multiple source hubs 102, QC hubs 106 and/or distribution hubs 108 in the video processing system 100;
* a brand and/or brand agency involved;
* the content owner of the media;
* the media family (for example the name of a series of which the source video data corresponds to an episode);
* the particular episode and season associated with the source video data (where appropriate);
* the scene within the episode to which the embed sequence relates - this may be identified using a UUID (Universally Unique Identifier);
* the frames covered by the embed project - this data supports the reconform process, which will be described in more detail below;
* the timecodes in the source video data corresponding to frames in the embed sequence - this data also supports the reconform process;
* the format of the embed sequence, such as whether it is:
o progressive video;
o interlaced video, upper/lower field dominant;
o 3:2 pulldown video with specific field dominance and cadence, which may or may not be the same for each shot;
o advanced pulldown with specific field dominance and cadence, which may or may not be the same for each shot; and
* the codec to be used to compress the video when rendering the project - this may be changed subsequently.
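Where XML is the chosen format, metadata of this kind could be serialised as in the following Python sketch; the element names form no defined schema and are purely illustrative:

```python
# Illustrative serialisation of embed project metadata as XML using the
# standard library. Element and attribute names are hypothetical.
import uuid
import xml.etree.ElementTree as ET

project = ET.Element("embed_project")
ET.SubElement(project, "media_family").text = "Example Series"
ET.SubElement(project, "episode").text = "3"
ET.SubElement(project, "season").text = "1"
ET.SubElement(project, "scene_uuid").text = str(uuid.uuid4())
frames = ET.SubElement(project, "frames")  # frame ranges support reconform
ET.SubElement(frames, "range", start="120", end="180")
ET.SubElement(project, "format").text = "progressive"
ET.SubElement(project, "codec").text = "example-codec"

print(ET.tostring(project, encoding="unicode"))
```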
At step 2j, the source hub 102 initiates synchronisation with the creative hub 104, where the embed project is to be worked on.
In some embodiments, the source hub 102 automatically uploads the embed sequence and/or other project-related data such as project metadata to the creative hub 104, QC hub 106 and/or distribution hub 108 as part of the synchronisation. The source hub 102 may also transmit a message to the relevant hub(s) indicating that the new embed project has been created.
By uploading the embed sequence (rather than the entire source video data), the amount of data to be transferred between the source hub 102 and the creative hub 104 may be significantly reduced where the embed sequence contains less video data than the source video data. Since these data transfers may be via limited bandwidth connections, transfer costs and transfer time may also be improved.
The source hub 102 may also pre-emptively upload the embed sequence to the QC hub 106 at this stage, even though the QC work at the QC hub 106 may not be undertaken for some time. Pre-emptively transmitting the embed sequence to the QC hub 106 may reduce processing times when the QC work does eventually start since it can have already received at least some of the embed sequence by the time the QC work starts. In some cases, the QC hub 106 may have received all of the embed sequence by the time the QC work starts at the QC hub 106.
At step 2k, the source hub 102 transmits the source video data to the distribution hub 108 so that the distribution hub 108 has a copy of the source video data into which the one or more additional video objects are to be added. The transmission of the source video data may be performed automatically, but with a manual trigger.
Figure 3 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into input video data to produce additional video data in accordance with some embodiments.
In these embodiments, the input video data is the embed sequence transmitted from the source hub 102 to the creative hub 104 at step 2j as part of the synchronisation process. In some embodiments, the embed sequence includes only video material corresponding to segments in which the opportunity to embed a product has been agreed. In other words, in such embodiments, segments in which no product is to be added are not communicated to the creative hub 104.
At step 3a, the creative hub 104 sources or prepares additional media data such as high quality artwork to represent the product (referred to herein as "embed artwork"). The embed artwork may comprise artwork images and/or videos and/or other forms of graphics to be used in the embedding process. The embed artwork may include, for example, a high resolution product label, or a suitable photograph of the product or the like. The embed artwork may be prepared at the creative hub 104, received from the source hub 102, from the online portal 116 or otherwise.
There are many ways of building virtual products to which the embed artwork can be applied. For example, virtual products may be built using 3D computer graphics systems such as 3DS Max or Maya, both from Autodesk in Canada. Virtual product building may include the creation of Computer Graphic '3D boxes' that may then be wallpapered with product artwork to form a virtual product, or the design of a virtual bottle in CGI and then the CGI affixing of label artwork. Sourcing or preparing the additional media data may be performed automatically.
At step 3b, the project is then worked on at the creative hub 104. The creative stage may involve significant human intervention, although at least some of the creative steps may be performed at least in part automatically. For example, when used, the creative software automatically separates its timeline into the various shots in the embed sequence upon reading the embed sequence to facilitate working on each shot in succession.
Various creative tasks that may be performed at the creative hub 104 at this stage will now be described. These tasks may be used to identify one or more desired visual attributes for the digitally placed products to possess when incorporated into the embed sequence. Such visual attributes include, but are not limited to, position attributes, masking attributes and visual appearance attributes (for example relating to blur, grain, highlights and 3D lighting effects).
The creative hub 104 may track motion for the virtual product in the embed sequence and produce corresponding tracking instructions that define the desired motion attributes of the product.
Tracking involves tracking the position of the virtual product, as it will appear in the embedded sequence. In all likelihood, the camera that shot the source video data would have moved, either in a tracking shot, or a zoom, such that the position of the virtual product in the corresponding video material would not be in a constant horizontal and vertical position in the picture or in 3D space. Tracking is used to determine the horizontal and vertical position of the virtual product on each frame of the embed sequence in which the product is to be placed. In general, the tracking information may include 2D and 3D perspective effects such as scale, rotation and shear.
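As an illustration of what per-frame tracking data of this kind might look like (one possible representation, not one mandated by the disclosure), each frame could carry a 2D transform combining translation, scale, rotation and shear, applied to the virtual product's points:

```python
# Sketch of per-frame tracking data as 2D transforms (translation,
# scale, rotation, shear) mapping a virtual product's points into frame
# coordinates. All parameter values are hypothetical.
import math

def apply_affine(point, tx, ty, scale, angle_deg, shear=0.0):
    """Map a product-space point into frame coordinates."""
    x, y = point
    a = math.radians(angle_deg)
    # shear and scale, then rotate, then translate
    xs, ys = scale * (x + shear * y), scale * y
    xr = xs * math.cos(a) - ys * math.sin(a)
    yr = xs * math.sin(a) + ys * math.cos(a)
    return (xr + tx, yr + ty)

# Track one product corner across three frames of a slow camera move.
track = [  # (tx, ty, scale, rotation_deg)
    (100.0, 50.0, 1.00, 0.0),
    (102.5, 50.2, 1.01, 0.5),
    (105.0, 50.4, 1.02, 1.0),
]
for frame, (tx, ty, s, r) in enumerate(track):
    print(frame, apply_affine((1.0, 1.0), tx, ty, s, r))
```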
The creative hub 104 may create masks for any foreground objects in the embed sequence that obscure all or part of the embedding area, i.e. the area in which the virtual product is to be embedded, and produce corresponding masking instructions that define the desired masking attributes in relation to the product.
In some embodiments, this process comprises using automatic and semi-automatic techniques such as rotoscoping and keying, in which a combination of user adjustable settings and algorithms may be used to separate the foreground and the background in the embed sequence. Rotoscoping involves, in effect, hand-drawing the outline of occluding objects in front of the virtual product, such as actors and furniture, over the live action. Keying involves using a key signal to determine which of two images is to be chosen for that part of the final image.
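The keying idea mentioned above can be illustrated with a trivial per-pixel chroma key: the key signal is derived from closeness to a key colour and decides which of two images contributes each pixel. This is only a sketch; the key colour, tolerance and pixel format are all illustrative:

```python
# Illustrative keying: a key signal chooses, per pixel, which of two
# images is used for that part of the final image. Here the key is
# derived from proximity to a key colour (a simple chroma key).

KEY_COLOUR = (0, 255, 0)  # hypothetical green-screen key colour

def colour_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def key_pixel(fg_pixel, bg_pixel, tolerance=100.0):
    """Choose the background wherever the foreground matches the key colour."""
    if colour_distance(fg_pixel, KEY_COLOUR) < tolerance:
        return bg_pixel
    return fg_pixel

print(key_pixel((0, 250, 5), (20, 30, 40)))    # near key colour -> background
print(key_pixel((200, 10, 10), (20, 30, 40)))  # foreground kept
```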
The creative hub 104 may perform appearance modelling, relating to positioning and adjusting the appearance of the embed artwork, and produce corresponding appearance modelling instructions that define the desired visual appearance of the product.
For example, it may be desirable to integrate the virtual product into the embed sequence so that it looks like it might have been present when the corresponding video was originally shot. Appearance modelling may therefore be used to make the virtual product look real. It may involve perspective alteration of the object to be placed, to make it look natural in the scene. It may additionally or alternatively involve adding 3D lighting; for example, where a directional light is near the virtual object, a viewer would expect the virtual object to cast shadows from the light. 3D lighting can be added in a number of industry standard 3D packages such as 3DS Max or Maya from Autodesk Inc., in Canada.
In some cases, it may be desirable to apply one or more further image processing features to the CGI object that is to be placed in the scene, so that the object matches the look created by the camera, lighting and post production process.
Alternatively, an entirely animated and/or artificial appearance may be desired.
One option for rendering the final material - a final version of the source video material which includes any embedded products - would be to render it at the creative hub 104. This would involve rendering the embed sequence at the creative hub 104, combining it with the source video data at the creative hub 104 to form a complete programme with the embedding included, and then transferring the complete embedded material to the distribution hub 108, possibly via the QC hub 106.
However, all of the source video data would need to be available at the creative hub 104 in order to do so. As explained above, in some embodiments, only the embed sequence, rather than the complete source video data, is transmitted to the creative hub 104.
Various embodiments will now be described in which the final material to be broadcast is not finally rendered at the creative hub 104. These embodiments relate to a technique referred to herein as "push render", where the creative hub 104 transmits instructions to another hub to render the project. The rendered material can then be combined with the relevant video data at another hub.
Transmitting the instructions, rather than the rendered material, can result in a significant reduction in the amount of data being transmitted between the different hubs. Notwithstanding this, it may be desirable, in some circumstances, to transmit rendered material in addition to, or as an alternative to, push rendering instructions -push rendering does not necessarily preclude transmitting rendered material, but provides an additional option for reducing data transfers when used.
Embed projects may be managed using a suitable project management system.
The project management system can include management of project initiation, creation of the embed artwork, tracking, masking and appearance modelling approvals and other functions. The project management system may also support various different push render phases indicating whether the embed project push render is:
* a local render, in which an embed project output video (produced by rendering the instructions in a push render) is rendered locally on a hub but has no additional workflow links such as online video creation or project management notifications;
* a blue box render, in which the project has blue boxes placed in the video material to identify the areas where the actual products would or could be digitally placed;
* a QC render to check for the quality of the tracking, masking, appearance modelling and other work carried out at the creative hub 104;
* a final QC render to check the appearance of the final embed before delivering the completed project to the client(s); and
* a delivery render, in which the rendered video is sent to the client to view online so the client can check the complete placement with audio - when approved, the final media can be delivered back to the client.
At step 3c, the creative hub 104 may: a) create a project file which contains all of the tracking, masking and appearance modelling instructions and any other data created at the creative hub 104, as well as, optionally, the embed project metadata or a subset thereof; or b) update an existing project file received from the source hub 102 with such data. Creating or updating the project file may be performed automatically, but with a manual trigger.
In some embodiments, the creative hub 104 receives the metadata created at the source hub 102 in the embed sequence video file or in a separate file. The creative hub 104 may include some or all of the metadata in the project file to support the push render workflow. For example, where the metadata identified the distribution hub 108 to be used in this project, the project file may comprise data identifying the distribution hub 108. In addition, where the metadata identified one or more frames in the source video data that corresponded to the embed sequence, the project file may include such data to facilitate reconforming.
At step 3d, the creative hub 104 then transmits or pushes video file data comprising at least the rendering instructions to the QC hub 106 for QC purposes.
The video file data sent to the QC hub 106 to initiate push rendering of the project at the QC hub 106 for QC purposes may be a package (for example a zip package) comprising: (1) a project file defining the tracking, masking, appearance modelling, embed artwork, and other data (such as effects data); (2) the embed artwork; and (3) some or all of the embed project metadata.
Alternatively, the video file data could comprise only items (1) and (3). The embed artwork could be synced to the QC hub 106 automatically as soon as it is created on the file system. For example, the creative hub 104 could transmit the embed artwork to the source hub 102 and doing so could trigger uploading of the embed artwork to the QC hub 106 associated with the project. This may further reduce the amount of data to be sent to the QC hub 106 when the push render is initiated, in that the QC hub 106 may receive at least some of the embed artwork prior to starting the QC work. Transmitting the video file data to the QC hub may be performed automatically, but with a manual trigger.
Figure 4 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into input video data to produce additional video data in accordance with some embodiments. The QC hub 106 has received the video file data transmitted from the creative hub 104 at step 3d.
At step 4a, the QC hub 106 renders the project based on the received rendering instructions. Rendering produces additional video data that contains the rendered embed artwork. Each frame of the additional video data contains the rendered embed artwork in the correct place and with the correct look as produced at the creative hub 104 and as defined in the project file. Rendering the project based on the received rendering instructions may be performed automatically.
In some embodiments, the rendering technique used is precomposite rendering, wherein only the embed artwork is rendered, with a separate alpha channel, so that it can be later composited onto (i.e. blended with) the source video data. This technique allows there to be only one stage of reading and writing original media frames: the final stage of reconform, which will be described in more detail below.
This reduces generation loss caused by decoding and re-encoding the video data. It also allows the rendered embed project video to be small.
In more detail, for a computer-generated 2D image element which stores a colour for each pixel, additional data is stored in a separate alpha channel with a value between 0 and 1. A stored value of 0 indicates that no objects in the 2D image overlapped the associated pixel and therefore that the pixel would effectively be transparent if the 2D image were blended with another image. On the other hand, a value of 1 indicates that an object in the 2D image overlapped the pixel and therefore that the pixel would be opaque if the 2D image were blended with another image.
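The standard per-pixel blend this alpha channel enables can be sketched directly (a minimal illustration of the compositing step, with made-up pixel values):

```python
# Sketch of compositing a precomposite overlay (colour plus alpha) onto
# a source pixel: alpha 0 leaves the source pixel untouched, alpha 1
# replaces it with the overlay, intermediate values blend the two.

def composite_pixel(overlay_rgb, alpha, source_rgb):
    """Blend one overlay pixel onto one source pixel."""
    return tuple(
        round(alpha * o + (1.0 - alpha) * s)
        for o, s in zip(overlay_rgb, source_rgb)
    )

source = (100, 100, 100)
print(composite_pixel((200, 0, 0), 0.0, source))  # (100, 100, 100) transparent
print(composite_pixel((200, 0, 0), 1.0, source))  # (200, 0, 0) opaque
print(composite_pixel((200, 0, 0), 0.5, source))  # (150, 50, 50) blended
```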
Thus in some embodiments, rendering results in, in effect, additional video data in the form of overlay video data in which the virtual products are rendered and any embed artwork is applied to them. The overlay video data may be viewed as part of the QC process to check the quality of the creative work performed at the creative hub 104.
Various steps may or may not be performed at the QC hub 106 depending on the push render options used and which push rendering phase has been reached.
The QC hub 106 may compute metrics of the embedded sequence, for example by measuring the area and location of the embed(s) (embedded object(s)) in each frame. These metrics may be combined with integration metrics (human judgements as to how well the embed interacts with the scene in which it is placed) into a report which can be delivered to an interested party. For instance, the embed may be in the background or the foreground, and there may or may not be actual or implied interaction between the embed and (key) actors. In embodiments, the report is published online via the online portal 116 and made available to designated parties.
In some embodiments, the overlay video data may be combined (i.e. blended) with video data derived from the source video data, such as the embed sequence or a proxy version of the source video data.
Push rendering an embed project file may create some or all of the following output files, using appropriate identifiers to identify the content provider, media family, episode etc. The QC hub 106 may produce a composite of the rendered embed artwork and the embed sequence for viewing for QC purposes. In other words, the QC hub 106 may create a sequence comprising the embed sequence with the embed artwork applied. This sequence can be used to judge the quality of the embedded artwork in each shot in isolation.
The QC hub 106 may create a sequence comprising a contiguous section of the source video data (extracted from the source proxy), with the shots from the embed sequence showing the embedded artwork that was added to those shots. Any audio in the source video data that had been removed could be added back into the scene previews at this stage. This version is used to judge the quality of embedded artwork in the context of the surrounding video and the audio.
In terms of the rendering process, the QC hub 106 may create a video data file, for example a ".mov" file, in an appropriate directory. This is the output precomposite (RGB plus alpha channel) video containing the rendered virtual product with embed artwork applied thereto. The frame numbers for this video data file correspond to those of the embed sequence.
The QC hub 106 may create a file (such as an XML file) which comprises per-frame metrics and a digital image file (such as a .jpg file) which is a representative metrics image of the embedded sequence with a blue border.
The QC hub 106 may also create a file that specifies the relevant push render phase (in this case, the QC phase). This is a copy of the project file or other video file data (such as the zip package) that was rendered.
The QC hub 106 may also create one or more branded previews (for example in MP4 format) that may be sent to the online portal 116 for preview by the client(s).
The QC hub 106 may also create a video data file, such as a ".mov" file, for the output composite (RGB) video containing the virtual product(s) rendered into the embed sequence. This process may involve digital compositing, in which multiple digital images are combined to make a final image; the images being combined are frames in the precomposite video and corresponding frames in the embed sequence.
This may comprise alpha blending, where the contribution of a given pixel in the precomposite video to a corresponding pixel in the composited video is based on the opacity value stored in the alpha channel in association with the pixel. Where the opacity value of a foreground pixel is 0 (i.e. where the stored value associated with that pixel is 0), the corresponding pixel is completely transparent in the foreground; where the opacity value of a foreground pixel is 1 (i.e. where the stored value associated with that pixel is 1), the corresponding pixel is completely opaque in the foreground.
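The alpha blending rule just described can be written per colour channel as out = opacity × fg + (1 − opacity) × bg. A minimal sketch follows; pixel tuples and the function name are illustrative assumptions:

```python
def alpha_blend(foreground, opacity, background):
    """Composite one precomposite (foreground) RGB pixel over a source
    (background) RGB pixel: out = opacity*fg + (1 - opacity)*bg per channel.

    opacity is the stored alpha value, between 0 (fully transparent
    foreground) and 1 (fully opaque foreground)."""
    return tuple(opacity * f + (1 - opacity) * b
                 for f, b in zip(foreground, background))
```

With opacity 0 the background pixel passes through unchanged; with opacity 1 the foreground pixel completely replaces it, matching the two extremes described above.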
At step 4b, an operator uses the visual QC module 106b at the QC hub 106 to perform a visual QC check on the sequences that have been processed at the QC hub 106. This may involve trained operators viewing the rendered material and looking for errors. If visual faults are detected, they can be communicated back to the creative hub 104, where they can be corrected (steps 4c and 4d). The cycles of correction may be recorded in the project file. The QC check is principally performed manually, although some aspects may be automated.
At step 4f, when the material has finally passed quality control (step 4e), the QC hub 106 transmits video file data to the distribution hub 108. The video file data may comprise a push render (which enables the distribution hub 108 to generate associated video material, rather than transmitting the video material itself), or video material that has already been rendered or otherwise produced.
Similar to the video file data transmitted to the QC hub 106 by the creative hub 104 at step 3d to initiate push rendering at the QC hub 106, the video file data sent to the distribution hub 108 at step 4f may be a zip package comprising items (1) to (3) specified above or may be a single file containing items (1) and (3) of the zip package. The embed artwork could likewise have been synced to the distribution hub 108 automatically as soon as it was created on the file system to reduce further the amount of data to be sent to the distribution hub 108 when the push render is initiated.
It will be appreciated that the video file data transmitted to the distribution hub at step 4f may comprise different data to that in the video file data transmitted by the creative hub 104 at step 3d. For example, as part of the QC process, the creative hub 104 may have updated at least some of the data in the video file data and communicated at least some of the updated data to the QC hub 106 at step 4d. The QC hub 106 could then transmit video file data including the updated data to the distribution hub 108 at step 4f. Alternatively or additionally, the QC hub 106 may update some or all of the data in the video file data transmitted by the creative hub 104 at step 4d itself and include the updated data in the video file data transmitted to the distribution hub 108 at step 4f. The transmission of the video file data to the distribution hub 108 may be performed automatically, but with a manual input.
Figure 5 is a sequence timing diagram showing the flow of messages associated with adding one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
At step 5a, the distribution hub 108 receives the video file data from the QC hub 106.
In some embodiments, the distribution hub 108 has also already received the source video data, or data derived therefrom, transmitted by the source hub at step 2k.
At step 5b, the distribution hub 108 obtains video material comprising the one or more additional video objects based on the video file data. This may involve rendering the video material if the video file data comprises instructions for generating the video material. Rendering may be similar to or the same as the rendering carried out at the QC hub 106 and the distribution hub 108 may create at least some of the same data files as those created at the QC hub 106 during the QC push render phase, albeit at a final QC render and/or delivery render phase.
Alternatively, the video file data may already contain the video material, in which case the distribution hub 108 may extract the video material from the video file data. Rendering may be performed automatically.
At step 5c, the distribution hub 108 combines the rendered video material (which includes the embedded object(s)) with the source video data to form a completed programme, or output video data, which contains the digitally placed product(s). Reconform is performed automatically and may be initiated from within an appropriate software application or by an online user with appropriate credentials.
The output video data is then suitable for broadcasting or other dissemination, such as the sale of DVDs, or downloading programmes via the Internet.
In more detail, reconform takes the result of push rendering one or more embed project files as described above. The precomposite (overlay) video data produced by the push render is blended with or composited onto the source video data, using the metadata associated with the embed project to place the rendered product(s) in the correct frames in the source video data. Within the frame range, each frame is read from the source video data, and any precomposite outputs for that frame provided by the push rendered projects are overlaid on the source video data, in an internal memory buffer. Finally the frame is exported. To determine which embed to overlay, the reconform software looks at the metadata for the push render project.
The relevant data is that which specifies, for each shot from the embed project, the start frame of the shot in the timeline of the embed sequence, the start frame of the shot in the timeline of the source video data, and the number of frames in the shot.
From this information, each frame may be mapped between the source video data and the precomposite video. The relevant frames in the source video data may, however, be identified in the metadata in another manner.
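Assuming the metadata takes the form of per-shot records as just described, the frame mapping can be sketched as follows. The tuple layout and function name are illustrative assumptions; the patent leaves the concrete metadata format open:

```python
def embed_to_source_frame(shots, embed_frame):
    """Map a frame number in the embed-sequence timeline to the corresponding
    frame number in the source timeline.

    shots: list of (embed_start, source_start, num_frames) records, one per
    shot in the embed project, mirroring the per-shot metadata described
    in the text."""
    for embed_start, source_start, num_frames in shots:
        offset = embed_frame - embed_start
        if 0 <= offset < num_frames:
            return source_start + offset
    return None  # frame not covered by any shot in the push-rendered project
```

During reconform, a lookup like this determines onto which source frame each precomposite frame should be overlaid.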
In some embodiments, reconform may be performed by a cluster of servers or other suitable computing devices. The reconform then commences on the cluster.
A new version of the source video data is thereby created which includes the modified frames to produce the final video material or output video data.
In the embodiments described above, the amount of user interaction involved in push rendering a project is minimal. One or more operators specify which embed project file to render, and the phase of the push render workflow (for example, the QC stage or the final delivery stage). All details of the render itself are, in effect, automatically performed based on rendering instructions in the project file or other push render data.
In some embodiments, other parts of the metadata associated with the project are used in the process of rendering the project. For example, online videos may be created and automatically assigned permissions that allow only the correct client(s), whose details are included in the metadata, to view them.
In some embodiments, the project file comprises data specifying one or more locations on the file system to which the project is to be rendered. In such embodiments, the workflow may be simplified because the push render is simultaneously a rendering solution and a distribution solution. This allows the process of supporting the workflow to be achieved in one step.
In some cases, there may be information that will be part of the report on the project, but which is sensitive to the client and which it would therefore be preferred not to send to the creative hub 104 or the QC hub 106. By rendering the project at the source hub 102 or distribution hub 108, such secret information need not be sent to the creative hub 104 or the QC hub 106. Nevertheless, if it is desired to render at the creative hub 104 or the QC hub 106, the secret information could be automatically omitted from any report created by the render, or the report itself omitted.
Some embodiments provide a feedback mechanism in relation to the push render workflow, because the project may be pushed from the creative hub 104 or QC hub 106 to another hub, such as the source hub 102 or the distribution hub 108, which may be on the other side of the world. Such embodiments provide feedback on the progress and success or failure of the rendering by feeding back the status of the render to software running at the creative hub 104 and/or QC hub 106.
In some embodiments, a project may be push rendered to a hub other than the one specified as the QC hub 106 or distribution hub 108. For example, it may be desired to render an embed project generated by the creative hub 104 at a hub other than hubs 102, 104, 106, 108. The other hub may have a proxy version (low resolution and compressed) of relevant parts of the source video data. From this, it would be possible to render an embed project through local push rendering. This could be used as part of a QC process, viewing the result of the rendering to ensure that the project has been completed satisfactorily.
As explained above, the video processing system 100 may comprise a plurality of different source hubs 102, creative hubs 104, QC hubs 106 and/or distribution hubs 108. Where a single piece of source material gives rise to different embed projects targeted at different countries, it may be desirable to transmit the source video data in advance to respective distribution hubs 108 in those countries and then render the projects at those distribution hubs 108. An example may be an episode of a popular US episodic series shown in Europe. In Poland, it may be required to incorporate a Polish brand, but in Germany, in the same scenes, it may be required to position a German brand. In this example, the source hub 102 transmits the source video data to distribution hubs 108 in Poland and Germany and transmits embed sequences to both the Polish and German creative hubs 104. This may significantly reduce the time between obtaining client approval and final delivery at the correct system for broadcast or distribution.
Embodiments described above provide significant data transfer savings, in that the creative hub 104 and/or the QC hub 106 only transmits instructions on what to do to embed and then render the additional video data, rather than transmitting the rendered embed sequence itself with the embedded objects. Such embodiments do not preclude transfer of some, or all, of the rendered embed sequence, but it is preferably not transmitted in such embodiments.
Where the embed instructions are sent from the creative hub 104 to the source hub 102, QC hub 106 or distribution hub 108, these instructions may be interpreted locally by similar software as was used at the creative hub 104.
Figure 6 is a diagram that illustrates schematically a method for incorporating one or more additional video objects into source video data to produce output video data in accordance with some embodiments.
At step S601, source video data is retrieved. The source video data is made up of a number of frames of image data. Segments (A, B, C, D, E, ...) of the source video data are identified. For example, each segment may correspond to a shot in the source video data. Each segment comprises a number of frames of video material.
For example, segment A comprises a number of frames of video material between frame identifiers "a" and "b", segment B comprises a number of frames of video material between frame identifiers "b" and "c" and so on. The frame references may be, for example, frame numbers or timecodes associated with the start and end frames of each segment.
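The segment boundaries just described might be represented as pairs of start and end frame identifiers, with each segment starting where the previous one ends. The concrete frame numbers in this sketch are purely illustrative:

```python
# Each segment spans the frames between two identifiers, analogous to the
# a-b, b-c, ... boundaries in the text (frame numbers here are invented).
segments = {"A": (0, 120), "B": (120, 300), "C": (300, 450),
            "D": (450, 600), "E": (600, 720)}

def segment_length(name):
    """Number of frames of video material in the named segment."""
    start, end = segments[name]
    return end - start
```

With this shape, adjacent segments share a boundary identifier, so the whole source timeline can be reassembled from the segment list alone.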
At step S602, one or more of the identified segments within the source video data are selected for the inclusion of one or more additional video objects. For example, segments B, C and E may be selected for the inclusion of the one or more additional video objects.
At step S603, an intermediate working version of the source video data is created. The intermediate working version includes at least video material corresponding to the selected segments (segments B, C and E). Metadata which identifies at least one frame within the source video data which corresponds to the selected segments is created. The metadata identifies the frames in the source video data to which segments B, C and E correspond by including the frame identifiers that correspond to the start and end of each segment: b-c; c-d; and e-f respectively.
At least the intermediate working version is transmitted to a remote system for the creation of additional video data for including the one or more additional video objects in the output video data. In some cases, the metadata may also be transmitted to the remote system.
At step S604, the video file data associated with the additional video data is received after it has been created using the intermediate working version transmitted to the remote system. In some embodiments, the additional video data is the intermediate working version with the one or more additional video objects added thereto. Segments B, C and E in the intermediate working version are denoted as segments B', C' and E' in the additional video data to indicate that the one or more additional video objects have been added thereto. Metadata is retrieved which identifies at least one frame within the source video data to which the additional video data is to be added. As depicted in Figure 6, the retrieved metadata includes the frame identifiers that correspond to the start and end of each segment B', C' and E' in the additional video data: b-c; c-d; and e-f respectively.
At step S605, the metadata can be used to determine the frames within the source video data to which the additional video data is to be added.
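The splice that the metadata drives can be sketched as follows: segments of the source timeline whose frame ranges appear in the metadata are replaced by the corresponding embedded segments, while all other frames pass through unchanged. The frame lists and the range-keyed dictionary are illustrative assumptions about the data shapes:

```python
def reconform(source_frames, replacements):
    """Produce the output video by splicing replacement segments into the
    source timeline.

    source_frames: the full list of source frames, in timeline order.
    replacements: {(start, end): frames} mapping half-open frame ranges in
    the source timeline to the embedded segments that replace them."""
    output = list(source_frames)
    for (start, end), new_frames in replacements.items():
        assert end - start == len(new_frames), "segment lengths must match"
        output[start:end] = new_frames
    return output
```

Frames outside the replaced ranges (segments A and D in the example) are carried over from the source untouched, which is what keeps generation loss confined to the embedded segments.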
At step S606, at least the additional video data, the source video data and the retrieved metadata are used to produce the output video data. In particular, the output video data includes the original segments A and D that did not form part of the intermediate working version. The segments B', C' and E' in the additional video data, to which the one or more additional video objects have been added, have been incorporated into the source video data and have replaced the corresponding original segments B, C and E.

Figure 7 is a schematic diagram showing a video processing system 700 in accordance with some embodiments.
The video processing system 700 is similar to the video processing system 100 described above in relation to, and as depicted in, Figure 1. Like features are indicated using the same reference numerals, incremented by 600, and a detailed description of such features is omitted here.
In the video processing system 700, the functionality of the source hub 702 and the distribution hub 708 are combined into a single entity 714. Entity 714 thus includes at least the video data analysis module 702a, segment sorting module 702b, digital product placement assessment module 702c, rendering module 708a and reconforming module 708b.
Figure 8 is a schematic diagram showing a video processing system 800 in accordance with some embodiments.
The video processing system 800 is similar to the video processing system 100 described above in relation to, and as depicted in, Figure 1. Like features are indicated using the same reference numerals, incremented by 700, and a detailed description of such features is omitted here.
In the video processing system 800, the digital product placement assessment module 102c of the source hub 102 is moved into the online portal 816; the online portal 816 therefore includes a digital product placement assessment module 802c which performs the same or similar functions as the digital product placement assessment module 102c of the source hub 102. In such embodiments, the embed sequence may be created and be placed in the cloud, for example at low resolution, which could be used to produce mock-ups of the product placement opportunity locally at customer premises.
Although, in the video processing system 800 depicted in Figure 8, only the digital product placement assessment module 102c of the source hub 102 is moved into the online portal 816, embodiments are envisaged in which one or more of the video data analysis module 102a, segment sorting module 102b, and the digital product placement assessment module 102c of the source hub 102 are moved into the online portal 816. For example, the segment sorting module 102b could be placed into the online portal 816, allowing characters and locales to be annotated at customer premises.
In some embodiments, all segments in the source video data may be placed into the online portal 816. This may not be as secure as uploading only some, selected segments. However, the form of the video material, after pre-analysis/segment sorting, may not be in the same linear timeline as the source video data. This is because pre-analysis/segment sorting may group like scenes, camera angles and/or locales that may appear at different parts of the programme together.
Thus, even if a determined third party were to get hold of the video material, they would have to undo the pre-analysis/segment sorting, and edit the video material back together into its original form. This offers some form of security.
Figure 9 is a schematic diagram showing a video processing system 900 in accordance with some embodiments.
The video processing system 900 is similar to the video processing system 100 described above in relation to, and as depicted in, Figure 1. Like features are indicated using the same reference numerals, incremented by 800, and a detailed description of such features is omitted here.
In the video processing system 900, all of the processing performed at or by the source hub 102 and the distribution hub 108 has been pushed into the online portal 916. The video processing system 900 allows pre-analysis, segment sorting, assessment, output rendering and reconforming all to be carried out in or from the online portal 916. In such embodiments, cloud security should be carefully considered and increased where possible, as both the source video data and the output video data would be contained within the online portal 916, either of which may be desired by unauthorised third parties.
Figure 10 is a schematic diagram showing a system in accordance with some embodiments. In particular, Figure 10 illustrates schematically various components of the source hub 102.
In some embodiments, the components of the source hub 102 are all located on a suitable subnet, on the same LAN. In some embodiments, the source hub connects to other hubs 104, 106, 108 in the video processing system 100 via a VPN.
The source hub 102 comprises a plurality of workstations 1020a, 1020b. The workstations 1020a, 1020b are connected to a network switch 1022 via suitable connections, for example via 1Gb Ethernet connections.
The source hub 102 includes a cluster 1024 of high speed parallel processing graphics processing unit (GPU)-enabled computers for real-time video processing which are also connected to the switch 1022 via suitable connections, for example via 10Gb Ethernet connections.
The source hub 102 includes primary and backup storage systems 1026, 1028.
The storage systems 1026, 1028 store media and other data associated with projects that are processed in the video processing system 100. The data storage systems 1026, 1028 may store ingested source video data, output video data and other data such as metadata files (for example in XML format), video proxies, reports and assets used in processing video data. The data storage systems 1026, 1028 serve both the workstations 1020a, 1020b and the cluster 1024 and are connected to the switch 1022 via suitable connections, such as 10 Gb Ethernet connections.
The source hub 102 includes a File Transfer Protocol (FTP) server 1030 for transferring files such as media files and associated files, which is also connected to the switch 1022 via a suitable connection, for example a 1Gb Ethernet connection.
The source hub 102 may include a media capture device 1032, such as a video tape recorder (VTR) 1032 for importing and exporting video material. The media capture device 1032 is connected to the switch 1022 via a suitable connection.
The switch 1022 is connected to the data communications network 110 via a suitable connection which may include a VPN firewall to allow tunnelling into the hub.
The above embodiments are to be understood as illustrative examples of the invention. Further embodiments of the invention are envisaged.
Embodiments have been described in which the creative hub 104 receives the embed sequence from the source hub 102 and creates push render instructions for the project associated with the embed sequence. However, embodiments are envisaged in which the input video data retrieved by the creative hub 104 is not the intermediate version and, indeed, embodiments relating to push render are not limited to receiving video data from a source hub 102. In some embodiments, the input video data could be retrieved from an entity outside the video processing system 100, 700, 800, 900 and the input video data may not have been subject to the analysis, segment sorting and assessment described above.
Embodiments have been described above in which the creative hub 104 comprises various creative modules 104a, 104b, 104c which are used to analyse input video data and to generate instructions for generating additional video data comprising the one or more additional video objects. In some embodiments, the source hub 102 may also comprise some or all such modules. The source hub 102 may use such modules to create preview imagery that may be a closer resemblance to the final result.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (23)

  1. 1. A method for use in incorporating one or more additional video objects into input video data to produce output video data, the method comprising: retrieving the input video data; analysing the input video data to identify one or more desired visual attributes for the one or more additional video objects to possess when incorporated into the input video data; creating instructions for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes; and transmitting at least the instructions to a first remote system that has already received at least part of the input video data, the instructions being useable by the remote system to generate the additional video data for incorporation into the input video data.
  2. 2. A method according to claim 1, comprising receiving the input video data from a second remote system, the second remote system being different from the first remote system.
  3. 3. A method according to claim 2, wherein the first remote system receives said at least part of the input video data from the second remote system.
  4. 4. A method according to claim 2 or 3, comprising transmitting additional media data associated with the one or more additional video objects for use in generating the additional video data to the second remote system.
  5. 5. A method according to any preceding claim, comprising transmitting additional media data associated with the one or more additional video objects for use in generating the additional video data to the first remote system.
  6. 6. A method according to claim 4 or 5, wherein generating the additional video data comprises generating one or more virtual products corresponding to the one or more additional video objects and applying the additional media data to the one or more virtual products.
  7. 7. A method according to any of claims 4 to 6, comprising transmitting at least part of the additional media data prior to transmitting the instructions.
  8. 8. A method according to any preceding claim, wherein the one or more desired visual attributes include the position of the one or more additional video objects in the input video data, the method comprising transmitting instructions specifying the horizontal and vertical position at which the one or more additional video objects are to be positioned in the additional video data.
  9. 9. A method according to any preceding claim, wherein the one or more desired visual attributes include obscuring at least part of the one or more additional video objects, the method comprising: transmitting instructions specifying that at least part of the one or more additional video objects is to be masked such that one or more foreground objects appear in front of the at least part of the one or more additional video objects in the output video data.
  10. 10. A method according to any preceding claim, wherein the one or more desired visual attributes include the visual appearance of the one or more additional video objects, the method comprising: transmitting instructions specifying one or more appearance effects to be used in relation to the one or more additional video objects when generating the additional video data.
  11. 11. A method according to any preceding claim, wherein the first remote system has already received all of the input video data.
  12. 12. A method according to any preceding claim, for use in a video processing system, the method comprising transmitting data identifying one or more locations in the video processing system at which the output video data is to be stored when produced.
  13. 13. A method according to any preceding claim, wherein the instructions comprise overlay generation instructions for generating a video overlay comprising the one or more additional video objects.
  14. 14. A method according to any preceding claim, comprising transmitting the instructions to the first remote system via the Internet.
  15. 15. A method according to any preceding claim, wherein the input video data comprises an intermediate working version of source video data, the intermediate working version including at least video material corresponding to one or more selected segments within the source video data, the selected one or more segments having been selected for the inclusion of the one or more additional video objects.
  16. 16. A method according to claim 15, comprising receiving metadata associated with the intermediate working version, the metadata identifying at least one frame within the source video data which corresponds to the selected one or more segments.
  17. 17. A method according to claim 16, comprising transmitting the received metadata or metadata derived therefrom to the first remote system, the transmitted metadata identifying at least one frame within the source video data which corresponds to the additional video data.
  18. 18. A system comprising apparatus for use in incorporating one or more additional video objects into input video data to produce output video data, the apparatus being configured for: retrieving the input video data; analysing the input video data to identify one or more desired visual attributes for the one or more additional video objects to possess when incorporated into the input video data; creating instructions for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes; and transmitting at least the instructions to a first remote system that has already received at least part of the input video data, the instructions being useable by the remote system to generate the additional video data for incorporation into the input video data.
  19. 19. A computer program adapted to perform a method for use in incorporating one or more additional video objects into input video data to produce output video data, the method comprising: retrieving the input video data; analysing the input video data to identify one or more desired visual attributes for the one or more additional video objects to possess when incorporated into the input video data; creating instructions for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes; and transmitting at least the instructions to a first remote system that has already received at least part of the input video data, the instructions being useable by the remote system to generate the additional video data for incorporation into the input video data.
  20. 20. A computer program product comprising a non-transitory computer-readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to perform a method for use in incorporating one or more additional video objects into input video data to produce output video data, the method comprising: retrieving the input video data; analysing the input video data to identify one or more desired visual attributes for the one or more additional video objects to possess when incorporated into the input video data; creating instructions for generating additional video data comprising the one or more additional video objects having the one or more desired visual attributes; and transmitting at least the instructions to a first remote system that has already received at least part of the input video data, the instructions being useable by the remote system to generate the additional video data for incorporation into the input video data.
  21. 21. A method for incorporating one or more additional video objects into input video data to produce output video data, the method comprising: receiving instructions from a first remote system for generating additional video data comprising the one or more additional video objects having one or more desired visual attributes after already having received at least part of the input video data; generating the additional video data based at least in part on the received instructions; and incorporating the additional video data into the input video data, when received, to produce the output video data including the one or more additional video objects having the one or more desired visual attributes.
  22. 22. A method according to claim 21, comprising receiving the input video data from a second remote system, the second remote system being different from the first remote system.
	23. 23. A method according to claim 22, comprising receiving additional media data from the second remote system, the additional media data being associated with the one or more additional video objects for use in generating the additional video data.
	24. 24. A method according to any of claims 21 to 23, comprising receiving additional media data from the first remote system, the additional media data being associated with the one or more additional video objects for use in generating the additional video data.
	25. 25. A method according to claim 23 or 24, wherein said generating comprises generating one or more virtual products and applying the additional media data to the one or more virtual products.
	26. 26. A method according to any of claims 23 to 25, comprising receiving at least part of the additional media data prior to receiving the instructions.
	27. 27. A method according to any of claims 23 to 26, wherein the one or more desired visual attributes include the position of the one or more additional video objects in the input video data, the method comprising: receiving instructions specifying the horizontal and vertical position at which the one or more additional video objects are to be positioned in the additional video data.
	28. 28. A method according to any of claims 23 to 27, wherein the one or more desired visual attributes include obscuring at least part of the one or more additional video objects, the method comprising: receiving instructions specifying that at least part of the one or more additional video objects is to be masked such that one or more foreground objects appear in front of the at least part of the one or more additional video objects in the output video data.
	29. 29. A method according to any of claims 23 to 28, wherein the one or more desired visual attributes include the visual appearance of the one or more additional video objects, the method comprising: receiving instructions specifying one or more appearance effects to be used in relation to the one or more additional video objects when generating the additional video data.
	30. 30. A method according to any of claims 21 to 29, comprising receiving the instructions after already having received all of the input video data.
	31. 31. A method according to any of claims 21 to 30, for use in a video processing system, the method comprising receiving data identifying one or more locations in the video processing system at which the output video data is to be stored when produced.
	32. 32. A method according to claim 31, comprising storing the output video data in the identified one or more locations.
	33. 33. A method according to any of claims 21 to 32, wherein the instructions comprise overlay generation instructions for generating a video overlay comprising the one or more additional video objects.
	34. 34. A method according to claim 33, comprising generating the video overlay.
	35. 35. A method according to claim 34, comprising combining the video overlay with the input video data to produce the output video data.
	36. 36. A method according to any of claims 21 to 35, comprising receiving the instructions from the first remote system via the Internet.
	37. 37. A method according to any of claims 21 to 36, wherein the input video data comprises an intermediate working version of source video data, the intermediate working version including at least video material corresponding to one or more selected segments within the source video data, the selected one or more segments having been selected for the inclusion of the one or more additional video objects.
	38. 38. A method according to claim 37, comprising receiving metadata associated with the intermediate working version, the metadata identifying at least one frame within the source video data which corresponds to the selected one or more frames.
	39. 39. A method according to claim 38, comprising transmitting the received metadata or metadata derived therefrom, the transmitted metadata identifying at least one frame within the source video data which corresponds to the additional video data.
	40. 40. A system comprising apparatus for incorporating one or more additional video objects into input video data to produce output video data, the apparatus being arranged for: receiving instructions from a first remote system for generating additional video data comprising the one or more additional video objects having one or more desired visual attributes after already having received at least part of the input video data; generating the additional video data based at least in part on the received instructions; and incorporating the additional video data into the input video data, when received, to produce the output video data including the one or more additional video objects having the one or more desired visual attributes.
	41. 41. A computer program adapted to perform a method for incorporating one or more additional video objects into input video data to produce output video data, the method comprising: receiving instructions from a first remote system for generating additional video data comprising the one or more additional video objects having one or more desired visual attributes after already having received at least part of the input video data; generating the additional video data based at least in part on the received instructions; and incorporating the additional video data into the input video data, when received, to produce the output video data including the one or more additional video objects having the one or more desired visual attributes.
	42. 42. A computer program product comprising a non-transitory computer-readable storage medium having computer readable instructions stored thereon, the computer readable instructions being executable by a computerized device to cause the computerized device to perform a method for incorporating one or more additional video objects into input video data to produce output video data, the method comprising: receiving instructions from a first remote system for generating additional video data comprising the one or more additional video objects having one or more desired visual attributes after already having received at least part of the input video data; generating the additional video data based at least in part on the received instructions; and incorporating the additional video data into the input video data, when received, to produce the output video data including the one or more additional video objects having the one or more desired visual attributes.
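Viewed as a pipeline, the receiver-side claims (21 together with 27 and 28) describe a system that takes placement instructions, positions an additional video object at an instructed horizontal/vertical location, and masks any pixels that foreground objects should occlude. The following Python sketch is illustrative only: the instruction format, function names and frame representation are all invented here for clarity, as the patent does not specify any particular data structures or wire format.

```python
# Illustrative sketch: composite an "additional video object" into a frame
# according to received placement instructions, honouring a foreground
# occlusion mask (cf. claims 27 and 28). A frame is a list of rows of
# (R, G, B) tuples; a real system would operate on decoded video frames.

def make_frame(width, height, colour=(0, 0, 0)):
    """Build a solid-colour frame of the given size."""
    return [[colour for _ in range(width)] for _ in range(height)]

def composite(frame, obj, instructions):
    """Place `obj` at the instructed position, skipping pixels that the
    instructions mark as occluded so foreground objects stay in front."""
    x0, y0 = instructions["position"]  # instructed horizontal/vertical position
    occluded = set(map(tuple, instructions.get("occluded_pixels", [])))
    out = [row[:] for row in frame]    # leave the input frame untouched
    for dy, obj_row in enumerate(obj):
        for dx, pixel in enumerate(obj_row):
            x, y = x0 + dx, y0 + dy
            if (x, y) in occluded:     # masked: foreground appears in front
                continue
            if 0 <= y < len(out) and 0 <= x < len(out[0]):
                out[y][x] = pixel
    return out

frame = make_frame(8, 4)                       # 8x4 black background frame
obj = [[(255, 0, 0)] * 2 for _ in range(2)]    # a 2x2 red "product" object
instructions = {"position": (3, 1), "occluded_pixels": [(3, 1)]}
result = composite(frame, obj, instructions)
print(result[1][3])  # occluded pixel keeps the background: (0, 0, 0)
print(result[1][4])  # placed object pixel: (255, 0, 0)
```

In a full implementation the instructions would of course carry far more than a static position and mask, e.g. per-frame tracking data, appearance effects (claim 29) and overlay-generation parameters (claim 33), and compositing would use alpha blending rather than whole-pixel replacement.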
GB1221327.8A 2012-11-27 2012-11-27 Producing video data Active GB2508242B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB1221327.8A GB2508242B (en) 2012-11-27 2012-11-27 Producing video data
US14/091,294 US9402096B2 (en) 2012-11-27 2013-11-26 System and method of producing video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1221327.8A GB2508242B (en) 2012-11-27 2012-11-27 Producing video data

Publications (3)

Publication Number Publication Date
GB201221327D0 GB201221327D0 (en) 2013-01-09
GB2508242A true GB2508242A (en) 2014-05-28
GB2508242B GB2508242B (en) 2016-08-03

Family ID: 47560747

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1221327.8A Active GB2508242B (en) 2012-11-27 2012-11-27 Producing video data

Country Status (2)

Country Link
US (1) US9402096B2 (en)
GB (1) GB2508242B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11521656B2 (en) * 2019-05-24 2022-12-06 Mirriad Advertising Plc Incorporating visual objects into video material

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10462499B2 (en) * 2012-10-31 2019-10-29 Outward, Inc. Rendering a modeled scene
WO2014071080A1 (en) 2012-10-31 2014-05-08 Outward, Inc. Delivering virtualized content
GB2520334B (en) 2013-11-18 2015-11-25 Helen Bradley Lennon A video broadcast system and a method of disseminating video content
US11082738B2 (en) * 2015-08-21 2021-08-03 Microsoft Technology Licensing, Llc Faster determination of a display element's visibility
US9972140B1 (en) * 2016-11-15 2018-05-15 Southern Graphics Inc. Consumer product advertising image generation system and method
US10636178B2 (en) * 2017-09-21 2020-04-28 Tiny Pixels Technologies Inc. System and method for coding and decoding of an asset having transparency
KR102478426B1 (en) * 2018-03-16 2022-12-16 삼성전자주식회사 Method for detecting black-bar included in video content and electronic device thereof
CN111859017A (en) * 2020-07-21 2020-10-30 南京智金科技创新服务中心 Digital video production system based on internet big data

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070214476A1 (en) * 2006-03-07 2007-09-13 Sony Computer Entertainment America Inc. Dynamic replacement of cinematic stage props in program content
WO2008008947A2 (en) * 2006-07-14 2008-01-17 Vulano Group, Inc. System for dynamic personalized object placement in a multi-media program
WO2008010203A2 (en) * 2006-07-16 2008-01-24 Seambi Ltd. System and method for virtual content placement
WO2009101623A2 (en) * 2008-02-13 2009-08-20 Innovid Inc. Inserting interactive objects into video content
US20120140025A1 (en) * 2010-12-07 2012-06-07 At&T Intellectual Property I, L.P. Dynamic Modification of Video Content at a Set-Top Box Device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005114519A2 (en) 2004-05-14 2005-12-01 Delivery Agent, Inc. Product and presentation placement system and method
US20080126226A1 (en) 2006-11-23 2008-05-29 Mirriad Limited Process and apparatus for advertising component placement
US20080304805A1 (en) 2007-06-06 2008-12-11 Baharav Roy Preparing and presenting a preview of video placement advertisements
US8307395B2 (en) 2008-04-22 2012-11-06 Porto Technology, Llc Publishing key frames of a video content item being viewed by a first user to one or more second users
GB0809631D0 (en) 2008-05-28 2008-07-02 Mirriad Ltd Zonesense
US8860865B2 (en) * 2009-03-02 2014-10-14 Burning Moon, Llc Assisted video creation utilizing a camera

Also Published As

Publication number Publication date
GB201221327D0 (en) 2013-01-09
GB2508242B (en) 2016-08-03
US9402096B2 (en) 2016-07-26
US20140150013A1 (en) 2014-05-29

Similar Documents

Publication Publication Date Title
US11164605B2 (en) System and method of producing certain video data
US9402096B2 (en) System and method of producing video data
US9294822B2 (en) Processing and apparatus for advertising component placement utilizing an online catalog
Teodosio et al. Salient video stills: Content and context preserved
US7689062B2 (en) System and method for virtual content placement
US20020094189A1 (en) Method and system for E-commerce video editing
WO2021135334A1 (en) Method and apparatus for processing live streaming content, and system
US9224156B2 (en) Personalizing video content for Internet video streaming
US20110107368A1 (en) Systems and Methods for Selecting Ad Objects to Insert Into Video Content
JP2004304791A (en) Method and apparatus for modifying digital cinema frame content
WO2013152439A1 (en) Method and system for inserting and/or manipulating dynamic content for digital media post production
KR20210083690A (en) Animation Content Production System, Method and Computer program
Langlotz et al. AR record&replay: situated compositing of video content in mobile augmented reality
CN112262570B (en) Method and computer system for automatically modifying high resolution video data in real time
CN112153472A (en) Method and device for generating special picture effect, storage medium and electronic equipment
JP2020101847A (en) Image file generator, method for generating image file, image generator, method for generating image, image generation system, and program
Gaddam et al. Camera synchronization for panoramic videos
US11706375B2 (en) Apparatus and system for virtual camera configuration and selection
KR20230018571A (en) Image photographing solution of extended reality based on virtual production system
Morvan et al. Handling occluders in transitions from panoramic images: A perceptual study
Hermawati et al. Virtual Set as a Solution for Virtual Space Design in Digital Era
KR20060035033A (en) System and method for producing customerized movies using movie smaples
Chen et al. The Replate
Dupras et al. UHD Introduction at the Canadian Broadcasting Corporation: A Case Study
Niamut et al. High-resolution video, more is more?

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20150528 AND 20150603