WO2000072574A9 - Architecture for controlling the flow and transformation of multimedia data - Google Patents

Architecture for controlling the flow and transformation of multimedia data

Info

Publication number
WO2000072574A9
Authority
WO
WIPO (PCT)
Prior art keywords
digital media
media assets
event
content
context
Prior art date
Application number
PCT/US2000/013882
Other languages
English (en)
Other versions
WO2000072574A2 (fr)
WO2000072574A3 (fr)
Inventor
Alan S Ramadan
Jeffrey E Sussna
Matthew Alan Brocchini
William B Schaefer IV
John Taylor
Original Assignee
Quokka Sports Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quokka Sports Inc filed Critical Quokka Sports Inc
Priority to AU51478/00A priority Critical patent/AU5147800A/en
Publication of WO2000072574A2 publication Critical patent/WO2000072574A2/fr
Publication of WO2000072574A3 publication Critical patent/WO2000072574A3/fr
Publication of WO2000072574A9 publication Critical patent/WO2000072574A9/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44204Monitoring of content usage, e.g. the number of times a movie has been viewed, copied or the amount which has been watched
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4516Management of client data or end-user data involving client characteristics, e.g. Set-Top-Box type, software version or amount of memory available
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/454Content or additional data filtering, e.g. blocking advertisements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8126Monomedia components thereof involving additional data, e.g. news, sports, stocks, weather forecasts
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/835Generation of protective data, e.g. certificates
    • H04N21/8352Generation of protective data, e.g. certificates involving content or source identification data, e.g. Unique Material Identifier [UMID]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/84Generation or processing of descriptive data, e.g. content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8543Content authoring using a description language, e.g. Multimedia and Hypermedia information coding Expert Group [MHEG], eXtensible Markup Language [XML]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/162Authorising the user terminal, e.g. by paying; Registering the use of a subscription channel, e.g. billing
    • H04N7/165Centralised control of user terminal ; Registering at central

Definitions

  • the present invention relates to the field of broadcasting events using multiple media; more particularly, the present invention relates to controlling the acquisition, production, flow, and presentation of multimedia data using a structure such as a context map to indicate relationships between event data and contexts.
  • Individual viewers may wish to view events through a perspective that is different from that of the broadcasters. For instance, if a broadcaster is showing a sporting event on television, an individual viewer may wish to follow an individual competitor, sponsor, etc. However, the individual viewer does not have control over the particular content that is broadcast by the broadcaster and cannot indicate the content they desire to see as an event is being broadcast.
  • current broadcasting technologies are unable to organize and transmit the rich diversity of experiences and information sources that are available to participants or direct spectators at the event.
  • a live spectator or participant at a sporting event is able to simultaneously perceive a wide range of information, such as watching the event, listening to it, reading the program, noticing weather changes, hearing the roar of the crowd, reading the scoreboard, discussing the event with other spectators, and more.
  • the spectator/participant is immersed in information relating to the event.
  • a knowledgeable spectator knows how to direct his attention within this flood of information to maximize his experience of the event.
  • the viewer of a television broadcast does not and cannot experience this information immersion. That is, television lacks the ability to emulate for the viewer the experience of attending or participating in the particular event.
  • the method comprises receiving digital media assets corresponding to remotely captured data from the event, converting the digital media assets into content by using a context map to organize the digital media assets and indicate relationships between contexts associated with the digital media assets, and distributing the content for delivery to a plurality of delivery mechanisms.
  • Figure 1 illustrates one embodiment of a platform.
  • Figure 1A illustrates the data flow through one embodiment of the platform.
  • Figure 2 is an alternative view of the platform.
  • Figure 3 illustrates one embodiment of the subprocesses of production.
  • Figure 4 illustrates a simple route allocated to a flow-through video stream.
  • Figure 5 illustrates an example of a context defined using metadata.
  • Figure 6 is a context depicted to facilitate illustrating the use of URNs and URN references.
  • Figure 7 illustrates one embodiment of an architecture for an end user client.
  • Figure 8 illustrates exemplary asset groups, stylesheets and presentation modules.
  • Figure 9 illustrates an exemplary stylesheet describing a layout and presentation.
  • Figure 10 illustrates a simple context map.
  • Figure 11 is a schematic of a control flow.
  • Figure 12 is a block diagram of one embodiment of an architecture for a client.
  • a platform, or architecture, that uses a group of processes and mechanisms to deliver digital media assets from events (e.g., sporting events) and/or other sources to end users is described.
  • the digital media assets include assets (individual units of digital content) and metadata.
  • An asset may include, for example, material such as a photographic image, a video stream, timing data from an event (e.g., timing data from a race, timing data for a single car on a single lap within a race), the trajectory of a ball as it flies towards a goal, an HTML file, an email (from a computer), etc.
  • Metadata is information about an asset, such as for example, its type (e.g., JPEG, MPEG, etc.), its author, its physical attributes (e.g., IP multicast addresses, storage locations, compression schemes, file formats, bitrates, etc.), its relationship to other assets (e.g., that a photograph was captured from a given frame of a particular video), the situation in which an end user accessed it (e.g., a hit), its heritage (e.g., other assets from which it was generated), its importance to the immersive experience, and its movement through the platform.
  • Metadata may provide more abstract information, such as for example, the types of values available within a particular kind of telemetry stream, instructions generated by production and followed by immersion (e.g., to track certain kinds of user behavior, to automatically present certain assets to the user at certain times or in response to certain user actions, etc.), relationships between assets and other entities such as events, competitors, sponsors, etc.
  • the platform treats both assets and metadata as first-class objects, which are well-known in the art.
  • a context is a metadata structure that defines a set of assets (and/or other contexts). Contexts may be dynamically generated or may be stored for optimal access.
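  • (Illustrative only, not part of the original disclosure.) The following Python sketch models assets, metadata, and contexts as first-class objects carrying persistent unique identifiers, with a context defined as a set of assets and/or other contexts; all class names, field names, and URNs below are hypothetical.

      # Hypothetical sketch of assets, metadata, and contexts as first-class objects.
      from dataclasses import dataclass, field
      from typing import Dict, List, Union

      @dataclass
      class Asset:
          urn: str                      # persistent, location-independent global identifier
          media_type: str               # e.g., "image/jpeg", "video/mpeg"
          attributes: Dict[str, str] = field(default_factory=dict)  # author, bitrate, storage location, ...

      @dataclass
      class Context:
          urn: str
          name: str
          members: List[Union["Context", Asset]] = field(default_factory=list)

      # A context groups assets and other contexts; because contexts are themselves
      # first-class objects, they can be stored, transmitted, and referenced by identifier.
      photo = Asset("urn:event:race1:photo:42", "image/jpeg", {"author": "Emily Robertson"})
      lap_telemetry = Asset("urn:event:race1:car5:lap12:telemetry", "application/octet-stream")
      car_context = Context("urn:event:race1:car5", "Car 5", [photo, lap_telemetry])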
  • the platform controls the data flow from each event.
  • the platform collects assets from a venue, produces an immersive experience and delivers the experience to end users (viewers).
  • the platform receives digital media assets (e.g., metadata and individual units of digital content) corresponding to the event and converts those assets into immersive content (i.e., context from and about an event in which users may immerse themselves).
  • the conversion of digital media assets into immersive content is performed by using a context map (e.g., a graph structure), which organizes the digital media assets and indicates relationships between contexts associated with the digital media assets.
  • the platform may maintain a hierarchical database of contexts.
  • the platform tags digital media assets with global identifiers indicative of context information.
  • the global identifier is a persistent, location-independent, globally unique identifier for the digital media asset describing the event.
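  • (Illustrative only.) A minimal sketch of a context map as a graph of identifier-keyed contexts, and of tagging an asset with a global identifier tied to its context; the URN scheme and helper function below are hypothetical, not taken from the disclosure.

      # Hypothetical sketch: a context map as a graph whose nodes are URN-identified
      # contexts and whose edges indicate relationships between contexts.
      context_map = {
          "urn:event:race1":             ["urn:event:race1:car5", "urn:event:race1:leaderboard"],
          "urn:event:race1:car5":        ["urn:event:race1:car5:video", "urn:event:race1:car5:telemetry"],
          "urn:event:race1:leaderboard": [],
      }

      def tag_asset(raw_asset: dict, context_urn: str) -> dict:
          """Attach a persistent, location-independent identifier tying the asset to a context."""
          raw_asset["urn"] = f"{context_urn}:asset:{raw_asset['name']}"
          raw_asset["context"] = context_urn
          return raw_asset

      clip = tag_asset({"name": "incar-cam-0001", "type": "video/mpeg"},
                       "urn:event:race1:car5:video")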
  • the immersive content is distributed, delivered, and presented to end users (viewers).
  • the immersive content enables an immersive experience to be obtained.
  • This immersive experience is a virtual emulation of the experience of actually being present at or participating in an event, obtained by being subjected to the content that is available from and about the event.
  • the platform collects, transmits, produces, distributes, delivers, and presents the digital media assets.
  • Each of these functions, or phases, may be implemented in hardware, software, or a combination of both. In alternative embodiments, some of these may be performed through human intervention with a user interface.
  • the present invention also relates to apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the instructions of the programming language(s) may be executed by one or more processing devices (e.g., processors, controllers, central processing units (CPUs), execution cores, etc.).
  • Figures 1 and 2 illustrate the platform in terms of six subprocesses or segments which it performs: collection 101, transmission 102, production 103, distribution 104, delivery 105 and immersion 106. In the following description, each of these will be discussed briefly and then in more detail.
  • collection 101 is a segment of the platform that collects data of an event.
  • collection 101 of assets occurs at the venue of the event.
  • raw assets from the event are delivered directly to a studio where collection 101 occurs.
  • the event may be any type of event, such as, for example, a sporting event, a concert, etc. Examples of sporting events include, but are not limited to, an auto race, a mountain climbing expedition, a fleet of boats spread across an entire ocean, etc.
  • collection 101 gathers the digital assets from which immersive content will be created (video, text, images, audio, etc.) and packages (e.g., converts) them into a format.
  • a specific format is used to communicate assets, context, and other metadata and will be described in more detail below.
  • a single format may be used to communicate all assets, context and other metadata, including time information indicative of when the asset was created or collected.
  • the output of collection 101 comprises digital media assets.
  • Transmission 102 transmits digital media assets to a studio using communications facilities (e.g., satellites, etc.).
  • assets are transmitted in a specific format over Internet Protocol (IP) networks.
  • Production 103 converts digital media assets into immersive content.
  • production 103 includes subprocesses that may be performed on digital media assets at a studio, such as, for example, video editing, HTML writing/page creation, automatic processing of telemetry, creation of views (or groups of digital media assets), stylesheets, display modules, setting view priorities, managing a view queue or other structure containing display information, etc.
  • almost all of the production work occurs at the studio.
  • Production 103 may produce different distribution streams for different types of delivery. For instance, a satellite system may include high resolution video streams, while basic internet delivery will not. Therefore, depending on the type of delivery for the content being produced, one or more different distribution streams may be created at production.
  • Distribution 104 transmits content from the studio to one or more delivery media devices, channels and/or systems, which forward the content to individual end users (e.g., an audience).
  • Distribution 104 may use a distribution network to distribute different streams generated by production 103 to different types of delivery mechanisms that deliver the content to end users (e.g., the audience). Such a distribution may depend on the communications mechanism available to end users.
  • Delivery 105 delivers the content to the audience using one or more types of media (e.g., satellite, cable, telco, network service provider, television, radio, on-line, print, etc.).
  • Immersion 106 is a segment of the platform where content is received and presented to end users for viewing and interacting.
  • the end users generate audience intelligence (AI) information that is sent back to production 103 for analysis.
  • Figure 1 A illustrates the data flow through one embodiment of the platform.
  • collection processing 191 generates streams and packages that are forwarded, via transmission processing 192, to production processing 193.
  • a stream is a constant flow of real time data.
  • a package is a bundle of files that is stored and forwarded as a single unit.
  • a stream resembles a radio or television program, while a package resembles a letter or box sent through the mail (where the letter or box may contain video or audio recordings).
  • Streams are often used to allow end users to view time-based content (e.g., real-time video data), while packages may be used for non-temporal content (e.g., graphics, still images taken at the event, snapshots of content that changes less rapidly than time-based content, such as a leaderboard, etc.).
  • the streams and packages being transmitted are formatted by collection processing 191 into a particular format.
  • the streams and packages flow through the platform from collection processing 191 through to immersion processing 196.
  • Collection processing 191 and production processing 193 exchange metadata in order to synchronize contextual information.
  • streams, packages and metadata are transferred via distribution processing 194.
  • the streams and packages may be converted into formats specific to the delivery technology (e.g., http responses, etc.).
  • AI information is fed back from immersion processing 196 and delivery processing 195 to production processing 193.
  • the AI information may include user identification and activity information.
  • the AI information is represented using the same format as the streams, packages and metadata.
  • the AI information fed back from delivery processing 195 may be information that is sent on behalf of immersion processing 196 and/or may be sent because immersion processing 196 is not capable of sending the information. For example, a hit log for a web page may be sent back from a server in delivery processing 195.
  • Monitoring and control processing 197 controls the processes throughout the platform and monitors the operation of individual segments.
  • Collection 101 is the process of capturing proprietary data at event venues and translating it into a format. That is, at its most upstream point, the platform interfaces to a variety of data acquisition systems in order to gather raw data from those systems and translate it into a predefined format.
  • the format contains media assets, context and other metadata.
  • the format adds a global identifier, as described in more detail below, and synchronization information that allows subsequent processing to coordinate content from different streams or packages to each other.
  • the processes within collection 101 are able to control gathering and translation activities.
  • collection 101 converts raw venue data into digital media assets.
  • the venue data may include both real-time and file-based media.
  • This media may include traditional real-time media such as, for example, audio and video, real-time data collected directly from competitors (e.g., vehicle telemetry, biometrics, etc.), venue-side real-time data (e.g., timing, position, results, etc.), traditional file-based media (e.g., photographs, editorial text, commentary text, etc.), other file-based media (e.g., electronic mail messages sent by competitors, weather information, maps, etc.), and/or other software elements used by the client (e.g., visualization modules, user interface elements dynamically sent out to the client to view data in new ways, sponsor (advertising) contexts, view style sheets).
  • Sporting events can be characterized by the assets collected from the event.
  • these assets can include audio and video of the actual event, audio or video of individual competitors (e.g., a video feed from an in-car camera), timing and scoring information, editorial/commentary/analysis information taken before, during and/or after an event, photographs or images taken of and by the competitors, messages to and from the competitors (e.g., the radio links between the pit and the driver in an auto race), a data channel (e.g., a control panel readout taken from the device in a car), and telemetry indicating vital functions of a competitor.
  • the telemetry can include biometrics of the competitor (e.g., heart rate, body temperature, etc.).
  • Other telemetry may include position information of a competitor (e.g., player with microchip indicating position) using, for example, a global positioning system (GPS), telemetry from an on-board computer, etc., or a physical device (e.g., an automobile).
  • Various devices may be used to perform the collection of sporting event data, or other data. For example, cameras may be used to collect video and audio.
  • Microphones may be used to collect audio (e.g., audience reaction, participant reaction, sounds from the event, etc.). Sensors may be used to obtain telemetry and electronic information from humans and/or physical objects. The information captured by such devices may be transferred using wires or other conductors, fibers, cables, or using wireless communications, such as, for example, radio frequency (RF) or satellite transmission.
  • collection 101 includes remote production.
  • Remote production is the process of managing an event from a particular point of view.
  • managing an event includes determining which assets will be collected and transferred from the event venue.
  • event management includes: defining, statically or dynamically, event-specific metadata based on global metadata received from production; dynamically controlling which assets are captured (using dynamic selection of information as event data is being collected), how they are formatted (e.g., adjusting a compression rate using a video encoder depending on contents of the video), and transmitted away from the event; managing physical resources (data collection hardware, communications paths and bandwidth, addresses, etc.) necessary to capture, format, and transmit assets; locally producing assets (e.g., editorial text) to complement those being captured; and generating metadata in order to transmit event-specific metadata definitions back to production 103 (while performing production typically, but possibly while running the event).
  • a remote production facility performs the remote production and allows event producers to manage venue-side production (i.e., located at the event) processes for an event.
  • the RPF contains one or more graphical user interfaces (GUIs) for controlling one or more of the following functions: bandwidth management (allocating individual chunks of transmission bandwidth), stream control (dynamically allocating physical resources in order to capture, process, and transmit specific data streams), limited metadata management (a subset of the functionality supported by the studio), limited asset production (a subset of the functionality supported by the studio), and limited immersion production (a subset of the functionality supported by the studio).
  • Certain events do not support co-located remote production (for example, a climbing expedition on an isolated mountain). In such cases, the RPF is positioned further from the venue, or is omitted from the process and all RPF functions are performed at the studio.
  • collection 101 includes hardware, such as collection devices or data acquisition systems (e.g., cameras, microphones, recorders, sensors, etc.), communications equipment, encoding servers, remote production management server(s), and network equipment.
  • each of the collection devices converts the event data it captures into a format that includes the digital units of data, metadata and context information.
  • each of the capture devices sends the raw captured data to a location where a remote production unit, device, or system formats it.
  • a data acquisition application programming interface provides a uniform interface between data acquisition systems and a remote production system. This allows developers to write specific gathering and translation modules as plug-ins for interfacing the data acquisition systems to the remote production system that ensure that the remote production system can handle the data.
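  • (Illustrative only.) A sketch of what such a data acquisition plug-in interface could look like: each proprietary acquisition system implements gather() and translate(), so the remote production system only ever sees data already converted into the common format; all names and the example timing feed are hypothetical.

      # Hypothetical sketch of a data acquisition plug-in interface.
      from abc import ABC, abstractmethod
      from typing import Iterable

      class AcquisitionPlugin(ABC):
          @abstractmethod
          def gather(self) -> Iterable[bytes]:
              """Pull raw data (frames, samples, files) from the proprietary acquisition system."""

          @abstractmethod
          def translate(self, raw: bytes) -> dict:
              """Convert raw data into the common format: payload plus metadata and a context tag."""

      class TimingSystemPlugin(AcquisitionPlugin):
          def gather(self):
              yield b"car5,lap12,87.214"          # stand-in for a proprietary timing feed

          def translate(self, raw):
              car, lap, seconds = raw.decode().split(",")
              return {"context": f"urn:event:race1:{car}:timing",
                      "metadata": {"lap": int(lap)},
                      "payload": {"lap_time_s": float(seconds)}}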
  • Transmission 102 transmits specifically formatted streams and packages, including metadata, from event venues to a studio.
  • streams and packages are transferred via high speed IP networks.
  • the IP networks may be terrestrial and /or satellite-based.
  • a communication mechanism of transmission 102 for the transmission of streams may be selected based on its ability to accommodate bandwidth management, while a communication mechanism of transmission 102 for the transmission of packages may be selected based on its reliability.
  • transmission 102 treats the specifically formatted assets as opaque entities. In other words, transmission 102 has no knowledge of what data is being transmitted, nor its format, so that the information is just raw data to transmission 102.
  • transmission 102 may include dynamic network provisioning for individual sessions. That is, the network may dynamically allot more bandwidth to particular streams or packages based on priority. Data could be routed over links based on cost or time priorities. For example, transmission 102 may purchase transport bandwidth, while a terrestrial IP network is on all the time. Supplementary data might be routed over Internet virtual private networks while video might be sent over a satellite.
  • transmission 102 may include a process that encrypts assets prior to transmission.
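  • (Illustrative only.) A minimal sketch, under the assumption of one satellite link and one terrestrial VPN link, of routing data over links by asset kind and priority as described above; link names, costs, and the selection policy are hypothetical.

      # Hypothetical sketch of priority-based link selection during transmission.
      LINKS = {
          "satellite": {"bandwidth_kbps": 45000, "cost_per_mb": 0.50},
          "vpn":       {"bandwidth_kbps": 1500,  "cost_per_mb": 0.02},
      }

      def select_link(asset_kind: str, priority: str) -> str:
          # High-priority streams (e.g., live video) take the high-bandwidth satellite path;
          # supplementary packages take the cheaper terrestrial VPN.
          if asset_kind == "stream" and priority == "high":
              return "satellite"
          return "vpn"

      assert select_link("stream", "high") == "satellite"
      assert select_link("package", "normal") == "vpn"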
  • Production 103 includes a set of processes that are applied to digital media assets before they are distributed.
  • the digital media assets include specifically formatted streams and packages resulting from collection 101, as well as content being produced within the process of production 103 itself.
  • Production 103 uses a studio as a central site where the digital media assets are produced before being distributed.
  • the studio is an internet protocol (IP) studio. It is referred to as an IP studio because all, or some portion of, the digital media assets that are received by the studio are sent out using an industry standard TCP/IP protocol suite throughout the rest of the segments (phases), and the assets are digital IP assets.
  • the studio may not send and view the digital video assets or perform all operations using IP in alternative embodiments.
  • the studio fundamentally operates as a switch in which formatted content from multiple sources is received and sent out to multiple destinations.
  • Numerous operations may be performed on the content, such as archiving, compression, editing, etc., as part of the switching process. That is, there is no hardwired connection between the operations and they may be performed on the pool of assets in general.
  • Other production operations may be performed by the studio such as, for example, laying out text and graphics, setting priorities for views (asset groups), creating associations between assets, etc.
  • production 103 comprises the following processes: acquisition, asset storage, asset production, analysis, immersion production, metadata management, dissemination, process management, user management, distillation and syndication.
  • each of these operate in a manner decoupled from each other.
  • the process may be implemented as hardware and /or software modules.
  • Figure 3 illustrates each of these processes.
  • the acquisition process 301 provides the interface between transmission 102 and production 103.
  • Acquisition process 301 receives specifically formatted streams, packages, and metadata from collection 101 and parses them into assets (units of digital data) and metadata. Metadata may come to the acquisition process 301 separate from digital media assets when the metadata cannot be attached to event data that has been captured. This may be the case with an NTSC-based stream of data, where in such a case the metadata may indicate that the stream is an NTSC stream.
  • the acquisition process 301 provides an interface through which a number of operations may be performed. For instance, in one embodiment, the acquisition process 301 decrypts assets that had been encrypted for secure transmission, unpackages packages into their constituent parts, parses metadata messages to determine their type and meaning, collects AI usage and user identification information flowing back to production 103 from delivery 105 and immersion 106, and/or logs arrival times for all assets.
  • the acquisition process 301 parses the metadata messages from the information received and forwards their contents to the metadata management process 306. After initially processing assets, the acquisition process 301 forwards them to the asset storage process 302. It also registers new assets with the metadata management process 306. The registration may be based on a context map that indicates what assets will be collected, and the tag on each asset (attached at collection). During the acquisition process 301, using the tag, the process knows one or more of the following: what to do with the asset (e.g., store for later use, pass through unchanged, contact context management to notify it that the asset has been received, etc.).
  • the acquisition process 301 forwards the assets directly to the dissemination process 307.
  • flow-through assets are simultaneously forwarded from the acquisition process 301 to the asset storage process 302 and the dissemination process 307. This is shown in Figure 4.
  • the asset storage process 302 manages physical storage and retrieval of all assets.
  • One or more storage media may be used.
  • a variety of storage technologies may be used, each suitable for certain types of assets.
  • the asset storage process 302 is responsible for interacting with the appropriate storage technology based on asset types.
  • a given asset type may be stored in multiple ways. For example, a telemetry stream may be stored as a flat file in memory and as a set of database records.
  • the asset storage process 302 is involved in storage, retrieval, removal, migration and versioning.
  • Migration refers to moving assets up and down within a storage hierarchy. This movement may be between different storage technologies (e.g., between hard disk and tape). Migration may be performed to free up local or short-term storage. Versioning may be used to indicate an asset's current version (after changes to this asset have been made or have occurred). In one embodiment, every time the asset storage process 302 stores, removes, migrates, or versions an asset, it communicates with the metadata management process 306 to update the asset's physical location attributes, which the metadata management process 306 manages.
  • the asset production process 303 is the set of processes by which individual digital media assets are created and edited within production 103.
  • the asset production process 303 is applied to most assets that have been acquired from collection 101.
  • in-house editorial and production staffs may create and edit their own assets during the asset production process 303.
  • asset production process 303 includes creation, editing, format conversion (e.g., Postscript to JPEG, etc.), and distillation.
  • a number of editing tools may be used.
  • the creation and editing processes are performed in cooperation with the asset storage process 302 and the metadata management process 306. This interaction may be automatic or manual.
  • assets are transferred from asset storage in order to be edited, and transferred back into asset storage after editing has been completed.
  • effects of production actions are communicated to the metadata management process 306.
  • the asset production process 303 notifies the metadata management process 306 that a JPEG asset was derived from a Postscript asset.
  • the distillation process creates multiple versions of an asset to support different kinds of delivery technologies (e.g., high, medium, and low-bandwidth web sites, one-way satellite data broadcast, interactive television, etc.).
  • the distillation process is performed by assessing the capabilities of the delivery technology against the asset and type of data being transformed. Depending on the complexity of the differences, the distillation process may be more or less automated. In any case, in one embodiment, the distillation process takes into account many aspects of delivery, including, but not limited to, file format, and the number and kind of assets that will be included for a specific delivery platform.
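  • (Illustrative only.) A sketch of distillation under assumed delivery profiles: one source asset yields a derived version for each delivery technology whose capabilities it can satisfy, with a heritage link back to the source; the profile table and field names are hypothetical.

      # Hypothetical sketch of distillation into per-delivery-technology versions.
      DELIVERY_PROFILES = {
          "web_low":      {"max_kbps": 56,   "formats": ["jpeg"]},
          "web_high":     {"max_kbps": 1500, "formats": ["jpeg", "mpeg"]},
          "satellite_tv": {"max_kbps": 8000, "formats": ["mpeg"]},
      }

      def distill(asset: dict) -> dict:
          """Return one derived version of the asset per delivery profile it can serve."""
          versions = {}
          for name, profile in DELIVERY_PROFILES.items():
              if asset["format"] in profile["formats"]:
                  versions[name] = {
                      "source_urn": asset["urn"],   # heritage metadata: derived-from link
                      "bitrate_kbps": min(asset["bitrate_kbps"], profile["max_kbps"]),
                  }
          return versions

      versions = distill({"urn": "urn:event:race1:video:7", "format": "mpeg", "bitrate_kbps": 4000})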
  • the analysis process 304 coordinates the AI user activity and identification information that flows into the production process via the acquisition process 301 and the asset storage process 302.
  • the AI information may indicate user requests as part of their activity information.
  • the analysis process 304 directs AI data to a data warehouse (or other storage). Once in the warehouse, this information can be analyzed using tools (e.g., data mining tools). This analysis may be done either manually or automatically.
  • Immersion production process 305 attaches greater meaning to the assets that flow through production.
  • the immersion production process 305 initially creates HTML pages that reference and embed other assets.
  • the immersion production process 305 also creates and edits contexts, generates AI instructions to indicate to immersion applications which user behavior to track and/or what actions to take when this behavior occurs, generates (manually or automatically) one kind of content based on another (e.g., highlight generation, running averages derived from telemetry values, specifically formatted metadata based on context management, etc.), generates production instructions directing immersion applications to automatically present certain information based on specific user actions, uses immersion applications to view content for quality control purposes, and defines the types of values available within particular stream types.
  • Metadata may include many different types of data.
  • metadata includes asset attributes, production instructions and contexts.
  • the production instructions may control the immersion applications based on activity of the user.
  • all types of metadata are first-class objects, thereby allowing easy transport between the platform segments.
  • every object has a unique identifier and a set of attributes. Unique identifiers are used to track objects and to relate them to other objects.
  • the metadata management process 306 creates and modifies attributes and contexts, logically locates objects by querying contexts (e.g., locate all streams belonging to Zinardi's car in the Long Beach race), logically locates objects by querying asset attributes (e.g., locate all the JPEG assets whose author is "Emily Robertson"), and physically locates objects by tracking their movements in the form of asset attributes.
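  • (Illustrative only.) A small sketch of logical location by metadata query, in the spirit of the examples above (e.g., locate all JPEG assets whose author is "Emily Robertson"); the catalog contents and helper function are hypothetical.

      # Hypothetical sketch of locating objects by querying metadata attributes.
      ASSET_CATALOG = [
          {"urn": "urn:event:lb:photo:1", "type": "JPEG", "author": "Emily Robertson"},
          {"urn": "urn:event:lb:photo:2", "type": "JPEG", "author": "John Taylor"},
          {"urn": "urn:event:lb:car4:telemetry", "type": "stream", "context": "urn:event:lb:car4"},
      ]

      def query_assets(**criteria):
          """Return every asset whose attributes match all of the given criteria."""
          return [a for a in ASSET_CATALOG
                  if all(a.get(k) == v for k, v in criteria.items())]

      robertson_jpegs = query_assets(type="JPEG", author="Emily Robertson")
      car4_streams = query_assets(type="stream", context="urn:event:lb:car4")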
  • the dissemination process 307 provides the interface between production 103 and distribution 104. To facilitate this interface, the dissemination process 307 is configured to communicate with individual distribution channels. The dissemination process 307 communicates with the asset storage process 302 to retrieve assets and with the metadata management process 306 to retrieve metadata. The dissemination process 307 also communicates directly with the acquisition process 301 in the case of flow- through streams. In one embodiment, the dissemination process 307 provides an interface for a number of operations. In one embodiment, the dissemination process 307 provides an interface that constructs messages out of metadata, packages assets and metadata into packages, optionally encrypts data for secure distribution, and logs departure times for all streams and packages.
  • the dissemination process 307 sends the digital media assets to various delivery head ends.
  • the type of data that is distributed to different types of devices is dependent on the device and the dissemination process 307 controls which streams and packages of data are forwarded to the delivery head ends.
  • a device such as a Personal Digital Assistant (PDA) will only be sent data that it is capable of displaying.
  • an HDTV device will only be sent data that it is capable of displaying.
  • all data that is available is forwarded to the device and the device makes a determination as to whether it can or cannot display some or all of the information.
  • the studio distributes a control stream.
  • this control stream is the context map. That is, the context map is sent to the end user devices. In an alternative embodiment, only a portion of the context map that specifically deals with the event being captured is forwarded to the device to indicate what types of digital media assets are being forwarded. Based on the information in the control stream, the end user devices may determine what information is being sent to it and may determine what to view.
  • the process management 308 is a process that controls the automation of other production processes. The process management 308 uses several types of objects to control asset switching (routing). In one embodiment, these types of objects include routes, process events, schedules and rules.
  • a route is a mapping between a set of processes and a set of physical (e.g., hardware, software, and network) resources. For example, Figure 4 illustrates a simple route allocated to a flow-through video stream.
  • a stream is received from an incoming network 401 and undergoes acquisition via the acquisition process 301. From acquisition, the video stream is forwarded to both the assets storage process 302, which stores the video stream and makes it accessible on a video server 403, and the dissemination process 307 where the video stream is disseminated to an outgoing network 402.
  • a process event is the application of a given route to a particular asset or group of assets at a specific time.
  • a schedule is the set of times at which a processing event occurs.
  • a rule is a logical constraint that determines when an event occurs. For example, a rule might state that a static leaderboard update page should be generated whenever a leaderboard stream has been acquired and archived. By using these objects, the assets may be managed, including indicating what information is to be shown.
  • the process management 308 also provides an interface for creating, querying, and editing routes, process events, schedules, and rules. In one embodiment, the process management 308 also keeps a log of every completed event and the success or failure of its outcome.
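  • (Illustrative only.) A sketch of rule-driven process management under the leaderboard example above: a rule fires once its required steps have completed, and a route (an ordered set of processes) is then applied; rule names, step names, and the route contents are hypothetical.

      # Hypothetical sketch of rules triggering routes in process management.
      completed = set()     # names of processing steps that have finished

      RULES = [
          {
              "name": "leaderboard_page_update",
              "requires": {"leaderboard_acquired", "leaderboard_archived"},
              "route": ["acquisition", "asset_storage", "immersion_production", "dissemination"],
          },
      ]

      def on_step_completed(step: str):
          completed.add(step)
          for rule in RULES:
              if rule["requires"] <= completed:
                  # A real system would schedule a process event along the route and log
                  # its outcome; here we simply report that the rule fired.
                  print(f"rule {rule['name']} fired, applying route {rule['route']}")

      on_step_completed("leaderboard_acquired")
      on_step_completed("leaderboard_archived")   # both conditions met, so the rule fires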
  • the user management process 309 controls access by production users to the various processes within the studio.
  • the user management process 309 manages definitions of users, groups, and access levels. Based on these definitions, it responds to requests from the process management 308 to provide access credentials for particular studio activities.
  • the syndication process 310 allows third-party organizations (e.g., external media companies) access to assets within the studio.
  • individual assets and subscriptions can be offered, with e-commerce taking place based on those offers.
  • the studio contains a hierarchical or other type of arrangement of asset storage hardware and software (e.g., a database, robotic-type system, etc.).
  • the asset storage control system controls the flow of assets up and down within that hierarchy and determines how to route assets based on their types.
  • the asset storage system would direct data of differing types (e.g., telemetry vs. video) to appropriate storage types.
  • the asset storage system can also make intelligent decisions about asset migration, for example, based on the time since the asset was accessed, the relationship of the asset to current production activity (as determined by context analysis), the time-sensitivity of the asset, and/or an industry standard algorithm (e.g., least recently used (LRU)). As the production evolves, the asset system might choose to push some (previously used) data into off-line storage (e.g., HSM).
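  • (Illustrative only.) A minimal sketch of such a migration decision, combining time since last access (an LRU-style test), time-sensitivity, and relevance to current production activity; the threshold and field names are assumptions, not values from the disclosure.

      # Hypothetical sketch of an asset migration decision.
      import time

      def should_migrate_offline(asset: dict, now=None) -> bool:
          now = now if now is not None else time.time()
          idle_hours = (now - asset["last_accessed"]) / 3600.0
          if asset.get("in_current_production"):   # determined by context analysis
              return False
          if asset.get("time_sensitive"):          # e.g., content still changing rapidly
              return False
          return idle_hours > 48                   # simple least-recently-used style cutoff

      stale_clip = {"last_accessed": time.time() - 72 * 3600,
                    "in_current_production": False, "time_sensitive": False}
      assert should_migrate_offline(stale_clip) is True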
  • each studio subsystem presents an application programming interface (API) for access by other subsystems.
  • API application programming interface
  • the process management 308 controls the automated movement of assets and metadata through the studio. It manages routes, process events, schedules, and rules, defines a common process management API that studio subsystems support and use this API to invoke particular asset and metadata operations in response to event triggers.
  • the process management system may be tightly integrated with the monitoring and control system.
  • the content may be web content.
  • a web content publishing system streamlines the web content production process.
  • the web content publishing system may support file locking to prevent simultaneous updates by multiple users, version management, HTML link maintenance, specialized content verification triggers, incremental update generation, multiple staging areas, and automated content pushes.
  • Web sites may have special needs for content publishing and delivery.
  • the web page may need a graceful mechanism for dynamically updating the files being delivered by the site.
  • the web pages may need a robust, scalable infrastructure for delivering dynamic content (particularly content that is truly interactive, such as a multi-user on-line game).
  • the web content delivery system includes middleware and application software necessary to support these requirements.
  • the web content delivery system may be a third-party different than the content generator.
  • production 103 is able to create and distribute content in the form of incremental web site updates for ensuring incremental updates to live sites.
  • For incremental updates to the live sites, two versions of the data are maintained on one server. Each version is accessed through separate directories in the file system.
  • the directories are switched so that the server accesses the updated directory and allows the previously used directory to be updated.
  • the directories need not be moved to implement the switch.
  • only the pointer used by the server to access a directory is changed. This ensures that the newest version is always available.
  • a version number is associated with each version to indicate which version is currently being stored. In such a case, the latest version available on all servers is the version that is used and made accessible.
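  • (Illustrative only.) A sketch of the directory switch described above, assuming a POSIX file system where the server follows a single symlink: the new version is staged in its own directory and the pointer is swapped atomically, so the newest complete version is always the one served; paths and function names are hypothetical.

      # Hypothetical sketch of the pointer switch for incremental site updates.
      import os

      def publish_new_version(site_root: str, new_version_dir: str, pointer: str = "current"):
          link = os.path.join(site_root, pointer)
          tmp_link = link + ".tmp"
          os.symlink(new_version_dir, tmp_link)   # build the new pointer off to the side
          os.replace(tmp_link, link)              # atomic swap; the old directory remains available for updating

      # publish_new_version("/srv/site", "/srv/site/v42")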
  • production 103 uses hardware such as specialized archive equipment (tape backup systems, video servers, etc.), production management servers, video encoders, and network equipment.
  • distribution 104 is the process of transmitting streams and packages from the studio, via high-speed IP networks, to delivery facilities and/or mechanisms.
  • Distribution 104 may use a distribution network having multiple simultaneously transmitting channels.
  • broadband communications are used for transmission.
  • the mechanism(s) used for distribution 104 may be selected based on the ability to accommodate bandwidth management, reliability, and/or other considerations.
  • Delivery 105 makes immersion content available to immersion 106. Numerous, different classes of delivery technology may be used, and multiple instances of each class may be used as well. In one embodiment, delivery 105 may employ satellite, cable, telcos, ISPs, television, radio, on-line and print.
  • delivery may take the form of one-way broadcast or client-server requests, via low, medium, and high-bandwidth bi-directional networks. Also depending on the technology, delivery may or may not involve translating streams and packages into other formats (e.g., extracting data from a telemetry stream and inserting it into a relational database). Each delivery provider may implement a proprietary content reception protocol on top of basic IP protocols.
  • AI usage and user identification information generation may occur with delivery 105.
  • AI information is discussed in more detail below.
  • decryption may occur during delivery 105.
  • Immersion 106 is the process of compositing immersion content for access by end users. Many immersion applications may be executed, or otherwise performed, in immersion 106. For example, immersion applications may include custom, standalone applications, generic HTML browsers and Java applets, etc. In certain cases, where end user devices are essentially non-interactive (e.g., traditional television), immersion 106 may occur as part of delivery 105.
  • Immersion 106 may comprise software that generates a generic web browser or a custom immersion platform. Such an immersion platform enables the end user to specify content desired for viewing, such as by selecting active links (e.g., Universal Resource Locators (URLs), thumbnails, etc.).
  • each client for use by the end user comprises a filter, an agent, and some type of display interface.
  • the filter receives the stream from the delivery provider and filters out content to retain only that content that may be displayed by the client to the end user.
  • the filter may understand the capabilities of the client, and based on knowledge of what data is being sent, can determine which content to filter (e.g., ignoring a stream of graphics or video in the case of a PDA-type device).
  • the filtering uses the context map or other control stream information to determine what is being sent and received and what information to ignore.
  • the data is sent to the agent which coordinates with the display interface to display or otherwise provide the information to the end user.
  • the agent is a client- specific agent designed to coordinate the display of information in a predetermined way.
  • the display interface interfaces the agent to the display functionality of the end user's device or system.
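  • (Illustrative only.) A sketch of the client-side filter under an assumed capability table: using control-stream (context map) information about what is being sent and the device's declared capabilities, the filter retains only content the client can display, which the agent then passes to the display interface; the device names and content kinds are hypothetical.

      # Hypothetical sketch of client-side content filtering by device capability.
      CLIENT_CAPABILITIES = {
          "pda": {"text", "image"},
          "pc":  {"text", "image", "audio", "video"},
      }

      def filter_content(incoming, device: str):
          displayable = CLIENT_CAPABILITIES[device]
          return [item for item in incoming if item["kind"] in displayable]

      feed = [{"kind": "video", "urn": "urn:event:race1:video:7"},
              {"kind": "text",  "urn": "urn:event:race1:commentary:3"}]
      assert [i["kind"] for i in filter_content(feed, "pda")] == ["text"]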
  • Monitoring and control 107 monitors all segments of the process, with the exception of immersion and sometimes delivery, to control the systems within processes.
  • the control may include bringing a system on and off-line as well as coordinating dynamic and static configurations.
  • Monitoring and control 107 also ensures that the processes are synchronized with a global network time.
  • monitoring and control 107 is performed by a system running a system management application used for monitoring and controlling hardware and software systems throughout the platform in a manner well-known in the art.
  • the system's responsibilities include detecting hardware, software, and network failures, maintaining hardware and software asset inventories, automatically configuring hardware based on software asset inventories, and notifying operators of failures or other conditions.
  • the system operates by using status information provided by the elements of the platform.
  • this information is provided on demand.
  • the monitoring and control system sends programmatic configuration commands to the elements (e.g., software systems and processes) of the platform to obtain the status information.
  • An API may facilitate the defining of queries and commands to which the components in the system respond.
  • the system in addition to monitoring and controlling individual resources, is able to monitor content streams and packages as they flow between physical resources.
  • the system is able to determine such things as: where a given asset is, what path it is following, and why an asset failed to make it from point A to point B, where A and B may not be directly connected to each other.
  • the platform may be implemented using well-known technologies, including, but not limited to, networks, hardware, transport protocols, and software. Transport protocols transfer the streams and packages.
  • all assets are transported over IP, with stream delivery using the Real-time Transport Protocol (RTP) (IETF RFC 1889) on top of IP and package delivery using the File Transfer Protocol (FTP) (IETF RFC 959).
  • HTML (HTML 4.0 W3C recommendation) and XML (XML 1.0 W3C recommendation) may be used.
  • HTML files may contain both content and links and are stored within the studio as assets. Their link structure is also accessible to the metadata management process.
  • each web site has a special "root" context. By navigating downward from the root, it is possible to reconstitute a web site from metadata and stored assets.
  • audio/video data may not be transported in IP format.
  • For example, there may be a particularly remote event where the only way to transfer video from the venue is via traditional television broadcast technologies.
  • a television broadcast station may be a delivery provider.
  • each platform process comprises an individual IP network, which together form an internetwork.
  • the end-to-end platform requires an internetwork that connects the segments into a seamless whole.
  • a package delivery system provides a uniform mechanism for transporting packages between platform segments.
  • a stream delivery system provides a uniform mechanism for transporting streams between architecture segments.
  • the media database comprises a relational or object-relational database that contains all metadata and supports all metadata management functions; however, any type of database structure may be used. It is required to provide standard database functions, such as transactional semantics, recovery, hot backup, SQL queries, query optimization, and so forth.
  • the media database directly imports and exports XML documents.
  • the platform uses other various software components.
  • these components include systems (i.e., databases), middleware, applications, and APIs.
  • applications use open internet standards (i.e., LDAP, FTP, SSL, XML, etc.).
  • management GUIs use Web technologies, so as to provide location-independent access.
  • APIs are defined and implemented using the Object Management Group's
  • the platform manages media assets, such as video, images, audio, and the like, using metadata.
  • Context and other metadata are information that gives the assets meaning.
  • Metadata is information about an asset, such as, for example, its type (e.g., JPEG, MPEG, etc.), its author, its physical attributes (e.g., IP multicast addresses, storage locations, compression schemes, file formats, bitrates, etc.), its relationship to other assets (e.g., that a photograph was captured from a given frame of a particular video), the situation in which an end user accessed it (e.g., a hit), its heritage (e.g., other assets from which it was generated), or its movement through the processing.
  • asset such as, for example, its type (e.g., JPEG, MPEG, etc.), its author, its physical attributes (e.g., IP multicast addresses, storage locations, compression schemes, file formats, bitrates, etc.), its relationship to other assets (e.g., that a photograph was captured from
  • Metadata may be data that describes a particular image. For example, the knowledge that a particular photograph was taken at the German Grand Prix motorcycle race on a particular date, is stored in JPEG format, was used in the book Faster, Faster, Faster, and is of Mick Doohan on his Honda is metadata.
  • Metadata can also provide more abstract information such as, for example, the types of values available within a particular kind of telemetry stream, instructions generated by production and followed by immersion (e.g., to track certain kinds of end user behavior, to automatically present certain assets to the end user at certain times or in response to certain end user actions, etc.), or relationships between assets and other entities such as events, competitors, sponsors, etc.
  • the platform treats both assets and metadata as first-class objects. Metadata can apply to other metadata.
  • the platform transmits metadata from one location to another. For example, information about the syndicate that owns a boat can be distributed along with a photo of the boat.
  • Metadata may be used to provide information on what to do with assets.
  • metadata may be used to allow assets to be processed into a narrative (for example, by indicating what an asset is, thereby triggering its inclusion into a narrative being produced), to coordinate their flow through a production process (for example, by indicating that assets are approved for public release when their release requires such approval), to store and retrieve them efficiently, and/or to control licensing (for example, where an asset may only be published in a particular location (e.g., country)) and security.
  • a context is a special kind of metadata that defines a set of assets (and/or other contexts). Examples of some contexts include the telemetry from Alex Zinardi's crash at Long Beach, all video assets from the German Grand Prix, the winner of the 1998-99 Around Alone race, and the general manager of each syndicate participating in the 1997-98 Whitbread race.
  • a context may be understood as a query executed against the metadata known to the system.
  • a context can be dynamically generated, or it can be persistently stored for optimal access. Persistent contexts, like other kinds of abstract metadata, are themselves assets.
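The notion of a context as a query over known metadata can be illustrated with a short sketch. The sketch below is illustrative only; the Asset and Context classes, the URN strings, and the metadata keys are invented for the example and are not part of the specification.

```java
import java.util.*;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical sketch: a context treated as a query over asset metadata.
final class Asset {
    final String urn;
    final Map<String, String> metadata;
    Asset(String urn, Map<String, String> metadata) { this.urn = urn; this.metadata = metadata; }
}

final class Context {
    final String name;
    final Predicate<Asset> query;          // the context as a query against known metadata
    Context(String name, Predicate<Asset> query) { this.name = name; this.query = query; }
    List<Asset> resolve(Collection<Asset> allAssets) {
        return allAssets.stream().filter(query).collect(Collectors.toList());
    }
}

public class ContextQueryDemo {
    public static void main(String[] args) {
        List<Asset> assets = List.of(
            new Asset("urn:x-demo:ger:video:0001", Map.of("event", "German Grand Prix", "type", "video")),
            new Asset("urn:x-demo:ger:photo:0002", Map.of("event", "German Grand Prix", "type", "photo")));
        Context germanVideo = new Context("All video assets from the German Grand Prix",
            a -> "German Grand Prix".equals(a.metadata.get("event")) && "video".equals(a.metadata.get("type")));
        germanVideo.resolve(assets).forEach(a -> System.out.println(a.urn));
    }
}
```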
  • Metadata may be managed.
  • a context map is created as part of a pre-production task that defines the structure of an event and determines how assets gathered at the event will be organized.
  • the context map comprises a graph that indicates how various contexts are related.
  • the context map relates assets and metadata, as well as web sites, within hierarchies.
  • Various hierarchical processes include creation, querying (e.g., find the web site rooted at "X"), editing (e.g., add, remove, and replace nodes), and transportation (e.g., compress into and decompress out of the format).
  • Hierarchical descriptions are used to transport context maps between the collection and production processes and to describe differences between versions of asset or context hierarchies within the studio. These transmissions are also in the single format used to transport digital media assets.
  • the context map (or a portion thereof) is sent from production to collection to organize the assets being collected.
  • the context map at the event venue may be expanded if data being collected is not in the context map.
  • the format used for sending context maps lends itself to describing hierarchies because assets are described in terms of their parents and children.
  • the format is a complete, self-contained representation of a given hierarchy.
  • the format is a set of changes (add, remove, replace) to be applied to an existing hierarchy.
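A minimal sketch of these two hierarchy descriptions follows, assuming an invented ContextNode tree and an invented Change record; the real format is XML-based, so this Java model only illustrates the complete-representation and change-set ideas, not the actual wire syntax.

```java
import java.util.*;

// Hypothetical sketch of a context-map node and the two kinds of hierarchy descriptions:
// a complete, self-contained representation, and a change set applied to an existing hierarchy.
final class ContextNode {
    final String urn;
    final List<ContextNode> children = new ArrayList<>();
    ContextNode(String urn) { this.urn = urn; }

    // Complete representation: every node described in terms of its parent and children.
    String describe(String parentUrn) {
        StringBuilder sb = new StringBuilder();
        sb.append("node ").append(urn).append(" parent=").append(parentUrn).append('\n');
        for (ContextNode c : children) sb.append(c.describe(urn));
        return sb.toString();
    }
}

// A single change to be applied to an existing hierarchy (op: "add", "remove", or "replace").
record Change(String op, String parentUrn, String urn) { }

public class ContextMapDemo {
    public static void main(String[] args) {
        ContextNode root = new ContextNode("urn:x-demo:root");
        ContextNode event = new ContextNode("urn:x-demo:event:ger");
        root.children.add(event);
        System.out.print(root.describe("-"));          // complete, self-contained form

        // Delta form: expand the map at the venue when new data is collected.
        List<Change> delta = List.of(new Change("add", "urn:x-demo:event:ger", "urn:x-demo:event:ger:crash1"));
        delta.forEach(System.out::println);
    }
}
```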
  • the platform operates as a distributed system. Related assets and metadata may flow through that system in different ways and at different times. However, they are correlated with each other using contexts. For example, consider the following hypothetical example:
  • FIG. 5 illustrates an example of a context defined using metadata.
  • each node in this tree is a context.
  • the asset picture.jpg is directly placed in two contexts:
  • Contexts are designed to handle the basic asset management needs in a simple and intuitive way. Not every piece of information that needs to be tracked for an asset is a context. Some are simply attributes. For example,
  • the platform accomplishes name mapping through the use of Uniform Resource Names (URNs), such as set forth, for example, in IETF RFC 2141.
  • a URN is a persistent, location-independent, globally unique identifier. Every object, whether an asset or a context such as a competitor, team, or event, has a URN. Contexts refer to assets and other contexts via a URN. URN's may safely be passed between platform segments. In one embodiment, every asset contains its URN embedded within it. In the case of packages, the URN may be represented by the package's file name; in the case of streams, the URN is included in a packet that is wrapped around each data packet. Alternatively, other global identifiers may be used (e.g., UID).
  • URN's and URN references allow assets and metadata to safely be split apart and recombined.
  • the following scenario illustrates the manner in which this process occurs. This scenario is based on the context illustrated in Figure 6. Referring to Figure 6, contexts in bold are created in the studio, while contexts in italics are created in the remote production facility (RPF). Dotted lines represent references by URN across context creation boundaries.
  • the studio creates a context for the German Grand Prix race, and for Mick Doohan as a GP competitor, and generates URN's for both.
  • the RPF configures itself with the event and competitor URN's that it receives from the studio. To the event context, it adds the fact that Doohan (i.e., his URN) is a competitor in this particular event. In other words, the URN is added to the list of competitors in the event context.
  • the RPF captures a video asset from Doohan's bike and creates a URN for it. By basing the asset URN on the event URN it received from the studio, the RPF ensures that there will be no conflicts with URN's created at other events.
  • the RPF transmits the video asset to the studio as a stream with the asset's URN embedded within it.
  • the RPF separately generates a metadata message describing the complete event context and transmits that asset to the studio.
  • This metadata asset includes the Doohan URN as a member of the event's competitor set and the video asset as a member of the event's assets set.
  • the video asset contains a reference to the Doohan URN.
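The scenario above can be summarized in a short sketch. The URN syntax, class names, and values below are invented for illustration; the specification does not mandate any particular URN layout beyond global uniqueness.

```java
import java.util.*;

// Hypothetical sketch: the studio mints event and competitor URNs, the remote production
// facility (RPF) derives asset URNs from the event URN, and the separately transmitted
// metadata ties everything back together.
public class UrnScenarioDemo {
    public static void main(String[] args) {
        // Studio side: create URNs for the event and the competitor.
        String eventUrn      = "urn:x-demo:event:gp-germany";
        String competitorUrn = "urn:x-demo:competitor:doohan";

        // RPF side: record that the competitor takes part in this event,
        // and base new asset URNs on the event URN to avoid collisions.
        Set<String> eventCompetitors = new LinkedHashSet<>(List.of(competitorUrn));
        String videoAssetUrn = eventUrn + ":asset:video:0001";

        // The video stream carries its own URN; the metadata message travels separately.
        Map<String, Object> eventMetadata = Map.of(
            "urn", eventUrn,
            "competitors", eventCompetitors,
            "assets", List.of(videoAssetUrn));
        System.out.println(eventMetadata);
    }
}
```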
  • Contexts define points of focus for event coverage. Points of focus are both static and dynamic.
  • Static contexts include creation contexts and structural contexts. Creation contexts define systems from which assets are created (e.g., car #1 telemetry, trackside camera #2, video production suite #3, etc.). Structural contexts define the logical structure of the event (e.g., competitors, teams, tracks, etc.). Dynamic contexts define points of interest that occur during the course of a particular event (e.g., crashes, winners, red flags, etc.)
  • context views represent particular perspectives on an event (e.g., a crash consisting of two video streams from two different angles, along with the telemetry from each car involved in the crash, for a particular period of time).
  • the selection of a view may be made by a producer.
  • three parts are used to generate a view: 1) an indication of the logical arrangement of the view that is to be shown, 2) a style sheet indicating how the content is to be presented visually, and 3) code (e.g., a browser or applet) to put the view on the screen.
  • context views are serialized into an industry-standard interchange format.
  • context maps are used to translate raw usage logs into meaningful business intelligence (e.g., understanding that a particular set of "click streams" means that end users are interested in a given competitor as opposed to some other competitor).
  • the information and metadata associated with an asset via the context map may provide beneficial knowledge. For example, if end users often view a particular portion of content, a determination may be made as to what is in that content; such information may be important to, for example, a manufacturer or sponsor if they know their product or a competitor's product is getting exposure.
  • Contextual information makes it possible to generate audience intelligence (AI) instructions to track every access to a digital media asset, such as, for example, an asset associated with a Nissan motorcycle in the example above.
  • both contexts and AI are represented using the same format as the digital media assets.
  • AI is the collection and analysis of meaningful information about any end user activity.
  • the context provides the framework for what is considered to be meaningful.
  • the platform described above supports AI through the following processes, which are separated across multiple platform segments. First, AI is supported through the generation of production instructions instructing immersion applications as to what behavior to track. Second, AI is supported by immersion client applications generating usage information based on production instructions.
  • AI is supported through the collection of usage information back from immersion applications and the collection of end user identification (e.g., demographics, etc.) information back from immersion applications. AI may also be supported through cross-analysis of the collected usage and end user identification information.
  • AI end user activity and identification information is transferred back to the studio from immersion 106 (via client software) and from servers at delivery 105.
  • This data is stored in an AI data warehouse.
  • the warehouse is a standard data warehouse product that supports multi-dimensional queries. It is used to support AI analysis activities.
  • the acquisition and dissemination processes implement the entry and exit points to and from the studio of AI information and instructions, respectively.
  • the acquisition process is responsible for decomposing formatted information into assets and metadata, and routing it into the studio.
  • the dissemination process is responsible for composing assets and metadata back into the format for distribution out of the studio.
  • AI data can be communicated via the Internet, via bulk file transfer (e.g., floppy) (where the acquisition process does not require a direct (Internet) connection between the client (end user) and the studio), or gleaned from prompts to the end user for action (e.g., "Call us for a free T-
  • the platform uses a universal format for transporting and storing digital media assets.
  • the use of the format ensures interoperability between platform segments as well as the ability to retain asset associations during storage and retrieval.
  • the format is based on Extensible Markup Language (XML) (XML 1.0 W3C Recommendation). It also defines the low-level format for certain kinds of data streams (e.g., car telemetry, etc.), defines packaging formats for transporting and storing groups of assets, maintains a dense message format for transporting and storing metadata, specifies format requirements for all asset types (e.g., video must be MPEG 2, etc.), and specifies protocol requirements for underlying network transport layers (e.g., real-time streams must be transported using RTP over multicast, etc.).
  • content is recorded in the XML high-level layer, followed by the lower level definitions.
  • the platform uses a standard file naming process. This process can be either manual or automatic.
  • a client API provides a common set of format services across client platforms.
  • the structure of asset file names depends on the nature of the asset.
  • An asset may be either "static” or “dynamic”.
  • a dynamic asset is one that was created relative to a particular place and time during the course of an event (e.g., a daily update report, or a photograph of the finish of a race, etc.).
  • a static asset is one that plays a relatively stable role in the overall structure of a presentation (e.g., a navigational button, or an index page, etc.)
  • Static assets can be of several types. Examples of these types include, but are not limited to, structure (index pages and the like), behavior (scripts and applets that control interactive behavior), metadata (descriptions of schemas, contexts, production instructions, etc.), presentation (purely visual elements), content (static content such as bios, rules, and so forth).
  • the context code field corresponds to a Context ID (CID) representing the asset's place within the presentation structure (e.g., "the gallery,” or “the main section,” “global navigational elements,” etc.).
  • this field corresponds to the CID for the context in which the asset was created.
  • the CID is a six character code for the context.
  • a creation context defines a specific environment in which an asset was produced.
  • the context code field of an asset's file name indicates the location from which the asset came.
  • the creation context may or may not directly correspond to the context in which the asset is deployed as part of immersion production. Examples of creation contexts include, but are not limited to, a competitor's vehicle on a track during a particular session of a particular race, a journalist's laptop on a mountainside over the course of a trek, the daily news update production process within the studio, the "medals by country” leaderboard for a particular Olympic Games.
  • Ownership and management of CID's is the responsibility of event producers. Producers may manage CID's in any way they like (i.e., manually, or through some sort of database application). They may distribute identifier lists (of CID's) to asset producers manually (i.e., on a sheet of paper), or via a web page, or with any other convenient mechanism. All the naming standard requires is that CID's be unique within an event, and that event and asset producers agree on their meaning.
  • file names consist of a body, followed by a period, followed by an extension of 1-5 characters in length. The extension should take the natural form for a given asset type (e.g., HTML files should have an extension of ".html"). The format for bodies is specified below. In one embodiment, file names are case-insensitive.
  • While not requiring it, the file naming standard encourages the use of metafiles.
  • metafiles adhere to the following requirements: the metafile uses the same name as the asset it describes, except with an extension of ".qmf"; the asset and metafile are transported together inside a ZIP file; this ZIP file uses the same name as the asset, except with the extension of ".qpg".
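A minimal sketch of this packaging convention follows, using the standard java.util.zip classes. The base name, the ".jpg" extension, and the metafile contents are invented for illustration; only the ".qmf" and ".qpg" conventions come from the description above.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.zip.*;

// Hypothetical sketch: an asset and its ".qmf" metafile share the same base name
// and travel together inside a ".qpg" ZIP file.
public class PackageWriterDemo {
    public static void main(String[] args) throws IOException {
        String baseName = "ger20000517abc123000042pho";          // example asset name body
        byte[] assetBytes = new byte[] { /* JPEG bytes would go here */ };
        String metafile = "<metadata><type>photograph</type></metadata>";

        try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(baseName + ".qpg"))) {
            zip.putNextEntry(new ZipEntry(baseName + ".jpg"));   // the asset itself
            zip.write(assetBytes);
            zip.closeEntry();
            zip.putNextEntry(new ZipEntry(baseName + ".qmf"));   // the metafile describing it
            zip.write(metafile.getBytes(StandardCharsets.UTF_8));
            zip.closeEntry();
        }
    }
}
```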
  • file extensions suggest asset types (i.e., ".mpg" for video, ".jpeg" for photos, etc.).
  • file formats and asset types don't necessarily map one-to-one.
  • a video asset could be either MPEG or AVI.
  • both free-form text and email could be represented in a ".txt” file.
  • file extensions are used only to help operating systems launch appropriate authoring tools.
  • High-level asset types are encoded within file name bodies.
  • a filename body for a dynamic asset may consist of the following fields in the following order: event, date, context, sequence number, type.
  • a filename body for a static asset may consist of the following fields in the following order: event, context, name, type.
  • the leading fields define the asset
  • the final field (type) defines the sub-asset.
  • Sub-assets are known as asset variants.
  • the legal character set for a field consists of: A-Z, a-z, 0-9, _ (underscore).
  • three characters represent the event (or other top-level context, such as network, studio, corporate) to which the asset belongs.
  • a global list of event codes is used.
  • Eight characters represent the date on which the asset was originally collected, in the following format - YYYYMMDD. This date is in GMT format.
  • an event-specific list of contexts may be selected for each event.
  • CID's starting with a numeric digit (0-9) are reserved for system-generated assets (e.g., replication system temp files, etc.). Numeric CID's are not used for static assets where there is no way to distinguish the start of the context from the start of the date, and thus no way to determine that the asset is in fact "static”.
  • numeric characters represent an asset's sequence of creation within a given event/date/context combination. This sequence monotonically increases. Depending on the event and context, the sequence may encode different kinds of information (such as time in seconds from midnight, time in hours:minutes:seconds, etc.). This standard neither specifies nor constrains any internal structure within sequence numbers.
  • Three characters represent the asset's type. This is a logical type, such as "thumbnail”, “update email”, "story headline”, rather than a physical file format. In some cases, however, the two may map directly to one another (i.e., photograph and JPEG).
  • a global, event-independent list of legal types is used.
  • a list of event-specific type codes are selected for each event.
  • the metafile uses a reserved type code.
  • a metafile that describes a single asset variant can still use that variant's type code.
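The dynamic-asset naming convention can be sketched as a small helper. The field widths follow the description above (three-character event code, YYYYMMDD date in GMT, six-character CID, three-character type code); the zero-padded six-digit sequence and the specific codes used are assumptions made for the example, since the standard does not constrain the internal structure of sequence numbers.

```java
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch of a dynamic-asset file-name body:
// event (3 chars) + date (YYYYMMDD, GMT) + context ID (6 chars) + sequence + type (3 chars).
public class AssetNameDemo {
    static String dynamicBody(String eventCode, LocalDate gmtDate, String cid, long sequence, String typeCode) {
        if (eventCode.length() != 3 || cid.length() != 6 || typeCode.length() != 3)
            throw new IllegalArgumentException("field lengths do not match the naming convention");
        return eventCode
             + gmtDate.format(DateTimeFormatter.BASIC_ISO_DATE)   // YYYYMMDD
             + cid
             + String.format("%06d", sequence)                    // monotonically increasing within event/date/context
             + typeCode;
    }

    public static void main(String[] args) {
        String body = dynamicBody("ger", LocalDate.now(ZoneOffset.UTC), "abc123", 42, "pho");
        System.out.println(body + ".jpg");   // extension chosen for the underlying file format
    }
}
```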
  • a scalable system for presenting immersive sports experiences is described.
  • the system presents an immersive experience containing both live and static assets for any event on any delivery platform.
  • the system scales to any number of assets, any bandwidth, and any platform capabilities.
  • the architecture presents the end user with choices of how and what to view, organized by asset group.
  • the architecture is aided by the very design of the assets.
  • Asset groups define associations between assets.
  • Asset groups are, in turn, bundled with stylesheets which describe how to present those assets.
  • Figure 7 illustrates one embodiment of the architecture structure for an end user client.
  • the application 700 is divided into three main areas of functionality: the hardware abstraction 701, the core 702, and the presentation modules 703.
  • the hardware abstraction 701 provides an interface to hardware in the system that transfers information to and from the client.
  • the hardware abstraction layer also insulates the core 702 and the presentation modules 703 from specific implementations of video streams, audio streams, input streams, display functionality, and local storage facilities.
  • the hardware abstraction 701 interfaces a video decoder, audio decoder, display driver, end user input, local storage, etc., from the core 702 and presentation modules 703.
  • the core 702 coordinates the visual presentation, manages end user input, and coordinates the media streams. In itself, the core 702 does not actually draw anything on the screen; it manages the process by which the display is rendered and relies on presentation modules 703 to actually perform the rendering.
  • the core 702 also embodies the client services that implement features such as the context map, and view management.
  • presentation modules 703 are loaded dynamically either from the local hardware platform or from a data stream. They implement the actual visual display that the users see and interact with.
  • An asset group is an arbitrary collection of assets that define the content presented by a stylesheet.
  • the collection of assets may include both streamed and browsed assets.
  • a stylesheet is a description of how those assets are visualized; it specifies which presentation modules 703 are used and what content they will display.
  • Figure 8 illustrates asset groups, stylesheets and presentation modules. Referring to Figure 8, the universe of assets 800 are shown with an asset group 801 identified. From the universe of assets 800, asset group 801, which tells a particular story, is selected and applied to a style that defines a layout for a set of presentation modules 803 that use the assets on a specific delivery platform.
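A hedged sketch of the relationship shown in Figure 8 follows. The AssetGroup, Stylesheet, and PresentationModule types are invented names used only to illustrate how an asset group is bound to presentation modules by a stylesheet.

```java
import java.util.*;

// Hypothetical sketch: an asset group selects assets out of the universe of assets,
// and a stylesheet binds that group to presentation modules for a delivery platform.
interface PresentationModule { void render(List<String> assetUrns); }

final class AssetGroup {
    final List<String> assetUrns;               // streamed and browsed assets alike, by URN
    AssetGroup(List<String> assetUrns) { this.assetUrns = assetUrns; }
}

final class Stylesheet {
    final Map<PresentationModule, AssetGroup> layout = new LinkedHashMap<>();
    void place(PresentationModule module, AssetGroup group) { layout.put(module, group); }
    void present() { layout.forEach((m, g) -> m.render(g.assetUrns)); }
}

public class StylesheetDemo {
    public static void main(String[] args) {
        AssetGroup crash = new AssetGroup(List.of("urn:x-demo:video:cam1", "urn:x-demo:telemetry:car5"));
        Stylesheet sheet = new Stylesheet();
        sheet.place(urns -> System.out.println("video viewer shows " + urns), crash);
        sheet.present();
    }
}
```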
  • stylesheets can be nested in much the same way that frames can be nested in HTML.
  • Figure 9 illustrates one embodiment of a stylesheet describing the layout and presentation.
  • the architecture of the client relies on the characteristics of the context map.
  • the context map provides a structure for all event content.
  • Figure 10 illustrates a simple context map. This map is also used to give unique identifications to assets (either by URN or UID). All references to assets are by these unique names, so it makes sense to use a client-side context map to manage the assets since the client will also be referring to the assets by these unique names.
  • This client-side context map allows for a new mechanism for state management in presentation modules 703. Moreover, these names provide a consistent namespace for defining stylesheets and asset groups that apply to particular clients.
  • a leaderboard component posted "Car X Selected” events when the end user clicked on a specific car.
  • a Bio Viewer presentation module having previously subscribed to that event type, would receive the event and look up the specified car number in a local data structure to find out who is driving that car.
  • One problem with this design is that the Bio Viewer presentation module could be used in just about any sport viewing experience, except for the fact that it depends on the "Car X Selected" event.
  • the client-side context map also allows for a mechanism for state management in presentation modules 703.
  • Change notification is the process whereby presentation modules 703 are informed of changes in the context map.
  • Change notification resolves the problem of events being too tightly coupled to the code of individual presentation modules, and of not providing a structure that could be remapped to different data layouts for different sports, by moving the semantic dependencies out of the presentation modules and into the context map.
  • the Bio Viewer presentation module subscribes to change notification on the "Current Racer" node of the context map. Rather than coding the context map specifics into the module, this relationship is established by the stylesheet that caused the Bio Viewer presentation module to be instantiated. When the end user clicks on the leaderboard, it updates the "Current Racer" node. The Bio Viewer presentation module is then notified of the change.
  • a different stylesheet could establish a Bio Viewer that tracked the "Hero Racer” or the "Racer in the Lead” rather than the "Current Racer", with no code changes required in the Bio Viewer presentation module itself.
  • context map node information is described externally to the implementation of the module.
  • presentation modules 703 may never have "hardcoded" references to particular context map nodes; all such references are defined in a stylesheet, such that the presentation module 703 can be repurposed merely by changing the stylesheet.
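One way to picture this stylesheet-driven change notification is the sketch below. The ClientContextMap class, its subscribe/update methods, and the "Current Racer" node name are taken loosely from the description; the concrete API shown here is an assumption.

```java
import java.util.*;
import java.util.function.Consumer;

// Hypothetical sketch: presentation modules subscribe to named context-map nodes, and the
// node a module watches (e.g. "Current Racer") is chosen by the stylesheet, not hardcoded.
final class ClientContextMap {
    private final Map<String, Object> nodes = new HashMap<>();
    private final Map<String, List<Consumer<Object>>> listeners = new HashMap<>();

    void subscribe(String node, Consumer<Object> onChange) {
        listeners.computeIfAbsent(node, k -> new ArrayList<>()).add(onChange);
    }
    void update(String node, Object value) {
        nodes.put(node, value);
        listeners.getOrDefault(node, List.of()).forEach(l -> l.accept(value));
    }
}

public class ChangeNotificationDemo {
    public static void main(String[] args) {
        ClientContextMap map = new ClientContextMap();
        String watchedNode = "Current Racer";                     // supplied by the stylesheet
        map.subscribe(watchedNode, racer -> System.out.println("Bio Viewer now shows " + racer));

        // The leaderboard module updates the node when the end user clicks a racer.
        map.update(watchedNode, "urn:x-demo:competitor:doohan");
    }
}
```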
  • data flows from the collection interface through the platform-specific components, into the context map.
  • Control is initiated in the user interface (via the presentation modules 703), except for master control directives (also known as "Priorities") delivered via production.
  • These directives allow the production facility (the studio) to provide appropriate idle activity, to guide the end user to content they might otherwise miss, to change the behavior of specific components such as the AI module, or to impose a particular asset group and stylesheet on the client so as to focus the user's attention on a specific happening in the sporting event.
  • the data and control flow into the client is structured in terms of events. Note that this is different from an event such as a sporting event. In this case event refers to "an object within the client code that represents a particular state update.”
  • User events represent low-level user input (mouse clicks or keyboard input, for instance).
  • User events are handled by presentation modules 703, perhaps after some filtering by the core 702.
  • Timer events are generated by an internal system clock and are used to synchronize animation and rendering. Timer events are handled by the context map, and result in updates to the "current time” entry of the context map.
  • Data events are created when new data arrives from the network or other transport medium (for example, "video frame available” or “telemetry datum available”). All data events result in updates to the context map.
  • presentation modules 703 do not directly receive timer or data events; presentation modules 703 rely on the context map to obtain data and notifications about data updates. The timer and data events are therefore "hidden” by the context map. Presentation modules 703 that wish to update themselves as time passes do so by requesting change notification from the "current time” node of the context map. (The core 702 does provide services for logging and replaying all incoming events, to facilitate testing, but this is not visible to presentation modules 703.)
  • the core 702 provides an event filtering service for input (e.g., mouse input, keyboard input, etc.). For example, when the end user moves the mouse, global mouse move events are generated by the user-input-handling platform component. These global mouse move events are then received by the display manager, which maps them to a particular display area owned by a particular presentation module. That presentation module then receives a local mouse move event which is within its own display area's coordinate space. This enables presentation modules to be unaware of their location on the screen; they need only deal with input handling relative to their own portion of the screen.
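A minimal sketch of this global-to-local mapping follows. The Rect and DisplayArea types and the module names are invented for the example; the point is only that the display manager, not the module, knows where each display area sits on the screen.

```java
import java.util.*;

// Hypothetical sketch: the display manager maps a global mouse position to the display area
// that owns it and hands the owning module a local coordinate.
record Rect(int x, int y, int w, int h) {
    boolean contains(int px, int py) { return px >= x && px < x + w && py >= y && py < y + h; }
}

final class DisplayArea {
    final String owner;            // the presentation module that allocated this area
    final Rect bounds;
    DisplayArea(String owner, Rect bounds) { this.owner = owner; this.bounds = bounds; }
}

public class DisplayManagerDemo {
    public static void main(String[] args) {
        List<DisplayArea> areas = List.of(
            new DisplayArea("leaderboard", new Rect(0, 0, 200, 600)),
            new DisplayArea("bioViewer",   new Rect(200, 0, 440, 600)));

        int globalX = 250, globalY = 40;                          // global mouse-move event
        for (DisplayArea a : areas) {
            if (a.bounds.contains(globalX, globalY)) {
                int localX = globalX - a.bounds.x(), localY = globalY - a.bounds.y();
                System.out.println(a.owner + " receives local move (" + localX + ", " + localY + ")");
            }
        }
    }
}
```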
  • data entries in the context map consist of information about the event delivered from the venue.
  • data entries in the context map are largely the same in all the clients viewing a given event. Some examples include: spectator, the current time, statistics about each participant, the leaderboard (in sports which have a leaderboard), and the status of the entire event.
  • Viewer entries consist of information about what this particular end user is currently viewing or focusing on.
  • viewer entries in the context map may be different for each user's client. Some examples include: the currently selected contestant (e.g., the contestant that most interests the user), the current view being watched by the end user, the volume (as set by the end user) of various audio streams.
  • Data entries are updated either by the core 702 itself in response to incoming data events, or by internal components (presentation modules) which compute new context information in response to that data.
  • the leaderboard could be a calculated entry in the context map, updated by a component in response to individual timing events.
  • not all data entries in the context map are necessarily delivered from the venue; some may be calculated locally on the client.
  • Viewer entries are updated by presentation modules 703 in response to end user input or directorial commands. Updates to viewer entries by some presentation modules 703 may, in turn, result in context map update notifications to other presentation modules 703.
  • FIG 11 is a schematic of the control flow when the end user clicks (for example) on a racer whom they want to select.
  • the process begins with the mouse input being received by the user-input-handling platform module (processing block 1101). Next, the user input event is routed to the display manager (processing block 1102).
  • the display manager calculates which display area was hit and routes a display area event to that presentation module (processing block 1103).
  • That presentation module calculates which racer was selected, and updates a selected racer viewer entry in the context map (processing block 1104).
  • the other presentation modules (e.g., the Bio Viewer presentation module and a telemetry viewer presentation module) are then notified of the change to the selected racer viewer entry.
  • the client relies upon the client components to deliver an event stream. These components decode the information coming to the client and present interfaces for the various media types (e.g., audio, video, telemetry, control, etc.).
  • Platform standard components handle the decoding of audio and video streams. To prevent the client from developing specific component dependencies, every widely available platform component is included with an abstraction layer that provides a standard interface to that functionality.
  • Packages may contain data files or browsed content to be cached on the client. Specific formats may be supported with appropriate browser plug-ins.
  • the application may write content to the local storage facilities via predetermined format (e.g., XML).
  • modules may use proprietary formats that are designed for specific use.
  • the hardware modules present on one embodiment of a typically configured client are: video decoder 711, audio decoder 712, package receiver 713, telemetry 714, display 715, user input 716 and local storage 717.
  • Video decoder 711 decompresses video streams into a primary or secondary display surface.
  • Audio decoder 712 decompresses audio streams into local audio playback facilities.
  • Package receiver 713 handles storage and registration of locally stored content.
  • Telemetry 714 decompresses telemetry streams into client APIs.
  • Display 715 manages primary and secondary display surface allocation.
  • User Input 716 handles local platform input devices.
  • Local storage 717 provides storage and retrieval functionality for arbitrary data.
  • While the core 702 of the client is somewhat dynamic, the following comprise the functionalities for a context map, display manager, event dispatch, audio manager, and video manager.
  • the context map is a local representation of the event content.
  • the context map receives assets and stores them in a structured tree-like namespace. It provides access to particular entries by URNs.
  • the context map also allows the creation and deletion of entries, the storage of structured data in individual nodes, and the notification of context map clients when nodes are updated or otherwise changed.
  • the display manager runs the render loop and handles the animation clock, display loop, and display management.
  • the display manager may be a simplified windowing system.
  • a presentation module 703 allocates screen space as directed by the layout directives in the stylesheet that caused it to be loaded. This not only simplifies changes to screen layouts, but also allows the display manager to quickly contextualize user input events.
  • this component also manages the render thread and the animation thread. These process threads are suspended and resumed via display manager interfaces. Every presentation module 703 that requests a display area causes the associated drawing surface to be registered with the renderer. As a result, surfaces allocated as part of a display area can be double buffered if there are sufficient system resources.
  • presentation modules 703 assume that they are double buffered (calling flip() at the conclusion of drawing operations) in order to automatically take advantage of this functionality.
  • the event dispatcher's function is the coordination of event flow.
  • the event dispatcher implements the following services: subscribe, cancel, and post.
  • the subscribe(event description, handler) service registers an interest in a kind of event and returns a unique subscription identifier.
  • the cancel(subscription-id) service relinquishes an interest in a previously subscribed event.
  • the post(event) service submits an event.
  • the event dispatcher is responsible for routing events.
  • the event dispatcher has three jobs: it allows components to post new events; it allows components to subscribe to particular types of events; and it implements an internal dispatch loop which selects one event at a time and dispatches it to all the subscribers for that event's type.
  • the event dispatcher supports three different techniques for posting events.
  • the post external events method is used by external modules to post new, incoming, external events from the outside world. Post external events is thread-safe; multiple, separately-threaded external modules may all be posting external events concurrently and the event dispatcher will not block.
  • the event dispatcher may reorder or queue various categories of external events before dispatching those events to subscribers.
  • the post replay events method is very much like post external events, but is intended specifically for supporting replay of a recorded external event trace. Events posted with post replay events are guaranteed never to get reordered within the event dispatcher before being dispatched. This is critical for deterministic replay.
  • the post internal events method is used by internal components to post viewer events which should not be recorded as part of the external event stream. In one embodiment, internal events are never reordered.
  • the event dispatcher's dispatch loop is essentially single-threaded; there is only ever one event being dispatched at a time, and when an event is selected for dispatch, it gets dispatched to each subscriber in order, one at a time. The guarantee is that when an event subscriber gets called with an event, no other component in the system is currently handling that event. A subscriber can therefore call into other modules in the system, safe in the knowledge that those modules will not be busy handling another event, and hence there is no deadlock.
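A compact sketch of such a dispatcher follows. The method names mirror the services described above (subscribe, post external events, and a single-threaded dispatch step); cancel and the replay/internal posting variants are omitted for brevity, and the concrete types are assumptions.

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.function.Consumer;

// Hypothetical sketch: components subscribe to event types, external posts are thread-safe,
// and a single dispatch loop delivers one event at a time to every subscriber in order.
final class EventDispatcher {
    record Event(String type, Object payload) { }

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    private final Map<String, List<Consumer<Event>>> subscribers = new ConcurrentHashMap<>();

    String subscribe(String type, Consumer<Event> handler) {
        subscribers.computeIfAbsent(type, k -> new CopyOnWriteArrayList<>()).add(handler);
        return type + "#" + System.identityHashCode(handler);     // unique subscription identifier
    }
    void postExternal(Event e) { queue.add(e); }                  // safe to call from any thread

    // Single-threaded dispatch step: one event at a time, each subscriber in order.
    void runOnce() throws InterruptedException {
        Event e = queue.take();
        subscribers.getOrDefault(e.type(), List.of()).forEach(s -> s.accept(e));
    }
}

public class DispatcherDemo {
    public static void main(String[] args) throws InterruptedException {
        EventDispatcher d = new EventDispatcher();
        d.subscribe("telemetry datum available", e -> System.out.println("context map updated: " + e.payload()));
        d.postExternal(new EventDispatcher.Event("telemetry datum available", 212.4));
        d.runOnce();
    }
}
```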
  • the audio manager handles balance, volume, and stream allocation for audio.
  • the audio manager is an API used by the presentation modules 703 to control an audio hardware module.
  • the video manager handles stream allocation, and destination assignments for video.
  • the video manager is an API used by the presentation modules 703 to control a video hardware module.
  • the list of presentation modules 703 in one embodiment of a client map consists of:
  • Event Log - telemetry storage access by time-frame / car / etc.
  • Figure 12 illustrates the application as a collaboration between many different components, each supplying a specific area of functionality.
  • the presentation module API specifies the interfaces to the core functionality that is available to the presentation modules.
  • the common hardware abstractions include specific API's for each of the hardware components.
  • Java may be used to implement the client, but this does not preclude other solutions such as C++/Visual Basic with COM or ActiveX, shared libraries, or even TCL/TK.
  • External events are events which arrive from outside the client; they represent the outside world's entire effect on the client. Input events, data events, participant events, and time events make up all the external events.
  • Internal events are generated by modules of the client in response to particular external events. These internal events include (for example) leaderboard update, racer selection, etc.
  • the client is fundamentally based on the concept of deterministic execution. That is, everything that happens in the client happens in response to handling external events.
  • the external events received by the client can be recorded in linear order, and if the client is later restarted and those exact events are replayed into the client, the client will do exactly what it did on the previous run of the system.
  • the chronicle of the event is only slightly less than real time. In such a case, the occurrences of an event may be depicted in minutes rather than hours.
  • the system may be used for live sportscasting, and thus the system produces live coverage of a "chronicle-like" nature.
  • live when used in conjunction with a "chronicle” refers to being produced and published within a predetermined period of time (e.g., five minutes).
  • "chronicle-like" means the coverage comprises a series of produced, self-contained "stories," each of which may have its own complex multimedia layout.
  • the chronicle content being presented may include streamed coverage, such as live video, real-time audio coverage, and live telemetry displays, in a complementary way.
  • the chronicle is initially "sketched out” in pre-production.
  • the rundown manager is an application that allows individuals to communicate with a rundown server, which acts as a backend for the system.
  • the rundown itself is a list of POPs ("pieces of programming").
  • a POP is the basic unit of presentation, roughly corresponding to a "story” in broadcast news. At this stage, each POP is little more than a name. For example, a rundown for a Tour de France stage in early pre-production might begin something like the table below:
  • the early rundown provides a skeletal structure for the sportscast, but POPs will be added, deleted, modified, and re-ordered right up through the live presentation.
  • the next pre-production step is to start building templates.
  • the templates define the structure of the POP - what kind of media is in it, and how it is laid out.
  • the rundown defines the pacing and order of the event being broadcast; the templates define its look.
  • templates are HTML based, and are described in detail below.
  • templates are created for specific coverage situations that can be anticipated, although some generic templates may also be useful.
  • the production of a template is a two-stage process.
  • templates begin with actual HTML composed by a designer.
  • there is no limit to the size of a template; it may extend to many pages.
  • Appropriate script code is added to the HTML to produce a final template.
  • In production, the template is filled with content during the course of the live production process. When it is ready to be published, it is compiled back into pure HTML and handed off to a publishing system. This may be automated or manual.
  • a template captures the key elements of the design in such a way that the layout can be used with whatever particular content is required for live coverage. This moves the bulk of the design and layout work into pre-production, and makes live event broadcasting possible.
  • the template for the page is created directly from the working HTML.
  • the variable portions of the design are then separated from the static portions. Possible static portions might be corporate branding, or elements that are part of the presentation as a whole (e.g., background colors, font sizes, or even background images).
  • the dynamic (or changeable) elements of the design have been identified and specified in an initial definitions block, and the corresponding locations in the HTML have been replaced with syntax for retrieving those defined values.
  • the script sets forth the template syntax.
  • the purpose of this template syntax is to allow broad flexibility in describing how content gets incorporated into the production of an HTML file. In achieving this goal, it balances legibility, extensibility, and complexity. Thus, a scripting language attempts to capture the complexity of a design.
  • Field definition occurs when the template is loaded into the rundown manager.
  • the template is partially parsed and any define() or declare() statements are executed to produce a suitable user interface (UI) for the template.
  • these statements contain information that determines which user-interface elements are displayed; text entry boxes for text fields, sliders for integer values, color pickers for HTML colors, and so forth.
  • Compilation takes place when the template is converted from script to HTML in a process also referred to as 'compiling'.
  • the statements and expressions that are not part of define() statements are interpreted, references to external media are resolved, any implicit media conversions are performed, and pure HTML is produced.
  • the first construct of the script is the define statement. It is used to describe a part of the template that will be exposed in the production tools user interface. define(specification-list)
  • a specification-list is a variable length list of comma separated parameters. The contents of the list depend upon what kind of variable is being defined. All definitions require a name, and a type, and all variables can be marked as required or optional, but beyond that there is no expectation of commonality. The documentation for each variable type has all of the details for its specification.
  • the first definition is probably the simplest possible; it describes an unconstrained field of text.
  • the second one is declared as optional, which indicates that there is no problem if the field is not filled in when the template is compiled.
  • the last one has a constraint, perhaps imposed by the designer, that the text must be at least fifty words long.
  • the declare statement is used to expose static elements of the original design to the script context. declare(specification-list)
  • Declarations and definitions can refer to previously defined declarations. In this way the system can be used as a kind of macro language. In one embodiment, blocks of declarations and definitions can be imported as well.
  • Compile time syntax is very similar. Statements refer to previously defined or declared variables and are evaluated to produce HTML. In one embodiment, every statement either generates no output, or HTML.
  • get() retrieves the value of a variable. How that variable is represented in HTML depends entirely on its type and any processing directives specified when it was declared or defined. For example: get('companyname-logo') would result in something like "http://www.companyname.com/images/tdf-log.gif". It will be appreciated by those skilled in the art that there are plenty of possible system configuration settings that would affect how the address is compiled.
  • each variable element is identified and defined or declared as appropriate. Their original references in the HTML are replaced with embedded script statements. In this manner, the file is converted piece by piece into a template.
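The compile step can be illustrated with a deliberately minimal sketch that handles only get('name') substitution. The regular expression, error handling, and example field values are assumptions; only the get() syntax and the companyname-logo example come from the description above.

```java
import java.util.*;
import java.util.regex.*;

// Hypothetical sketch of the compile step: bound variables replace each embedded
// get('name') reference in the template, producing pure HTML.
public class TemplateCompileDemo {
    static String compile(String template, Map<String, String> values) {
        Matcher m = Pattern.compile("get\\('([^']+)'\\)").matcher(template);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = values.get(m.group(1));
            if (value == null) throw new IllegalStateException("required field not filled in: " + m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String template = "<h1>get('headline')</h1><img src=\"get('companyname-logo')\"/>";
        Map<String, String> values = Map.of(
            "headline", "Stage 12: Breakaway holds on",
            "companyname-logo", "http://www.companyname.com/images/tdf-log.gif");
        System.out.println(compile(template, values));
    }
}
```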
  • a template is wrapped up. This final step prepares the template for use in the live production environment.
  • a thumbnail image of the template compiled with sample content is generated, the template is given a name, and the template itself is placed in the production template repository. Once it has been wrapped up, the template can be used in live production.
  • the rundown server maintains the rundown and the individual POPs and communicates with the clients, the publishing system, and the content management system, both of which are part of production. It works closely with the content management sub-system to provide seamless propagation of digital media assets as they are edited and submitted.
  • the rundown client provides a user interface through which the team members can work with and get information about the POPs in progress.
  • when a POP is created, it is given a name, an automatically assigned unique ID, and a media scratchpad.
  • the name, ID, and scratchpad stay with the POP from the moment of creation, through production and publication, and are retained when the POP is archived after the sportscast. Of course, the name may be changed at any time.
  • a POP can be assigned a template.
  • the template can be removed or changed at any time.
  • Digital media assets can be placed in the template or in the scratchpad.
  • the scratchpad is a place to put raw source materials, possible alternate digital media assets for the template, or digital media assets that might be used for additional coverage in the future.
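A short sketch of a POP as a data structure follows. The Pop and RundownDemo names, the use of a UUID for the unique ID, and the field values are illustrative assumptions; the description above only requires a name, an automatically assigned unique ID, and a media scratchpad that persist with the POP.

```java
import java.util.*;

// Hypothetical sketch of a POP ("piece of programming"): a name, an automatically assigned
// unique ID, an optional template, and a media scratchpad that stays with it through
// production, publication, and archiving.
final class Pop {
    final UUID id = UUID.randomUUID();        // assigned automatically at creation
    String name;                              // may be changed at any time
    String templateName;                      // may be assigned, changed, or removed
    final List<String> scratchpad = new ArrayList<>();   // raw source material and alternates, by URN
    final Map<String, String> templateFields = new LinkedHashMap<>();
    Pop(String name) { this.name = name; }
}

public class RundownDemo {
    public static void main(String[] args) {
        List<Pop> rundown = new ArrayList<>();            // the rundown is an ordered list of POPs
        Pop pop = new Pop("Stage 12 finish");
        pop.templateName = "photo-with-caption";
        pop.scratchpad.add("urn:x-demo:tdf:photo:0042");
        pop.templateFields.put("headline", "Breakaway holds on");
        rundown.add(pop);
        System.out.println(rundown.get(0).id + " " + rundown.get(0).name);
    }
}
```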
  • the rundown client tailors its display of the POP list to the role of the person using it. So, the exact representation of the list, or a POP entry, depends upon who is looking at it.
  • the display of the POP includes suitable input areas for the fields specified in the template.
  • text fields will be represented as text entry boxes, and HTML color fields will have a color palette selector.
  • In addition to providing information on the state of the sportscast, the rundown manager also allows individuals to perform operations on POPs and on the POP list. A UI may be used to carry out these operations. Operations on the POP list are described below.
  • the POP entries are created in the rundown manager in pre-production and during the live event broadcasting.
  • An initial template may be supplied, and media may be loaded into either the scratchpad or the template. The individual can also change the current template or remove media from the template.
  • Unpublished entries may be moved up and down the list as their relevance or timeliness dictates.
  • Add or Change a Template: In one embodiment, to assign a template to a POP, an individual drags a template from the template browser and drops it on the POP. If a template has already been assigned, then a dialog is displayed confirming the replacement operation.
  • Add media to the Template: Digital media assets of any type may be dragged directly from an editor and dropped on an appropriate field in the template interface.
  • Add media to the scratchpad: Digital media assets of any type may be placed anywhere in the media scratchpad.
  • Preview the POP: In one embodiment, every time new content is placed in a POP, it is automatically rendered out to the browser window. This feature can be turned off in the rundown manager configuration dialog. There may also be a 'Preview' button in the POP interface for looking at the current content of a template without having to add anything new to it.
  • Task assignment: A particular template field or entire POP may be assigned to a specific individual. In this situation, the fields will be locked to everyone except the assigned person.
  • Once published, a POP may not be deleted or re-ordered. New entries may, however, be inserted into the chronology after a POP that has been published.
  • one of the design goals of the rundown manager is to provide a mechanism for integrating familiar off-the-shelf tools into the live production environment. Where possible, this integration is facilitated via the Windows cut-and-paste buffer or the drag-and-drop interface.
  • Exemplary custom tools include a template browser and task accelerators.
  • the template browser provides a mechanism for selecting a template by name, by thumbnail, or semi-automatically.
  • the template browser offers possible selections based on which templates have been published recently. This feature is designed to help prevent unwanted template repetition in the sportscast.
  • Task accelerators may be used with specific tasks that may be fully automated.
  • a template may specify that a certain image must be black and white. This is called a constraint on the field. It is possible to tie the constraint to an automated task, in this case a graphics filter that converts the image given to it to grayscale. When a color image is dropped on the field in the template interface, it would be automatically converted. If there is no task tied to the constraint, then the image would be rejected and it would be up to an individual to edit the image.
  • when a POP is published, it can be tagged in various ways to identify it as a "highlight of the day (or quarter or leg)," or as a candidate for the appropriate summary chronology.
  • Archived POPs can be viewed with the rundown manager. They can also be copied and edited or re-compiled for different output resolutions.
  • the exemplary chronicle content is designed to enable the production team to create portions of live programming very quickly. Once this can be done, there is another issue to tackle: how to deliver the material to the audience. The audience is able to both receive the presentations passively and navigate among them. This is what will make the experience most exciting and bring it closest to perfectly emulating actually being at the event. To deliver the best experience to the audience, the publishing system provides dynamic updates as POPs are published and support for active and passive viewing. There are many options for the implementation of these features. At the low end of the spectrum is a simple HTML design that is pushed out to the end users every time a POP is published; at the other end is an applet or embedded application that has a connection back to the content server.
  • Applets, embedded applications, and plug-ins may be used.
  • a custom interface module is developed that provides a dynamically updated display. To achieve this, the interface module opens a connection back to a server and listens to status messages. Icons, pictures, or other markers appear in the interface as POPs are published and the POP display is automatically updated.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention concerns a method and an apparatus for controlling the flow of information from an event. In one embodiment, the method consists of receiving digital media assets corresponding to remotely captured data from the event, converting the digital media assets into immersion content, and then distributing the immersion content in order to send it to a plurality of delivery mechanisms.
PCT/US2000/013882 1999-05-21 2000-05-17 Architecture de commande du flux et de la transformation de donnees multimedia WO2000072574A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU51478/00A AU5147800A (en) 1999-05-21 2000-05-17 An architecture for controlling the flow and transformation of multimedia data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US31632899A 1999-05-21 1999-05-21
US09/316,328 1999-05-21

Publications (3)

Publication Number Publication Date
WO2000072574A2 WO2000072574A2 (fr) 2000-11-30
WO2000072574A3 WO2000072574A3 (fr) 2001-05-31
WO2000072574A9 true WO2000072574A9 (fr) 2001-06-21

Family

ID=23228583

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/013882 WO2000072574A2 (fr) 1999-05-21 2000-05-17 Architecture de commande du flux et de la transformation de donnees multimedia

Country Status (2)

Country Link
AU (1) AU5147800A (fr)
WO (1) WO2000072574A2 (fr)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9123380B2 (en) 1998-12-18 2015-09-01 Gvbb Holdings S.A.R.L. Systems, methods, and computer program products for automated real-time execution of live inserts of repurposed stored content distribution, and multiple aspect ratio automated simulcast production
US20030001880A1 (en) * 2001-04-18 2003-01-02 Parkervision, Inc. Method, system, and computer program product for producing and distributing enhanced media
US11109114B2 (en) 2001-04-18 2021-08-31 Grass Valley Canada Advertisement management method, system, and computer program product
JP2001283015A (ja) * 2000-03-29 2001-10-12 Nippon Columbia Co Ltd コンテンツデータ配信システムおよび方法
US7812856B2 (en) 2000-10-26 2010-10-12 Front Row Technologies, Llc Providing multiple perspectives of a venue activity to electronic wireless hand held devices
US7630721B2 (en) 2000-06-27 2009-12-08 Ortiz & Associates Consulting, Llc Systems, methods and apparatuses for brokering data between wireless devices and data rendering devices
US7039594B1 (en) 2000-07-26 2006-05-02 Accenture, Llp Method and system for content management assessment, planning and delivery
KR100436088B1 (ko) * 2000-12-04 2004-06-14 주식회사 알티캐스트 디지털 방송용 컨텐츠 데이터의 재활용 방법 및 시스템
GB2372116A (en) * 2001-02-08 2002-08-14 Accenture Multi-media management systems
WO2002071705A1 (fr) * 2001-03-01 2002-09-12 Jacobsen Joern B Plate-forme de communications multimedia
US7861155B2 (en) 2001-03-05 2010-12-28 International Business Machines Corporation Method and system for providing message publishing on a dynamic page builder on the internet
US20030009472A1 (en) * 2001-07-09 2003-01-09 Tomohiro Azami Method related to structured metadata
GB2387729B (en) 2002-03-07 2006-04-05 Chello Broadband N V Enhancement for interactive tv formatting apparatus
WO2004088984A1 (fr) * 2003-04-04 2004-10-14 Bbc Technology Holdings Limited Systeme et procede de stockage et de recherche de donnees video avec conversion de la resolution
GB2404803A (en) * 2003-07-16 2005-02-09 Empics Ltd Image editing and distribution system
KR100619308B1 (ko) * 2004-03-02 2006-09-12 엘지전자 주식회사 멀티미디어 메시징 서비스 시스템 및 그 방법
EP1796393A1 (fr) * 2005-12-09 2007-06-13 Koninklijke KPN N.V. Méthode et dispositif de génération automatique de programmes IPTV.
US10717011B2 (en) 2007-12-03 2020-07-21 Microsoft Technology Licensing, Llc Read redirection of physical media
US9191429B2 (en) 2012-07-13 2015-11-17 Qualcomm Incorporated Dynamic resolution of content references for streaming media

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5861881A (en) * 1991-11-25 1999-01-19 Actv, Inc. Interactive computer system for providing an interactive presentation with personalized video, audio and graphics responses for multiple viewers
US6101180A (en) * 1996-11-12 2000-08-08 Starguide Digital Networks, Inc. High bandwidth broadcast system having localized multicast access to broadcast content
US5835727A (en) * 1996-12-09 1998-11-10 Sun Microsystems, Inc. Method and apparatus for controlling access to services within a computer network

Also Published As

Publication number Publication date
WO2000072574A2 (fr) 2000-11-30
AU5147800A (en) 2000-12-12
WO2000072574A3 (fr) 2001-05-31

Similar Documents

Publication Publication Date Title
WO2000072574A9 (fr) Architecture de commande du flux et de la transformation de donnees multimedia
US20020112247A1 (en) Method and system for creation, delivery, and presentation of time-synchronized multimedia presentations
US8875215B2 (en) Method and apparatus for browsing using alternative linkbases
US7281260B2 (en) Streaming media publishing system and method
USRE45594E1 (en) Network distribution and management of interactive video and multi-media containers
US6573907B1 (en) Network distribution and management of interactive video and multi-media containers
US7260564B1 (en) Network video guide and spidering
US7802004B2 (en) Dynamic streaming media management
Miller et al. News on-demand for multimedia networks
US20150135206A1 (en) Method and apparatus for browsing using alternative linkbases
US20150135214A1 (en) Method and apparatus for browsing using alternative linkbases
WO2001019079A9 (fr) Systeme de distribution et de fourniture de trains multiples de donnees multimedia
CN101764973B (zh) 元数据代理服务器及方法
Podgorny et al. Video on Demand Technologies and Demonstrations

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/10-10/10, DRAWINGS, REPLACED BY NEW PAGES 1/10-10/10; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP