EP1423769A2 - Intelligent fabric - Google Patents
Intelligent fabric
- Publication number
- EP1423769A2 (application EP02759393A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- content
- data
- network
- fabric
- intelligent switch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/756—Media network packet handling adapting media to device capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/765—Media network packet handling intermediate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/30—Definitions, standards or architectural aspects of layered protocol stacks
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
- H04L69/322—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
- H04L69/329—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Definitions
- SONET, which is a high-speed synchronous carrier system based on the use of optical fiber technology
- ATM, which is a high-speed, low-delay multiplexing and switching network.
- SONET is high speed and high capacity, and is suitable for large public networks
- ATM is applicable to a broad band integrated services digital network (BISDN) for providing convergence, multiplexing, and switching operations.
- ATM uses standard size packets (cells) to carry communications signals.
- Each cell that is transmitted over a transmission facility includes a 5 byte header and a 48 byte payload. Since the payload is in digital form, it can represent digitized voice, digitized video, digitized facsimile, digitized data, multi-media, or any combinations of the above.
- the header contains information which allows each switching node along the path of an ATM communication to switch the cell to the appropriate output.
- the cells travel from source to destination over pre-established virtual connections. In a virtual connection, all cells from the same ingress port having the same virtual connection address will be sent to the same egress port. Once a virtual connection has been established from a Customer Premises Equipment (CPE) source to a CPE destination, all cells of the virtual connection will be sent via the same nodes to the same destination.
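The virtual-connection behavior above can be sketched as a small per-node forwarding table; the class and field names below are illustrative, not from the patent:

```python
# Sketch of per-node virtual-connection forwarding: once a connection is
# established, every cell arriving on the same ingress port with the same
# connection address is switched to the same egress port.

class VirtualConnectionTable:
    def __init__(self):
        # (ingress_port, vpi, vci) -> egress_port
        self._table = {}

    def establish(self, ingress_port, vpi, vci, egress_port):
        """Set up a virtual connection at this switching node."""
        self._table[(ingress_port, vpi, vci)] = egress_port

    def route(self, ingress_port, vpi, vci):
        """Return the egress port for a cell, or None if no VC exists."""
        return self._table.get((ingress_port, vpi, vci))

vct = VirtualConnectionTable()
vct.establish(ingress_port=1, vpi=10, vci=200, egress_port=7)
```

Cells of an unknown connection yield `None`, which a real switch would treat as a misrouted cell.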
- CPE Customer Premises Equipment
- a typical switch architecture includes line interface units (LIMs), a switch fabric, and a controller.
- LIMs line interface units
- the data path for cells traveling through an ATM network is to enter the line interface, pass through the fabric, and then exit through another line interface.
- cells are removed from the outgoing stream and sent to the controller.
- the controller can also transmit cells through the network by passing the cells to a LIM.
- the cells are then transmitted through the fabric and finally transmitted out an exit line interface. Passing control cells through the fabric before going to the controller or leaving the switch allows multiple controllers to each monitor a small number of line interfaces, with call control and network management messages passed to a centralized processor when the architecture is expanded to a larger number of ports.
- Connection information is contained in the ATM header and the switch cell header used internally within the switch itself.
- An ATM header contains a virtual path identifier (VPI) and a virtual circuit identifier (VCI) which together uniquely denote a single connection between two communicating entities. Other information, including a payload type and header error control fields, is included for use by the network in transporting the cells.
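The 5-byte UNI cell header implied above (4-bit GFC, 8-bit VPI, 16-bit VCI, payload type, CLP, and the HEC byte) can be parsed with straightforward bit arithmetic. This is a sketch of the standard field layout, not code from the patent:

```python
def parse_atm_uni_header(header: bytes):
    """Extract GFC, VPI, VCI, payload type (PT), cell loss priority (CLP)
    and header error control (HEC) from a 5-byte ATM UNI cell header."""
    assert len(header) == 5
    gfc = header[0] >> 4                                   # 4-bit generic flow control
    vpi = ((header[0] & 0x0F) << 4) | (header[1] >> 4)     # 8-bit virtual path id
    vci = ((header[1] & 0x0F) << 12) | (header[2] << 4) | (header[3] >> 4)
    pt = (header[3] >> 1) & 0x07                           # 3-bit payload type
    clp = header[3] & 0x01                                 # 1-bit cell loss priority
    hec = header[4]                                        # 8-bit header error control
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}

# Example: a header carrying VPI 10, VCI 200.
fields = parse_atm_uni_header(bytes([0x00, 0xA0, 0x0C, 0x80, 0x00]))
```

Together the VPI and VCI uniquely denote a single connection, as the text states.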
- the switch header contains a connection identifier to denote the connection. A portion of the connection identifier may be replaced by a sequence number as described later in this document. Additionally, the switch header contains routing information so that the cell can be routed through the switch fabric. Due to the popularity of the Internet and applications such as video and sound content transmission, an insatiable need exists for bandwidth anytime and anywhere. Further, due to the explosion in digital devices, a number of devices with dissimilar capabilities and characteristics need to be served quickly and efficiently over the fabric so that high-quality presentations are achieved using minimal network resources.
- Implementations of the invention may include one or more of the following.
- Predictive analysis is used to configure the network to deliver QoS.
- the network fabric comprises one or more POPs and a gateway hub, wherein each POP sends its current load status and QoS configuration to the gateway hub, where predictive analysis is performed to handle load balancing of data streams to deliver consistent QoS for the entire network on the fly.
- the predicting means periodically takes snapshots of traffic and processor usage and correlates the traffic and usage data with previously archived data for usage patterns that are used to predict the configuration of the network to provide optimum QoS.
- the network fabric streams MPEG (Moving Picture Experts Group) elementary streams (ES), including Binary Format for Scenes (BIFS) data and Delivery Multimedia Integration Framework (DMIF) data.
- MPEG Moving Picture Experts Group
- ES elementary streams
- BIFS Binary Format for Scenes
- DMIF Delivery Multimedia Integration Framework
- the BiFS data contains the DMIF data to determine the configuration of content.
- the DMIF and BiFS information determine the capabilities of the device accessing the channel.
- the data content defines the configuration of the network once its BiFS Layer is parsed and checked against the available DMIF Configuration and network status.
- the predicting means parses the ODs and the BiFSs to regulate elements being passed to the multiplexer.
- the BiFS comprises interaction rules.
- the rules are used to query a field in a database and wherein the field can contain scripts that execute one or more If/Then statements.
- the rules customize a particular object in a given scene.
- the network fabric includes an Asynchronous Transfer Mode (ATM) and a telephone network.
- the data is media content, or the data represents a graphical user interface (GUI).
- GUI graphical user interface
- the system combines the advantages of traditional media with the Internet in an efficient manner so as to provide text, images, sound, and video on-demand in a simple, intuitive manner.
- the fabric supports the ability to communicate digital media data streams in real-time.
- the system is cheaper and more flexible than the prior approach to data transmission.
- the fabric is more amenable to incorporation within a massively parallel processing network, which enhances the ability to provide real-time multimedia communications to the masses.
- Such a network provides a seamless, global media system which allows content creators and network owners to virtualize resources. Rather than restrictively accessing only the memory space and processing time of a local resource, the system allows access to resources throughout the network. In small access points such as wireless devices, where very little memory and processing logic is available due to limited battery life, the system is able to customize delivery so that judicious bandwidth consumption is achieved while providing a high quality presentation given particular device hardware characteristics.
- the invention also supports deployment of new application software and services by broadcasting data across the network rather than by instituting costly hardware upgrades across the whole network. Broadcasting software across the network can be performed at the end of an advertisement or other program that is broadcast nationally. Thus, services can be advertised and then transmitted to new subscribers at the end of the advertisement.
- Fig. 1 shows one embodiment of a FABRIC for supporting customizable presentations.
- Fig. 2 shows an exemplary operation for a local server.
- Fig. 3 shows an exemplary authoring process.
- Fig. 4 shows an exemplary process running on a viewing terminal.
- Fig. 5 illustrates a process relating to content consumption within a browser/player.
- Fig. 6A shows an exemplary diagram showing the relationships among a user viewing content(s) in particular context(s).
- Fig. 6B shows an exemplary presentation.
- Fig. 7 shows a process to enhance user community participation.
- Fig. 1 shows an exemplary network.
- the system also stores content, serves content and streams the content, as modified in real-time by the context, to a user on-demand.
- the system includes a switching FABRIC 50 connecting a plurality of networks 60.
- the switching FABRIC 50 provides an interconnection architecture which uses multiple stages of switches 56 to route transactions between a source address and a destination address of a data communications network.
- the switching FABRIC 50 includes multiple switching devices and is scalable because each of the switching devices of the FABRIC 50 includes a plurality of network ports and the number of switching devices of the FABRIC 50 may be increased to increase the number of network 60 connections for the switch.
- the FABRIC 50 includes all networks which subscribe and are connected to each other, and includes wireless networks, cable television networks, and WANs such as Exodus, Quest, and DBN.
- Computers 62 are connected to a network hub 64 that is connected to a switch 56, which can be an Asynchronous Transfer Mode (ATM) switch, for example.
- Network hub 64 functions to interface an ATM network to a non-ATM network, such as an Ethernet LAN, for example.
- Computer 62 is also directly connected to ATM switch 56.
- Multiple ATM switches are connected to WAN 68.
- the WAN 68 can communicate with FABRIC, which is the sum of all associated networks.
- FABRIC is the combination of hardware and software that moves data coming in to a network node out by the correct port (door) to the next node in the network.
- Each server 55 includes a content database that can be customized and streamed on-demand to the user. Its central repository stores information about content assets, content pages, content structure, links, and user profiles, for example.
- Each regional server 55 (RUE) also captures usage information for each user, and based on data gathered over a period, can predict user interests based on historical usage information. Based on the predicted user interests and the content stored in the server, the server can customize the content to the user interest.
- the regional server 55 (RUE) can be a scalable compute farm to handle increases in processing load.
- the regional server 55 communicates the customized content to the requesting viewing terminal 70.
- the viewing terminals 70 can be a personal computer (PC), a television (TV) connected to a set-top box, a TV connected to a DVD player, a PC-TV, a wireless handheld computer or a cellular telephone.
- PC personal computer
- TV television
- the system is not limited to any particular hardware configuration and will have increased utility as new combinations of computers, storage media, wireless transceivers and television systems are developed. In the following any of the above will sometimes be referred to as a "viewing terminal".
- the program to be displayed may be transmitted as an analog signal, for example according to the NTSC standard utilized in the United States, or as a digital signal modulated onto an analog carrier, or as a digital stream sent over the Internet, or digital data stored on a DVD.
- the signals may be received over the Internet, cable, or wireless transmission such as TV, satellite or cellular transmissions.
- a viewing terminal 70 includes a processor that may be used solely to run a browser GUI and associated software, or the processor may be configured to run other applications, such as word processing, graphics, or the like.
- the viewing terminal's display can be used as both a television screen and a computer monitor.
- the terminal will include a number of input devices, such as a keyboard, a mouse and a remote control device, similar to the one described above. However, these input devices may be combined into a single device that inputs commands with keys, a trackball, pointing device, scrolling mechanism, voice activation or a combination thereof.
- the video capture card digitizes a video image and displays the video image in a window on the monitor.
- the terminal is also connected to a regional server 55 (RUE) over the Internet using various mechanisms. This can be a 56K modem, a cable modem, Wireless Connection or a DSL modem.
- RUE regional server 55
- the user connects to a suitable Internet service provider (ISP), which in turn is connected to the backbone of the network 68 such as the Internet, typically via a T1 or a T3 line.
- the ISP communicates with the viewing terminals 70 using a protocol such as point to point protocol (PPP) or a serial line Internet protocol (SLIP) 100 over one or more media or telephone network, including landline, wireless line, or a combination thereof.
- PPP point to point protocol
- SLIP serial line Internet protocol
- a similar PPP or SLIP layer is provided to communicate with the ISP.
- a PPP or SLIP client layer communicates with the PPP or SLIP layer.
- a network aware GUI receives and formats the data received over the Internet in a manner suitable for the user.
- the computers communicate using the functionality provided by the MPEG-4 protocol (ISO/IEC 14496).
- the World Wide Web (WWW), or simply the "Web", includes all the servers adhering to the standard HTTP protocol.
- communication can be provided over a communication medium.
- the client and server may be coupled via Serial Line Internet Protocol (SLIP) or TCP/IP connections for high-capacity communication.
- SLIP Serial Line Internet Protocol
- TCP/IP Transmission Control Protocol/Internet Protocol
- VUI user interface
- the user interface is a GUI that supports Moving Picture Experts Group-4 (MPEG-4), a standard used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format.
- MPEG-4 Moving Picture Experts Group-4
- the major advantage of MPEG over other video and audio coding formats is that MPEG files are much smaller for the same quality, owing to its high-quality compression techniques.
- the GUI (VUI) can be on top of an operating system such as the Java operating system. More details on the GUI are disclosed in the copending application entitled "SYSTEMS AND METHODS FOR DISPLAYING A GRAPHICAL USER INTERFACE", the content of which is incorporated by reference.
- the terminal 70 is an intelligent entertainment unit that plays DVDs.
- the terminal 70 monitors usage pattern entered through the browser and updates the regional server 55 (RUE) with user context data.
- the regional server 55 (RUE) can modify one or more objects stored on the DVD, and the updated or new objects can be downloaded from a satellite, transmitted through the internet or other on-line service, or transmitted through another land line such as coax cable, telephone line, optical fiber, or wireless technology back to the terminal.
- the terminal 70 in turn renders the new or updated object along with the other objects on the DVD to provide on-the-fly customization of a desired user view.
- the system handles MPEG (Moving Picture Experts Group) streams between a server and one or more terminals using the switches.
- the server broadcasts channels or addresses which contain streams. These channels can be accessed by a terminal, which is a member of a WAN, using IP protocol.
- the switch which sits at the gateway for a given WAN, allocates bandwidth to receive the channel requested.
- the initial Channel contains BiFS Layer Information, which the Switch can parse, process DMIF to determine the hardware profile for its network and determine the addresses for the AVO's needed to complete the defined presentation.
- the Switch passes the AVO's and the BiFS Layer information to a Multiplexor for final compilation prior to broadcast on to the WAN.
- the data streams (elementary streams, ES) that result from the coding process can be transmitted or stored separately, and need only to be composed so as to create the actual multimedia presentation at the receiver side.
- ES elementary streams
- MPEG-4 relationships between the audio-visual components that constitute a scene are described at two main levels.
- the Binary Format for Scenes (BIFS) describes the spatio-temporal arrangements of the objects in the scene. Viewers may have the possibility of interacting with the objects, e.g. by rearranging them on the scene or by changing their own point of view in a 3D virtual environment.
- the scene description provides a rich set of nodes for 2-D and 3-D composition operators and graphics primitives.
- Object Descriptors (ODs) define the relationship between the Elementary Streams pertinent to each object (e.g., the audio and the video stream of a participant in a videoconference). ODs also provide additional information such as the URL needed to access the Elementary Streams, the characteristics of the decoders needed to parse them, intellectual property rights, and others.
- Media objects may need streaming data, which is conveyed in one or more elementary streams.
- An object descriptor identifies all streams associated with one media object. This allows handling hierarchically encoded data as well as the association of meta-information about the content (called 'Object Content Information') and the intellectual property rights associated with it.
- Each stream itself is characterized by a set of descriptors for configuration information, e.g., to determine the required decoder resources and the precision of encoded timing information. Furthermore, the descriptors may carry hints about the Quality of Service (QoS) requested for transmission (e.g., maximum bit rate, bit error rate, priority, etc.).
- QOS Quality of Service
- Synchronization of elementary streams is achieved through time stamping of individual access units within elementary streams.
- the synchronization layer manages the identification of such access units and the time stamping. Independent of the media type, this layer allows identification of the type of access unit (e.g., video or audio frames, scene description commands) in elementary streams, recovery of the media object's or scene description's time base, and it enables synchronization among them.
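The timestamp-based synchronization described above can be illustrated by merging per-stream access units into a single time-ordered sequence; the stream names and payloads below are invented for illustration:

```python
# Illustrative sketch of the sync-layer behaviour: each access unit (AU)
# carries a timestamp against a shared time base, and the receiver
# releases AUs from all elementary streams in time order.
import heapq

def interleave_by_timestamp(streams):
    """Merge several time-ordered lists of (timestamp_ms, stream_id,
    payload) access units into one presentation-order sequence."""
    return list(heapq.merge(*streams))  # each stream is already sorted

video = [(0, "video", "I-frame"), (40, "video", "P-frame")]
audio = [(0, "audio", "frame0"), (21, "audio", "frame1")]
timeline = interleave_by_timestamp([video, audio])
```

Independence of media type is preserved: the merge only inspects timestamps, not the payloads.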
- the syntax of this layer is configurable in a large number of ways, allowing use in a broad spectrum of systems.
- the synchronized delivery of streaming information from source to destination, exploiting different QoS as available from the network, is specified in terms of the synchronization layer and a delivery layer containing a two-layer multiplexer.
- the first multiplexing layer is managed according to the DMIF specification, part 6 of the MPEG-4 standard. (DMIF stands for Delivery Multimedia Integration Framework)
- DMIF Delivery Multimedia Integration Framework
- This multiplex may be embodied by the MPEG-defined FlexMux tool, which allows grouping of Elementary Streams (ESs) with a low multiplexing overhead. Multiplexing at this layer may be used, for example, to group ESs with similar QoS requirements, or to reduce the number of network connections or the end-to-end delay.
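The grouping idea behind this multiplexing layer can be sketched as follows; the QoS signature used as the grouping key (a bit-rate class plus a priority) is an assumption for illustration:

```python
# Minimal sketch: elementary streams with similar QoS requirements share
# one multiplex, reducing the number of network connections.
from collections import defaultdict

def group_streams_by_qos(streams):
    """Group ESs by an assumed (bit-rate class, priority) QoS signature."""
    groups = defaultdict(list)
    for es in streams:
        key = (es["max_bitrate_kbps"] // 500, es["priority"])
        groups[key].append(es["id"])
    return dict(groups)

streams = [
    {"id": "video1", "max_bitrate_kbps": 900, "priority": 1},
    {"id": "audio1", "max_bitrate_kbps": 64,  "priority": 1},
    {"id": "video2", "max_bitrate_kbps": 800, "priority": 1},
]
grouped = group_streams_by_qos(streams)
```

Each resulting group would map onto one FlexMux channel in a real delivery layer.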
- the "TransMux" (Transport Multiplexing) layer models the layer that offers transport services matching the requested QoS.
- the BiFS Layer contains the necessary DMIF information needed to determine the configuration of the content. This can be looked at as a series of criteria filters, which address the relationships defined in the BiFS Layer for AVO relationships and priority.
- DMIF and BiFS determine the capabilities of the device accessing the channel where the application resides, which can then determine the distribution of processing power between the server and the terminal device.
- Intelligence built into the FABRIC will allow the entire network to utilize predictive analysis to configure itself to deliver QoS.
- the switch 16 can monitor data flow to ensure no corruption happens.
- the switch also parses the ODs and the BiFSs to regulate which elements it passes to the multiplexer and which it does not. This is determined based on the type of network to which the switch serves as a gateway and on the DMIF information.
- This "Content Conformation" by the switch happens at gateways to a given WAN, such as a Nokia 144k 3-G wireless network. These gateways send the multiplexed data to switches at their respective POPs, where the database is installed for customized content interaction and "Rules Driven" function execution during broadcast of the content.
- the BiFS can contain interaction rules that query a field in a database.
- the field can contain scripts that execute a series of "Rules Driven" If/Then statements, for example: If user "X" fits "Profile A", then access Channel 223 for AVO 4.
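The If/Then rule in the example above might be represented as data and evaluated like this. The rule encoding is an assumption; the profile name, channel, and AVO numbers of the first rule come from the text, and the second rule is invented:

```python
# Sketch of the rules-driven customization: each rule maps a profile
# test to a (channel, AVO) source for the matching viewer.
RULES = [
    {"if_profile": "Profile A", "then_channel": 223, "then_avo": 4},
    {"if_profile": "Profile B", "then_channel": 224, "then_avo": 9},  # invented
]

def resolve_avo_sources(user_profile, rules=RULES):
    """Return the (channel, AVO) pairs whose condition the user matches."""
    return [(r["then_channel"], r["then_avo"])
            for r in rules if r["if_profile"] == user_profile]
```

A user fitting "Profile A" would thus be directed to Channel 223 for AVO 4, as in the text's example.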
- This rules driven system can customize a particular object, for instance, customizing a generic can to reflect a Coke can, in a given scene.
- Each POP sends its current load status and QoS configuration to the gateway hub, where predictive analysis is performed to handle load balancing of data streams and processor assignment to deliver consistent QoS for the entire network on the fly.
- the result is that content defines the configuration of the network once its BiFS Layer is parsed and checked against the available DMIF Configuration and network status.
- the switch also periodically takes snapshots of traffic and processor usage. The information is archived and the latest information is correlated with previously archived data for usage patterns that are used to predict the configuration of the network to provide optimum QOS.
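The snapshot-and-correlate loop above could be sketched as a nearest-neighbour lookup against archived usage patterns; the metrics and configuration labels below are assumptions for illustration:

```python
# Hedged sketch: the switch archives (traffic, cpu) snapshots labelled
# with the configuration that served them, then predicts a configuration
# for the current load by nearest-neighbour match against the archive.

def nearest_archived_config(archive, traffic, cpu):
    """archive: list of (traffic_mbps, cpu_load, config_label).
    Return the config whose usage pattern is closest to the snapshot."""
    def dist(rec):
        t, c, _ = rec
        return (t - traffic) ** 2 + (c - cpu) ** 2
    return min(archive, key=dist)[2]

archive = [
    (100, 0.20, "baseline"),
    (800, 0.85, "peak-video"),
    (400, 0.50, "balanced"),
]
```

A production system would weight and normalize the two metrics; this sketch keeps them raw for brevity.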
- the network is constantly re-configuring itself.
- the content on the FABRIC can be categorized into two high-level groups:
- A/V Audio and Video
- Programs can be created which contain AVOs (Audio Video Objects), their relationships and behaviors (defined in the BiFS Layer), as well as DMIF (Delivery Multimedia Integration Framework) data for optimization of the content on various platforms.
- Content can be broadcast in an "unmultiplexed" fashion by allowing the GLUI to access a channel which contains the raw BiFS Layer.
- BiFS Layer will contain the necessary DMIF information needed to determine the configuration of the content. This can be looked at as a series of criteria filters, which address the relationships defined in the BiFS Layer for AVO relationships and priority.
- a person using a connected wireless PDA, on a 3-G WAN can request access to a given channel, for instance channel 345. The request transmits from the PDA over the wireless network and channel 345 is accessed.
- Channel 345 contains BiFS Layer information regarding a specific show. Within the BiFS Layer is the DMIF information, which says... If this content is being played on a PDA with access speed of 144k then access AVO 1, 3, 6, 13 and 22.
- the channels where these AVO's may be defined can be contained in the BiFS Layer, or can be made extensible by having the BiFS Layer access a field on a related RUE database which supports the content. This will allow the elements of a program to be modified over time.
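The DMIF-driven selection in the PDA example above can be sketched as a profile table. The 144k PDA profile and its AVO list come from the text; the PC profile and the lookup rule are assumptions:

```python
# Sketch: map a (device type, access speed) profile to the AVO subset
# that completes the presentation on that device.
DMIF_PROFILES = {
    ("PDA", 144): [1, 3, 6, 13, 22],            # from the text's example
    ("PC", 1500): [1, 2, 3, 4, 5, 6, 13, 22],   # assumed richer profile
}

def select_avos(device, access_kbps):
    """Pick the AVO set of the fastest profile for this device that the
    link speed can support; empty if nothing fits."""
    candidates = [(d, kbps) for (d, kbps) in DMIF_PROFILES
                  if d == device and kbps <= access_kbps]
    if not candidates:
        return []
    best = max(candidates, key=lambda p: p[1])
    return DMIF_PROFILES[best]
```

A PDA on a 144k link thus receives AVOs 1, 3, 6, 13 and 22, matching the scenario in the text.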
- a practical example of this system's application is as follows: a broadcaster transmitting content with a generic bottle can receive advertisement money from Coke in one area and from Pepsi in another. The actual label on the bottle will represent the local advertiser when a viewer from a given area watches the content.
- the database can contain and command rules for far more complex behavior. If/ Then Statements relative to the users profile and interaction with the content can produce customized experiences for each individual viewer on the fly.
- Applications running on FABRIC represent the other type of content. These applications can be developed to run on the servers and broadcast their interfaces to the GLUI of the connected devices. The impact of FABRIC and the VUI enables third-party developers to write an application, such as a word processor, that can send its interface in, for example, compressed JPEG format to the end user's terminal device, such as a wirelessly connected PDA.
- an elementary stream is a consecutive flow of mono-media from a single source entity to a single destination entity on the compression layer.
- An access unit (AU) is an individually accessible portion of data within an ES and is the smallest data entity to which timing information can be attributed.
- a presentation consists of a number of elementary streams representing audio, video, text, graphics, program controls and associated logic, composition information (i.e. Binary Format for Scenes), and purely descriptive data in which the application conveys presentation context descriptors (PCDs). If multiplexed, streams are demultiplexed before being passed to a decoder. Additional streams noted below are for purposes of perspective (multi-angle) for video, or language for audio and text. The following table shows each ES broken by access unit, decoded, then prepared for composition or transmission.
- a timeline indicates the progression of the scene.
- the content streams render the presentation proper, while presentation context descriptors reside in companion streams. Each descriptor indicates start and end time code. Pieces of context may freely overlap.
- the presentation context is attributed to a particular ES, and each ES may or may not have contextual description. Presentation context of different ESs may reside in the same stream or different streams.
- Each presentation descriptor has a start and end flag, with a zero for both indicating a point in between.
- whether descriptor information is repeated in each access unit corresponds to the random-access characteristics of the associated content stream. For instance, predictive and bi-directional frames of MPEG video are not randomly accessible, as they depend upon frames outside themselves; in such cases, PCD information need not be repeated.
- PCD is absolute, that is, its context is always active when its temporal definition is valid, or conditional, in which case it is only active upon user selection.
- the PCD refers to presentation content (not context) to jump to, enabling contextual navigation.
- the conditional context may also be regarded as interactive context.
- Absolute context will just indicate a particular scene or segment has been reached to the system. This information can be used to funnel additional information outside of the main presentation, such as advertisements.
- Interactive context is triggered by the user, unlike traditional menus.
- Interactive context provides a means for the user to access contextually related information via a context menu.
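The absolute and conditional (interactive) descriptor kinds discussed above can be modeled minimally as follows; the field names are assumptions:

```python
# Illustrative model: an absolute PCD is active whenever the timeline is
# inside its span; a conditional PCD additionally requires user selection.
from dataclasses import dataclass

@dataclass
class PresentationContextDescriptor:
    start_tc: float           # start time code, seconds
    end_tc: float             # end time code, seconds
    conditional: bool = False # False = absolute, True = interactive

    def is_active(self, now: float, user_selected: bool = False) -> bool:
        in_span = self.start_tc <= now <= self.end_tc
        return in_span and (user_selected or not self.conditional)

ad_cue = PresentationContextDescriptor(10.0, 30.0)                   # absolute
extra = PresentationContextDescriptor(10.0, 30.0, conditional=True)  # interactive
```

An absolute descriptor like `ad_cue` could funnel an advertisement as soon as its segment is reached; `extra` only fires from a context menu selection.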
- a transitional stream is a local placeholder used to increase perceived responsiveness, and it provides feedback regarding stream acquisition.
- Information in regards to background music, location, set props, and objects corresponding to brand names, such as clothing, could provide contextual information.
- the system can pass new context in one or more additional presentation context streams.
- All a presentation context descriptor does is define a region of content with regard to an elementary stream and, optionally, define a context menu item positioned within an associated hierarchy. It functions like, and corresponds to, a database key.
- a descriptor is just a placeholder; it is the use of semantic descriptors that generates meaning: that is, how the segment relates to other segments and to the user, and, by extension, how a user relates to other users.
- Semantic descriptors operate with context descriptors to create a collection of weighted attributes. Weighted attributes are applied to content segments, user histories, and advertisements, yielding a weight-based system for intelligent marketing.
- the logic of rules-based data agents then comes down to structured query language.
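As a hedged sketch of the claim above, the following shows how a rules-based data agent's logic can reduce to a structured query; the table, columns, and threshold rule are hypothetical, invented purely for illustration.

```python
import sqlite3

# Hypothetical schema: table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE segment_scores (user_id TEXT, attribute TEXT, score REAL);
    INSERT INTO segment_scores VALUES
        ('u1', 'funny', 0.9),
        ('u1', 'sexy',  0.4),
        ('u2', 'funny', 0.2);
""")

# A rules-based agent expressed as SQL: select users whose cumulative
# weight for a given attribute exceeds a (hypothetical) threshold.
rows = conn.execute("""
    SELECT user_id, SUM(score) AS total
    FROM segment_scores
    WHERE attribute = 'funny'
    GROUP BY user_id
    HAVING total > 0.5
""").fetchall()
print(rows)  # [('u1', 0.9)]
```

The point is only that agent logic of this kind is expressible as declarative queries over the context database, not that this is the schema the system uses.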
- a semantic descriptor is itself no more than an identifier, a label, and a definition, which is enough to introduce categorization. Its power comes from its inter-relationship with other semantic descriptors. Take the following descriptors: playful, silly, funny, flirtatious, sexy, predatorial, and mischievous.
- a presentation context descriptor and a semantic descriptor are associated via a semantic presentation map, which ties the two descriptors together with a relative weight. This adds a good degree of flexibility in scoring the prominence of attributes within content. It is up to a particular database agent to express the particular formula involved.
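A minimal sketch of such a semantic presentation map; the descriptor identifiers and weights below are hypothetical, and a real database agent would supply its own scoring formula.

```python
# Each entry ties a presentation context descriptor (PCD) to a semantic
# descriptor (SD) with a relative weight in (0, 1]; values are invented.
semantic_map = [
    ("pcd_scene_12", "funny",       1.0),
    ("pcd_scene_12", "flirtatious", 0.6),
    ("pcd_scene_40", "mischievous", 0.8),
]

def attributes_for(pcd):
    """Return the weighted attributes scored against a content segment."""
    return {sd: w for p, sd, w in semantic_map if p == pcd}

print(attributes_for("pcd_scene_12"))  # {'funny': 1.0, 'flirtatious': 0.6}
```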
- Fig. 2 shows an exemplary operation for the local server 62.
- the server 62 initializes a content database and a context database (step 300).
- the server receives and parses requests being directed at it (step 302).
- the server adds or updates the received information to its content database (step 304).
- the content database provides a fine-grained categorization of one or more scenes in a particular movie, corporate presentation, video program, or multimedia content. Based on the categorization, context information could be applied. For example, a movie can have a hundred scenes. A content creator, such as a movie editor, would use the authoring system to annotate each scene using a predetermined format, for example an XML compatible format. The annotation tells the local server 62 the type of scene, the actor/actress involved, a list of objects that can be customized, and definitions so that the local server can retrieve and modify the objects. After all scenes have been annotated, the authoring system uploads the information to the local server 62.
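As a hedged illustration of the annotation step, the sketch below builds one scene's annotation in an XML-compatible form; the element and attribute names are assumptions, since the text does not define the actual format.

```python
import xml.etree.ElementTree as ET

# Hypothetical scene annotation: scene type, the actor involved, and an
# object the local server may customize per viewer. Names are invented.
scene = ET.Element("scene", id="42", type="dialogue")
ET.SubElement(scene, "actor").text = "Jane Doe"
obj = ET.SubElement(scene, "object", customizable="true")
obj.text = "soda can"

xml_text = ET.tostring(scene, encoding="unicode")
print(xml_text)
```

An authoring system would emit one such record per scene and upload the collection to the local server 62.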
- the local server 62 determines whether it is from a user (step 306). If so, the system determines whether the user is a registered user or a new user and provides the requested content to registered users.
- the local server 62 can send the default content, or can interactively generate alternate content by selecting a different viewing angle or generate more information on a particular scene or actor/actress, for example.
- the local server 62 receives in real-time actions taken by the user, and over time, the behavior of a particular user can be predicted based on the context database.
- the user may wish to obtain more information relating to specific areas of interest or concerns associated with the show, such as the actors, actresses, other movies released during the same time period, or travel packages or promotions that may be available through primary, secondary or third party vendors.
- the captured context is stored in the context database and used to customize information to the viewer even with the multitude of programs broadcast every day.
- the system can rapidly update and provide the available information to viewers in real time. After servicing the user, the process loops back to step 302 to handle the next request.
- the system updates the context database by correlating the user's usage patterns with additional external data to determine whether the user may be interested in unseen, but contextually similar information (step 310). This is done by data-mining the context database.
- the server 62 finds groupings (clusters) in the data. Each cluster includes records that are more similar to members of the same cluster than they are to the rest of the data. For example, in a marketing application, a company may want to decide whom to target for an ad campaign based on historical data about a set of customers and how they responded to previous campaigns.
- Clustering techniques provide an automated process for analyzing the records of the collection and identifying clusters of records that have similar attributes. For example, the server can cluster the records into a predetermined number of clusters by identifying records that are most similar and placing them into their respective clusters. Once the categories (e.g., classes and clusters) are established, the local server 62 can use the attributes of the categories to guide decisions.
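A minimal sketch of the clustering idea, assuming a single numeric attribute and two clusters; the data and the k-means-style procedure are illustrative, not a method specified by the text.

```python
# Toy k-means-style clustering over one numeric attribute (k = 2),
# grouping records so that members of a cluster are more alike each
# other than the rest of the data. Input values are invented.
def cluster_two(values, iters=10):
    c1, c2 = min(values), max(values)  # initial centroids
    for _ in range(iters):
        a = [v for v in values if abs(v - c1) <= abs(v - c2)]
        b = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(a) / len(a), sum(b) / len(b)  # recompute centroids
    return sorted(a), sorted(b)

ages = [13, 15, 16, 61, 67, 70]  # e.g. teenagers vs. senior citizens
print(cluster_two(ages))  # ([13, 15, 16], [61, 67, 70])
```

With categories like these established, decisions such as which advertisements to place follow from the cluster attributes.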
- a web master may decide to include advertisements directed to teenagers in the web pages that are accessed by users in this category.
- the local server 62 may not want to include advertisements directed to teenagers on a certain presentation if users in a different category who are senior citizens also happen to access that presentation frequently.
- Each view can be customized to a particular user, so there are no static view configurations to worry about. Users can see the same content, but different advertisements.
- a Naive-Bayes classifier can be used to perform the data mining.
- the Naive-Bayes classifier uses Bayes rule to compute the probability of each class given an instance, assuming attributes are conditionally independent given a label.
- the Naive-Bayes classifier requires estimation of the conditional probabilities for each attribute value given the label. For discrete data, because only a few parameters need to be estimated, the estimates tend to stabilize quickly and more data does not change the model much. With continuous attributes, discretization is likely to form more intervals as more data is available, thus increasing the representation power. However, even with continuous data, the discretization is usually global and cannot take into account attribute interactions. Generally, Naive-Bayes classifiers are preferred when there are many irrelevant features.
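A compact sketch of the computation described above, estimating priors and smoothed conditional probabilities from discrete training data; the records, labels, and smoothing choice are invented for illustration.

```python
from collections import Counter, defaultdict

# Minimal Naive-Bayes over discrete attributes: Bayes rule under the
# conditional-independence assumption. Training data is hypothetical.
def train(records, labels):
    label_counts = Counter(labels)
    cond = defaultdict(Counter)  # (attr_index, value) -> label counts
    for rec, lab in zip(records, labels):
        for i, v in enumerate(rec):
            cond[(i, v)][lab] += 1
    return label_counts, cond

def classify(rec, label_counts, cond):
    total = sum(label_counts.values())
    best, best_p = None, -1.0
    for lab, n in label_counts.items():
        p = n / total  # prior P(label)
        for i, v in enumerate(rec):
            p *= (cond[(i, v)][lab] + 1) / (n + 2)  # Laplace smoothing
        if p > best_p:
            best, best_p = lab, p
    return best

records = [("teen", "action"), ("teen", "comedy"), ("senior", "drama")]
labels = ["ad_games", "ad_games", "ad_travel"]
model = train(records, labels)
print(classify(("teen", "action"), *model))  # ad_games
```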
- the Naive-Bayes classifiers are robust to irrelevant attributes, and classification takes into account evidence from many attributes to make the final prediction, a property that is useful in many cases where there is no "main effect." Also, the Naive-Bayes classifiers are optimal when the assumption that attributes are conditionally independent holds, e.g., in medical practice. On the downside, the Naive-Bayes classifiers require making strong independence assumptions. When these assumptions are violated, the achievable accuracy may asymptote early and will not improve much as the database size increases. Other data-mining techniques can be used. For example, a Decision-Tree classifier can be used. This classifier assigns each record to a class, and the Decision-Tree classifier is induced (generated) automatically from data.
- the data, which is made up of records and a label associated with each record, is called the training set.
- Decision-Trees are commonly built by recursive partitioning. A univariate (single attribute) split is chosen for the root of the tree using some criterion (e.g., mutual information, gain-ratio, gini index). The data is then divided according to the test, and the process repeats recursively for each child. After a full tree is built, a pruning step is executed which reduces the tree size.
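The recursive-partitioning step above hinges on choosing a univariate split; a sketch of that selection using the gini index follows, with invented data.

```python
# Pick the (feature, value) test minimizing weighted gini impurity,
# as done at each node of recursive partitioning. Data is hypothetical.
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(l) / n) ** 2 for l in set(labels))

def best_split(rows, labels):
    best = None
    for i in range(len(rows[0])):
        for v in {r[i] for r in rows}:
            left = [l for r, l in zip(rows, labels) if r[i] == v]
            right = [l for r, l in zip(rows, labels) if r[i] != v]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
            if best is None or score < best[0]:
                best = (score, i, v)
    return best  # (weighted impurity, feature index, split value)

rows = [("teen", "web"), ("teen", "tv"), ("senior", "tv")]
labels = ["click", "click", "skip"]
print(best_split(rows, labels))  # score 0.0: a pure split on feature 0
```

The data would then be divided by the chosen test and the process repeated for each child, followed by pruning.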
- Decision- Trees are preferred where serial tasks are involved, i.e., once the value of a key feature is known, dependencies and distributions change. Also, Decision-Trees are preferred where segmenting data into sub-populations gives easier subproblems. Also, Decision-Trees are preferred where there are key features, i.e., some features are more important than others.
- a hybrid classifier, called the NB-Tree hybrid classifier, is generated for classifying a set of records.
- each record has a plurality of attributes.
- the NB-Tree classifier includes a Decision-Tree structure having zero or more decision-nodes and one or more leaf-nodes. At each decision-node, a test is performed based on one or more attributes. At each leaf-node, a classifier based on Bayes Rule classifies the records.
- a process 350 for authoring content and registering the new content with the local server 62 is shown.
- the process 350 is executed by the Authoring System at Design Time.
- a user imports content elements (step 352).
- the user applies contextual descriptors to elementary streams: MPEG-7 layer information, for example (step 354).
- the user can also define compositional layout, such as multiple windows or event-specific popups; certain content meant to be displayed in a windowed presentation can make use of the popups, for example (step 356).
- the content is arranged in regards to layout, sequence, and navigational flow (step 358).
- the user can also specify navigational interactivity; examples of navigational interactivity are: anchors (clickable targets), forms, alternate tracks and context menus, virtual presence (VRML-like navigation), and interactive stop mode, where playback breaks periodically pending user interaction, which determines flow control.
- the user then defines and associates context menus to contextual descriptors, specifying the hierarchical positioning of each context menu entry, its description, and one or more of the following end actions: local-offline, remote, and transitional (if remote is defined) (step 360).
- the user can specify design-time rules for flow customization (step 362).
- the user can specify image destination (CD, DVD, streamed, for example) (step 364).
- the user can also specify licensing requirements (copy protection, access control, and e-commerce), which may vary for specific content segments (step 366).
- the user then registers as a content provider if he or she is not one already (step 368). Additionally, the user can generate final, registered output image; registration entails updating system databases in regards to content, context, and licensing requirements (step 370).
- the user imports components or assets into a particular project and edits the assets and annotates the assets with information that can be used to customize the presentation of the resulting content.
- the authoring system can also associate URLs with chapter points in movies and buttons in menus.
- a timeline layout for video is provided which supports the kind of assemble editing users expect from NLE systems.
- Video clips can simply be dropped or rearranged on the timeline. Heads and Tails of clips can be trimmed and the resulting output is MPEG compliant.
- the user can also generate active button menus over movies using subpictures and active button hotspots on movies for interactive and training titles.
- contextual triggers are defined to make available the various contextual segments; primary linkage, then, depends upon external content.
- a process 400 running on the local terminal 70 is shown.
- the user first logs in to the server (step 401).
- the server retrieves the user characteristics and presents a list of options that are customized to the user's tastes (step 402).
- the options can include a custom list of movies, sport programs, financial presentations, among others, that the user has viewed in the past or is likely to watch.
- the user can select one of the presented options, can designate an item not on the list, or can insert a new DVD (step 404).
- the user selection is updated in the context database (step 406) and the local server 62 retrieves information from the content to be played (step 408).
- the local server 62 identifies the DVD and searches its content database for customizable objects and information relating to the content. Based on the content database, the local server customizes the content and/or associated programs, such as associated advertisements or information for the content (step 410), and streams the content to the terminal 70 (step 412).
- the user can passively view the content, or can interact with the content by selecting different viewing angles, can query certain information relating to the scene or the actors and actresses involved, or can interact with a commercial if desired (step 414).
- Each user operation is captured, along with the context of the operation, and the resulting data is used to update the context database for that user (step 414).
- the local server can adjust the content based on the new interaction (step 416) before looping back to step 410 to continue showing the requested content.
- the process thus provides customized content to the user, and allows the user to link, search, select, retrieve, initiate a subscription to and interact with information on the DVD as well as supplemental value-added information from a remote database, computer network or on-line server, e.g., a network server on the Internet or World Wide Web.
- Fig. 5 illustrates a process 450 relating to content consumption within a browser/player. First, a user initiates playback of content (step 452). The browser/player then demultiplexes any multiplexed streams (step 454) and parses a BiFS elementary stream (step 456).
- the user then fulfills any necessary licensing requirements to gain access if the content is protected; this could be ongoing in the event of new content acquisitions (step 458).
- the browser/player invokes appropriate decoders (step 460) and begins playback of content (step 462).
- the browser/player continues to send contextual feedback to system (step 464), and the system updates user preferences and feedback into the database (step 466).
- the system captures transport operations, such as fast forward and rewind, to generate context information, as they are an aspect of how users interact with the title; for instance, which segments users tend to skip, and which they tend to watch repeatedly, are of interest to the system.
- the system logs the user and stores the contextual feedback, applying any relative weights assigned in the Semantic Map and utilizing the Semantic Relationships table for indirect assignments (an intermediate table should be employed for optimized resolution); the assignment of relative weights is reflected in the active user state information.
- the system sends new context information as it becomes available, such as new context menu items (step 468).
- the system may utilize rules-based logic, such as for sending customer-focused advertisements; unless there are multiple windows, this would tend to occur during the remote content acquisition process (step 470).
- the system then handles requests for remote content (step 472).
- Fig. 6A shows an exemplary diagram showing the relationships among a user 1 viewing content 2 in particular context(s) 3.
- the user 1 interacts with a viewing system through a user interface that can be a graphical user interface (GUI), a voice user interface (VUI), or a combination thereof. Initially, the user 1 can simply request to see the content 2.
- the content 2 is streamed and played to the user.
- the user 1 can view the default stream, or can interact with the content 2 by selecting a different viewing angle or querying for more information on a particular scene or actor/actress, for example.
- the user interest exhibited implicitly in his or her selection and request is captured as the context 3.
- the actions taken by the user 1 through the user interface are captured, and over time, the behavior of a particular user can be predicted based on the context 3.
- the user 1 can be presented with additional information associated with a particular program.
- the captured context 3 is used to customize information to the viewer even with the multitude of programs broadcast every day.
- the system can rapidly update and provide the available information to viewers in real time.
- the combination of content 2 and context 3 is used to provide customized content, including advertising, to viewers.
- Fig. 6B shows an exemplary presentation where a main presentation window is displayed along with a supplemental window running advertisements.
- Semantic descriptors can form an acyclic relationship graph; the requisite relationships are mapped in the Semantic Relationships table.
- the relationships define a transitive equivalency flowing from specific to general, such that specific semantic instances also validate more general, inclusive semantics.
- the application of a semantic descriptor to a PCD occurs in a table called a semantic map, which furthermore supplies a nonzero weight less than or equal to one (the default).
- when a PCD becomes active, the SDs attributed to it are located via the semantic map.
- the score specified by the weight is added to the respective attribute subtotals located in a cumulative profile and session profile.
- transitive aggregation is applied for related SDs via the Semantic Relationships table, applying the weight assigned to the relating attribute in the Semantic Map.
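The weighted scoring and transitive aggregation described above can be sketched as follows; the descriptor names, relationship table, and weights are hypothetical, invented to illustrate how a specific SD also validates its more general ancestors.

```python
# Hypothetical Semantic Relationships table (specific -> general) and
# per-attribute weights from the Semantic Map; all values are invented.
relationships = {"flirtatious": "playful", "playful": "funny"}
relating_weight = {"playful": 0.7, "funny": 0.5}

def aggregate(profile, sd, base_weight):
    """Add the weighted score for sd and, transitively, its ancestors."""
    profile[sd] = profile.get(sd, 0.0) + base_weight
    parent = relationships.get(sd)
    if parent is not None:
        # apply the weight assigned to the relating attribute
        aggregate(profile, parent, base_weight * relating_weight[parent])

session_profile = {}
aggregate(session_profile, "flirtatious", 1.0)
print(session_profile)  # {'flirtatious': 1.0, 'playful': 0.7, 'funny': 0.35}
```

The same subtotals would be added to both the session profile and the cumulative profile in the user state.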
- in Fig. 1B, the main presentation window is displayed along with a supplemental window running advertisements.
- the advertisements might be image-only banners while the main presentation is playing, but whenever it is paused, including when the presentation is halted pending user selection, a video or audio-video advertisement might run. For full screen mode, the window might temporarily split for these purposes.
- the system locates attributes linked directly via the Semantic Map and indirectly via the Semantic Relationships table, and updates the aggregate scores located in the session and cumulative user state attributes. This value is part of the current context. Should the user pause the presentation at this point, a commercial best fitting the current presentation context, the session context, or the user history could be selected via a comparison of attribute scores. In fact, whatever choice the user makes, the act will be logged along with the current context. Activation of context menu options will yield contextual content options valid for the present context.
- PCD 2 becomes valid, while PCD 1 remains valid.
- the context state change for PCD 2 is sent to the system.
- the feedback process described at time 0 recurs.
- PCD 4 becomes invalid, while PCD 1 and 3 remain valid.
- the context state change for PCD 4 is communicated to the system.
- the feedback process described at time 0 recurs.
- PCD 6 becomes valid, while PCD 1 and 5 remain valid.
- the context state change for PCD 6 is sent to the system.
- the feedback process described at time 0 recurs.
- PCD 6 becomes invalid, while PCD 1 and 5 remain valid.
- the context state change for PCD 6 is communicated to the system.
- the feedback process described at time 0 recurs.
- Figs. 1A and 1B can support DVD multi-angle and navigation in that the system can utilize behavioral analysis to customize the user's experience. By focusing on the more general case of metadata, a deeper understanding of the user's interest in certain contents or subsections thereof can be built.
- a process 500 to enhance user community participation is shown.
- a user may opt to participate in a public viewing session, or opt out of such a session; this is useful for point-to-point presentations, for example (step 502).
- other public users become visible, and may join groups, resulting in synchronized sessions with one user designated as the pilot for navigation purposes (step 504).
- a communication window is made available so users may discuss the content (step 506).
- all content viewed is logged in passive mode, as the user is not responsible for interactive selections (step 508).
- the pilot can enter a white board mode, and draw on the presentation content; these drawings are made visible to the other group members (step 510).
- the user may opt to work in annotation mode, which is analogous to third-party value-add information, in that users may leave commentary tied to particular sequences of the presentation; the visibility of such annotations may be public, or restricted to access groups; an annotation window is utilized for these purposes, and is tied to the content the user is currently viewing (step 512).
- the user may elect to receive email notifications (step 514).
- the AUTHOR either downloads the AUTHORING SYSTEM from FABRIC, or obtains it from an install disc; in either case the complete runnable AUTHORING SYSTEM does not reside on the installed computer, for security purposes
- the AUTHOR installs and registers the AUTHORING SYSTEM with FABRIC; the AUTHORING SYSTEM knows how to query for ASP service providers, utilizing technologies such as Jini and/or LDAP; the USER may also manually enter the location of an ASP service provider for the AUTHORING SYSTEM to connect to
- the AUTHOR defines Contextual Object References and may relate them hierarchically
- the AUTHOR arranges the content streams into some layout, and defines navigational and sequential flow
- the AUTHOR defines Context Menu Entries (CMEs) and associates them freely to CORs and PCDs
- the AUTHOR specifies design time rules for flow customization by acquiring user input or usage statistics to affect branching and content acquisition
- the AUTHOR tests the title via compilation and simulation; depending on the AUTHORING SYSTEM's licensing policy, various uses of AUTHORING SYSTEM functionality might incur just-in-time code downloads and/or code fixing, as well as commerce transactions
- the AUTHOR specifies the image destination of the title; even given specification of a static storage medium, the AUTHOR may specify that certain streams and stream segments reside remotely within FABRIC instead
- the AUTHOR specifies access control options; these specifications may be general or granular (i.e. title, streams, and stream segments); the AUTHORING SYSTEM conveys this information in IPMP (Intellectual Property Management and Protection) elementary streams; access control may involve user permissions, such as for corporate and distance learning applications
- the AUTHOR specifies and applies various copy protection options; these specifications may be general and/or granular (i.e. title, streams, and stream segments); copy protection options are selected from FABRIC, and are available based on authentication and authorization from FABRIC on a permissions basis, wherein the AUTHOR may define and register new copy protection options with FABRIC, indicating the accessibility scope and any commercial implications
- the AUTHOR specifies commercial constraints with FABRIC, resulting in the generation of pricing models with FABRIC; these pricing models are articulated with the XML grammar for pricing, and are stored within FABRIC
- the AUTHOR may simulate effects of pricing, utilizing the FABRIC database
- the AUTHOR generates the final output image, which includes registration with FABRIC; at this point, streams might be transferred to FABRIC for remote streaming; this registration involves stream and title information being transmitted to FABRIC; this registration includes PCD, COR and CME specifications being transmitted to FABRIC; this registration includes codec information being transmitted to FABRIC; this registration includes access control information being transmitted to FABRIC
- the USER acquires the GLUI, whether downloaded from FABRIC, obtained from an installation disc, or already residing on the USER'S device
- the USER registers the GLUI with FABRIC; this is important, because FABRIC must understand the performance constraints of deployed GLUIs
- at first login request to the GLUI, the USER must register with FABRIC; this can include financial information for commercial transactions and billing, or this financial information might be supplied on demand at a later time
- the USER may create user profiles, such as to accommodate various family members
- the USER may subscribe to title information based on content attributes they are interested in
- the USER may opt for remote storage within FABRIC, including security specifications
- the USER acquires content, whether downloaded from FABRIC or played from a content storage medium
- the USER may opt to view content offline if the particular title allows this
- the GLUI may accumulate usage statistics within a hollow region of its install image; this may enable content flow customization to take place in an offline session; the GLUI may update FABRIC with this information at a later time
- the USER may log into FABRIC, which involves authentication, authorization and access logging
- FABRIC may inform the USER of updated GLUI components available on the server
- FABRIC may provide the USER with information they have subscribed for; this might be conveyed via email
- the USER accesses content within the GLUI, which might pertain to audio-visual content, information, ASP application streams, or similar types of content.
- the GLUI may receive new PCDs, CORs, and CMEs from FABRIC, whether from the AUTHOR or a SUPPLEMENTAL CONTENT PROVIDER
- the GLUI reacts to the presence of access control information; this may require the user to begin an online session in certain cases
- the GLUI provides the user with commerce constraint information in conjunction with FABRIC
- the GLUI communicates PCDs state changes; these may indirectly provide commercial feedback
- the GLUI may request new content streams on behalf of the USER, such as identified via CMEs; during content acquisition, the GLUI will display any specified transitional content as stream acquisition proceeds; the GLUI will communicate with FABRIC for advertising specification; this may involve advertising selection parameters, in which case an advertisement stored along with the content may be selected and displayed, while updating FABRIC with the PCD state changes pertaining to the advertisement; the GLUI may sometimes download advertisements, whether during stream acquisition or in anticipation of it.
- User interactivity is described by PCDs; when a user selects different viewing options, this activates and deactivates respective PCDs; when a USER requests remote streams (such as via CME or explicit content links), this likewise corresponds to activating and deactivating PCDs; CORs are inevitably associated to PCDs, whether directly or by virtue of other CORs, so the GLUI does not need to communicate their state changes to FABRIC; however, the utilization of CORs for navigation purposes might be communicated to FABRIC.
- the USER may interact with CORs to achieve context-based seeking, such as to navigate all the scenes with a particular combination of actors, subject matters and objects.
- the USER may interact with content representing the GUI of an ASP application.
- GUI elements correspond to unique identifiers (PCDs) to be conveyed along with event-specific context as part of an event model to the operating system.
- the operating system must know what GUI element in what context is being interacted with. This information comprises a message sent to a remote application, unless a local application proxy has registered to handle the particular event.
- the OS drawing routines are rendered using the GLUI's OS API so that dynamic, visual, and audio-visual elementary streams can be generated. For instance, when a system message needs to be displayed, the message text, along with the audio-visual scene object to accompany it, are passed from the OS to the GLUI via the API. The GLUI then dynamically generates the stream along with the necessary BiFS commands to alter the scene.
- the GLUI communicates with FABRIC to create and store the requisite transaction.
- FABRIC communicates with an underlying payment and fulfillment system, if applicable.
- the USER may interact with FABRIC via the GLUI to visualize pricing scenarios and implications.
- the USER may establish billing constraints to manage their ASP expenditures.
- FABRIC accumulates weighted scoring of PCDs and CORs, with which it can make calculated determinations, such as what content or value-add streams to offer, and when. Furthermore, the AUTHOR and SUPPLEMENTAL CONTENT PROVIDER can utilize this data to improve their services.
- SUPPLEMENTAL CONTENT PROVIDERS may associate new content, such as audio-visual streams or information, to the title's PCDs and CORs by utilizing the AUTHORING SYSTEM. This access may or may not correspond to commercial licensing, and may or may not correspond to permissions to access the PCDs and CORs.
- the AUTHOR or SUPPLEMENTAL CONTENT PROVIDER may find it necessary to create new PCDs or CORs. Access control, commercial constraints, and copy protection are articulated with the new authoring of the content.
- the content creation process is identical to the process for authoring standalone content, except that content streams are associated to external titles and reside within FABRIC.
- the USER can subscribe to SUPPLEMENTAL CONTENT PROVIDER offerings, via the GLUI, which works in conjunction with FABRIC to negotiate licensing constraints. This content generally, but not necessarily, remains within FABRIC to be streamed on demand to the USER.
- the USER can take part in community-based functionality.
- This functionality is enabled by databases and directories within FABRIC to create an annotation server, as well as by an annotation module which may be distinct from, or integrated within, the GLUI.
- USERs can find other online USERs, such as those viewing the same content.
- This community participation can include public and private viewing sessions wherein a designated pilot USER may drive a synchronized viewing experience, including whiteboard interactivity.
- This community participation may involve the public or private posting of annotations, as well as the reception of public and private USER-provided annotations attributed to particular titles. Thus, as the appropriate segments of the title become active, the related annotations become visible.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US932346 | 1992-08-19 | ||
US09/932,346 US6744729B2 (en) | 2001-08-17 | 2001-08-17 | Intelligent fabric |
US09/932,217 US20030043191A1 (en) | 2001-08-17 | 2001-08-17 | Systems and methods for displaying a graphical user interface |
US932217 | 2001-08-17 | ||
US932345 | 2001-08-17 | ||
US09/932,345 US20030041159A1 (en) | 2001-08-17 | 2001-08-17 | Systems and method for presenting customizable multimedia presentations |
US09/932,344 US20040205648A1 (en) | 2001-08-17 | 2001-08-17 | Systems and methods for authoring content |
US932344 | 2001-08-17 | ||
PCT/US2002/026251 WO2003017059A2 (en) | 2001-08-17 | 2002-08-15 | Intelligent fabric |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1423769A2 true EP1423769A2 (en) | 2004-06-02 |
Family
ID=27506013
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP02759393A Withdrawn EP1423769A2 (en) | 2001-08-17 | 2002-08-15 | Intelligent fabric |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1423769A2 (en) |
JP (1) | JP2005500769A (en) |
AU (1) | AU2002324732A1 (en) |
WO (4) | WO2003017122A1 (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080079690A1 (en) * | 2006-10-02 | 2008-04-03 | Sony Ericsson Mobile Communications Ab | Portable device and server with streamed user interface effects |
CN1937623A (en) * | 2006-10-18 | 2007-03-28 | 华为技术有限公司 | Method and system for controlling network business |
US7870272B2 (en) | 2006-12-05 | 2011-01-11 | Hewlett-Packard Development Company L.P. | Preserving a user experience with content across multiple computing devices using location information |
US20090164452A1 (en) * | 2007-12-21 | 2009-06-25 | Espial Group Inc. | Apparatus and mehtod for personalization engine |
CN105700767B (en) * | 2014-11-28 | 2018-12-04 | 富泰华工业(深圳)有限公司 | The stacked display system of file and method |
CN105487920A (en) * | 2015-10-12 | 2016-04-13 | 沈阳工业大学 | Ant colony algorithm based optimization method for real-time task scheduling of multi-core system |
WO2018035117A1 (en) * | 2016-08-19 | 2018-02-22 | Oiid, Llc | Interactive music creation and playback method and system |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682326A (en) * | 1992-08-03 | 1997-10-28 | Radius Inc. | Desktop digital video processing system |
US5760767A (en) * | 1995-10-26 | 1998-06-02 | Sony Corporation | Method and apparatus for displaying in and out points during video editing |
US5848396A (en) * | 1996-04-26 | 1998-12-08 | Freedom Of Information, Inc. | Method and apparatus for determining behavioral profile of a computer user |
US6434530B1 (en) * | 1996-05-30 | 2002-08-13 | Retail Multimedia Corporation | Interactive shopping system with mobile apparatus |
US6021403A (en) * | 1996-07-19 | 2000-02-01 | Microsoft Corporation | Intelligent user assistance facility |
JPH1066008A (en) * | 1996-08-23 | 1998-03-06 | Kokusai Denshin Denwa Co Ltd <Kdd> | Moving image retrieving and editing device |
US6006241A (en) * | 1997-03-14 | 1999-12-21 | Microsoft Corporation | Production of a video stream with synchronized annotations over a computer network |
US6301586B1 (en) * | 1997-10-06 | 2001-10-09 | Canon Kabushiki Kaisha | System for managing multimedia objects |
US6363411B1 (en) * | 1998-08-05 | 2002-03-26 | Mci Worldcom, Inc. | Intelligent network |
US6067565A (en) * | 1998-01-15 | 2000-05-23 | Microsoft Corporation | Technique for prefetching a web page of potential future interest in lieu of continuing a current information download |
WO1999060504A1 (en) * | 1998-05-15 | 1999-11-25 | Unicast Communications Corporation | A technique for implementing browser-initiated network-distributed advertising and for interstitially displaying an advertisement |
US6154771A (en) * | 1998-06-01 | 2000-11-28 | Mediastra, Inc. | Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively intiated retrospectively |
US6119147A (en) * | 1998-07-28 | 2000-09-12 | Fuji Xerox Co., Ltd. | Method and system for computer-mediated, multi-modal, asynchronous meetings in a virtual space |
US6411946B1 (en) * | 1998-08-28 | 2002-06-25 | General Instrument Corporation | Route optimization and traffic management in an ATM network using neural computing |
US6385619B1 (en) * | 1999-01-08 | 2002-05-07 | International Business Machines Corporation | Automatic user interest profile generation from structured document access information |
US6466980B1 (en) * | 1999-06-17 | 2002-10-15 | International Business Machines Corporation | System and method for capacity shaping in an internet environment |
US6542295B2 (en) * | 2000-01-26 | 2003-04-01 | Donald R. M. Boys | Trinocular field glasses with digital photograph capability and integrated focus function |
- 2002
- 2002-08-15 WO PCT/US2002/026250 patent/WO2003017122A1/en not_active Application Discontinuation
- 2002-08-15 WO PCT/US2002/026251 patent/WO2003017059A2/en active Application Filing
- 2002-08-15 JP JP2003521906A patent/JP2005500769A/en active Pending
- 2002-08-15 AU AU2002324732A patent/AU2002324732A1/en not_active Abandoned
- 2002-08-15 EP EP02759393A patent/EP1423769A2/en not_active Withdrawn
- 2002-08-15 WO PCT/US2002/026252 patent/WO2003017082A1/en not_active Application Discontinuation
- 2002-08-15 WO PCT/US2002/026318 patent/WO2003017119A1/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See references of WO03017059A3 * |
Also Published As
Publication number | Publication date |
---|---|
JP2005500769A (en) | 2005-01-06 |
WO2003017059A8 (en) | 2004-04-22 |
WO2003017082A1 (en) | 2003-02-27 |
WO2003017122A1 (en) | 2003-02-27 |
WO2003017059A3 (en) | 2003-10-30 |
WO2003017119A1 (en) | 2003-02-27 |
AU2002324732A1 (en) | 2003-03-03 |
WO2003017059A2 (en) | 2003-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6744729B2 (en) | Intelligent fabric | |
US20050182852A1 (en) | Intelligent fabric | |
US20030041159A1 (en) | Systems and method for presenting customizable multimedia presentations | |
US10609447B2 (en) | Method of unscrambling television content on a bandwidth | |
US11468917B2 (en) | Providing enhanced content | |
US20030043191A1 (en) | Systems and methods for displaying a graphical user interface | |
EP2433423B1 (en) | Media content retrieval system and personal virtual channel | |
JP2005130087A (en) | Multimedia information apparatus | |
EP1423769A2 (en) | Intelligent fabric | |
Papadimitriou et al. | Integrating Semantic Technologies with Interactive Digital TV | |
Thomas et al. | FascinatE D3. 1.1 Survey of metadata and knowledge for automated scripting | |
Royer et al. | Automatic generation of explicitly embedded advertisement for interactive TV: concept and system architecture | |
UPC et al. | Survey of Metadata and Knowledge for Automated Scripting | |
Gerfelder et al. | An Open Architecture and Realization for the Integration of Broadcast Digital Video and Personalized Online Media |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012
| 17P | Request for examination filed | Effective date: 20040302
| AK | Designated contracting states | Kind code of ref document: A2; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LI LU MC NL PT SE SK TR
| AX | Request for extension of the european patent | Extension state: AL LT LV MK RO SI
| RIN1 | Information on inventor provided before grant (corrected) | Inventor name: PATTON II, FREDERICK, JOSEPH; Inventor name: TINSLEY, DAVID
| RIC1 | Information provided on ipc code assigned before grant | Ipc: 7G 06F 13/00 A
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20090302