WO2003017082A1 - System and method for processing media-file in graphical user interface - Google Patents
System and method for processing media-file in graphical user interface
- Publication number
- WO2003017082A1 (PCT/US2002/026252)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- gui
- content
- media file
- user
- audio
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/756—Media network packet handling adapting media to device capabilities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/765—Media network packet handling intermediate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/61—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/30—Definitions, standards or architectural aspects of layered protocol stacks
- H04L69/32—Architecture of open systems interconnection [OSI] 7-layer type protocol stacks, e.g. the interfaces between the data link level and the physical level
- H04L69/322—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions
- H04L69/329—Intralayer communication protocols among peer entities or protocol data unit [PDU] definitions in the application layer [OSI layer 7]
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/102—Programmed access in sequence to addressed parts of tracks of operating record carriers
- G11B27/105—Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/50—Network services
- H04L67/60—Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
- H04L67/63—Routing a service request depending on the request content or context
Definitions
- This invention relates to authoring systems and processes supporting a graphical user interface (GUI).
- GUI graphical user interface
- The communications industry has traditionally included a number of media, including television, cable, radio, periodicals, compact discs (CDs) and digital versatile discs (DVDs).
- One over-arching goal for the communications industry is to provide relevant information upon demand by a user. For example, television, cable and radio broadcasters and Web-casters transmit entertainment, news, educational programs, and presentations such as movies, sport events, or music events that appeal to as many people as possible.
- A number of file structures are used today to store time-based media: audio formats such as AIFF, video formats such as AVI, and streaming formats such as RealMedia. They differ at least in part because of their different focus and applicability. Some of these formats are sufficiently widely accepted, broad in their application, and relatively easy to implement that they are used not only for content delivery but also as interchange formats; the QuickTime file format is one such example.
- The QuickTime format is used today by many web sites serving time-based data; in many authoring environments, including professional ones; and on many multimedia CD-ROM (e.g., DVD or CD-I) titles.
- The QuickTime media layer supports the relatively efficient display and management of general multimedia data, with an emphasis on time-based material (video, audio, video and audio, motion graphics/animation, etc.).
- The media layer uses the QuickTime file format as the storage and interchange format for media information.
- The architectural capabilities of the layer are generally broader than the existing implementations, and the file format is capable of representing more information than is currently demanded by the existing QuickTime implementations.
- The QuickTime file format has structures to represent the temporal behavior of general time-based streams, a concept which covers the time-based emission of network packets as well as the time-based local presentation of multimedia data. Given the capabilities and flexibility provided by time-based media formats, it is desirable to provide a user interface with suitable functionality and flexibility for playback and/or other processing of time-based media in such formats.
- Prior user interfaces for controlling the presentation of time-based media include user interfaces for the RealPlayers from RealNetworks of Seattle, Washington, user interfaces for the QuickTime MoviePlayers from Apple Computer, Inc. of Cupertino, Calif., and user interfaces for the Windows Media Players from Microsoft Corporation of Redmond, Wash. Also, there are a number of time-based media authoring systems which allow the media to be created and edited, such as Premiere from Adobe Systems of San Jose, Calif.
- A time bar may be displayed on a window with controls for playback on the same window. While these controls are readily visible and available to a user, a large number of controls on a window causes the window to appear complex and tends to intimidate a novice user.
- Some prior user interfaces include the ability to select, for presentation, certain chapters or sections of a media. LaserDisc players typically include this capability which may be used when the media is segmented into chapters or sections. A user may be presented with a list of chapters or sections and may select a chapter or section from the list. When this list contains a large number of chapters or sections, the user may scroll through the list but the speed of scrolling is fixed at a single, predetermined rate. Thus, the user's ability to scroll through a list of chapters is limited in these prior user interfaces.
- US Patent No. 6,262,724 shows a time-based media player display window for displaying, controlling, and/or otherwise processing time-based media data.
- The time-based media player, which is typically displayed as a window on a display of a computer or other digital processing system, includes a number of display and control functions for processing time-based media data, such as a QuickTime movie.
- The player window 200 may be "closed" using a close box (e.g., the user may "click" on this box to close the window by positioning a cursor on the box and depressing and releasing a button, such as a mouse's button, while the cursor remains positioned on the close box).
- The media player includes a movie display window 202 for displaying a movie or other images associated with time-based media.
- A time/chapter display and control region of the media player provides functionality for displaying and/or controlling time associated with a particular time-based media file (e.g., a particular movie processed by the player).
- A time-based media file may be sub-indexed into "chapters" or sections which correspond to time segments of the time-based media file, and which may also be titled. As such, a user may view or select a time from which, or a time segment in which, to play back a time-based media file.
- A method for interacting with a user through a graphical user interface (GUI) for a device includes receiving a media file representative of the GUI, the media file containing a plurality of GUI streams; determining hardware resources available to the device; selecting one or more GUI streams based on the available hardware resources; rendering the GUI based on the selected one or more GUI streams; detecting a user interaction with the GUI; and refreshing the GUI in accordance with the user interaction.
- The refreshing of the GUI can include receiving a second media file representative of a second GUI, and rendering the second GUI on the screen.
- The media file can be a time-based media file such as an MPEG file or a QuickTime file.
- The media file can be stored at a remote location accessible through a data processing network, or can be stored on a machine-readable medium at a local location.
- The media file can be sent from a remote data processing system in response to a selection of an icon on the GUI associated with the media file.
- The media file can be played back in response to selection of the media icon associated with the media file.
- The media file can be one of video data, audio data, visual data, and a combination of audio and video data.
- The method can include dynamically generating customized audio or video content according to the user's preferences; merging the dynamically generated customized audio or video content with the selected audio or video content; and displaying the customized audio or video content as the GUI.
- The method can include registering content with a server.
- The method can also include annotating the content with scene information.
- The user's behavior can be correlated with the scene information.
- Additional audio or video content can be correlated with an annotation such as a scene annotation.
- The scene information can include one or more of the following: background music, location, set props, and objects corresponding to brand names.
- Customized advertisements can be added to the customized video content.
- A presentation context descriptor and a semantic descriptor can be generated.
- Customized content can be provided to a viewer by archiving the viewer's behavior on a server coupled to a wide area network and collecting the viewer's preferences over time; receiving a request for selected audio or video content; dynamically generating customized audio or video content according to the viewer's preferences; merging the dynamically generated customized audio or video content with the selected audio or video content; and displaying the customized audio or video content to the viewer.
- The system combines the advantages of traditional media with the Internet in an efficient manner so as to provide text, images, sound, and video on demand in a simple, intuitive manner.
- Other advantages and features will become apparent from the following description, including the drawings and claims.
- Fig. 1 shows a computer-implemented process supporting interactions with a user through a graphical user interface (GUI) for a device.
- Fig. 2A shows an exemplary application for supporting the GUI on top of an operating system.
- Fig. 2B shows an exemplary operating system that directly supports the GUI.
- Fig. 3 shows one embodiment of a fabric for supporting customizable presentations.
- Fig. 4 illustrates a process for displaying content using an MPEG-4 browser.
- Fig. 1 shows a computer-implemented process 10 supporting interactions with a user through a graphical user interface (GUI) for a device.
- The device can be a desktop computer, a digital television, a handheld computer, a cellular telephone, or a suitable mobile computer, among others.
- The GUI is specified by a media file, such as an MPEG-4 file, for example.
- The media file includes a plurality of streams which are selected based on hardware characteristics of the device. For instance, a desktop computer can have a high-resolution display and a large amount of buffer memory, while a handheld computer can have a small monochrome display with a small buffer memory.
- Accordingly, one or more streams may be selected for rendering the GUI.
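- The hardware-based stream selection described above can be sketched as follows. This is an illustrative sketch only: the stream descriptors, thresholds, and function names are hypothetical and not part of the MPEG-4 specification.

```python
# Hypothetical sketch: choosing which GUI streams of a media file to
# render based on the device's hardware resources.
from dataclasses import dataclass

@dataclass
class GuiStream:
    name: str
    min_width: int       # minimum display width the stream targets
    needs_color: bool    # whether the stream assumes a color display
    buffer_bytes: int    # decode buffer the stream requires

def select_streams(streams, display_width, has_color, buffer_bytes):
    """Keep only the streams this device can actually render."""
    return [s for s in streams
            if display_width >= s.min_width
            and (has_color or not s.needs_color)
            and buffer_bytes >= s.buffer_bytes]

streams = [
    GuiStream("hires-color", 1024, True, 8_000_000),
    GuiStream("lowres-mono", 160, False, 64_000),
]

# A desktop keeps both candidate streams; a monochrome handheld with a
# small buffer keeps only the low-resolution stream.
desktop = select_streams(streams, 1600, True, 16_000_000)
handheld = select_streams(streams, 160, False, 128_000)
```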
- The media file defines the compositional layout for the GUI, such as multiple windows or event-specific popups; certain content meant to be displayed in a windowed presentation can make use of the popups, for example.
- The GUI content is arranged with regard to layout, sequence, and navigational flow.
- Various navigational interactivity can be specified in the GUI content, for example anchors (clickable targets), forms, alternate tracks and context menus, virtual presence (VRML-like navigation), and an interactive stop mode, where playback breaks periodically pending user interaction.
- The file also defines and associates context menus with contextual descriptors, specifying the hierarchical positioning of each context menu entry, its description, and one or more of the following end actions: local-offline, remote, and transitional (if remote is defined).
- The process 10 includes receiving a media file representative of the GUI, the media file containing a plurality of GUI streams (step 12).
- The method determines hardware resources available to the device (step 14) and selects one or more GUI streams based on the available hardware resources (step 16).
- The GUI is rendered based on the selected one or more GUI streams (step 18).
- The method detects a user interaction with the GUI, such as a user selection of a button, for example (step 20). Based on the user selection, the method refreshes the screen in accordance with the user interaction (step 22).
- The refreshing of the screen can include receiving a second media file representative of a second GUI, and rendering the second GUI on the screen.
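- Steps 12 through 22 of process 10 can be sketched as a simple loop. The sketch below is hypothetical: media files, devices, and interactions are modeled as plain dicts, and none of the names come from the MPEG-4 or QuickTime specifications.

```python
# Minimal sketch of the Fig. 1 process: receive a media file describing
# the GUI, select streams the device can support, render, then refresh
# when an interaction supplies a second media file for a second GUI.

def run_gui(media_file, device, interactions):
    """Render a GUI from a media file, then apply a sequence of user
    interactions; returns the log of rendered stream sets."""
    rendered = []
    while media_file is not None:
        # Steps 14/16: select streams within the device's resources.
        selected = [s["name"] for s in media_file["streams"]
                    if s["min_memory"] <= device["memory"]]
        rendered.append(selected)          # step 18: render the GUI
        # Steps 20/22: the next interaction may yield a second media
        # file representative of a second GUI, rendered on refresh.
        media_file = interactions.pop(0) if interactions else None
    return rendered

main_menu = {"streams": [{"name": "menu-video", "min_memory": 4},
                         {"name": "menu-audio", "min_memory": 1}]}
settings = {"streams": [{"name": "settings-panel", "min_memory": 1}]}

# A low-memory device drops "menu-video" but still renders both GUIs.
log = run_gui(main_menu, {"memory": 2}, [settings])
```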
- The media file can be a time-based media file such as an MPEG file or a QuickTime file.
- The media file can be stored at a remote location accessible through a data processing network, or can be stored on a machine-readable medium at a local location.
- The media file can be sent from a remote data processing system in response to a selection of an icon on the GUI associated with the media file.
- The media file can be played back in response to selection of the media icon associated with the media file.
- The media file can be one of video data, audio data, visual data, and a combination of audio and video data.
- The process can include dynamically generating customized audio or video content according to the user's preferences; merging the dynamically generated customized audio or video content with the selected audio or video content; and displaying the customized audio or video content as the GUI.
- The method can include registering content with a server.
- The method can also include annotating the content with scene information.
- The user's behavior can be correlated with the scene information.
- Additional audio or video content can be correlated with an annotation such as a scene annotation.
- The scene information can include one or more of the following: background music, location, set props, and objects corresponding to brand names.
- Customized advertisements can be added to the customized video content.
- A presentation context descriptor and a semantic descriptor can be generated.
- Customized content can be provided to a viewer by archiving the viewer's behavior on a server coupled to a wide area network and collecting the viewer's preferences over time; receiving a request for selected audio or video content; dynamically generating customized audio or video content according to the viewer's preferences; merging the dynamically generated customized audio or video content with the selected audio or video content; and displaying the customized audio or video content to the viewer.
- Fig. 2A shows an exemplary application for supporting the GUI on top of an operating system.
- Fig. 2B shows an exemplary operating system that directly supports the GUI.
- An application (such as a browser) runs on top of an operating system such as Windows, OS X, Linux, or Unix and renders a time-based media file such as an MPEG-4 file.
- The file is parsed into elements to be displayed, and the browser makes OS calls to render the elements of the MPEG-4 file. If the operating system is Windows, for example, the browser makes calls to the Windows graphics display kernel to render the parsed MPEG-4 elements.
- An exemplary GUI is discussed next.
- The GUI is displayed by an application such as an MPEG-4 enabled browser.
- A presentation consists of a number of elementary streams (ESs) representing audio, video, text, graphics, program controls and associated logic, composition information (i.e., the Binary Format for Scenes, BiFS), and purely descriptive data in which the application conveys presentation context descriptors (PCDs).
- Streams are demultiplexed before being passed to a decoder. The additional streams noted below provide perspective (multi-angle) alternatives for video, or language alternatives for audio and text.
- The following table shows each ES broken into access units (AUs), decoded, then prepared for composition or transmission.
| Access units | Decoder | Action | Content elementary stream |
| --- | --- | --- | --- |
| AUn … AU2 AU1 | video | decode, scene composition | video base layer |
| AUn … AU2 AU1 | video | decode, scene composition | video enhancement layers |
| AUn … AU2 AU1 | video | decode, scene composition | additional video base layers |
| AUn … AU2 AU1 | video | decode, scene composition | additional video enhancement layers |
| AUn … AU2 AU1 | audio | decode, scene composition | audio |
| AUn … AU2 AU1 | text | decode, scene composition | text overlay |
| AUn … AU2 AU1 | text | decode, scene composition | additional text overlays |
- A timeline indicates the progression of the scene.
- The content streams render the presentation proper, while presentation context descriptors reside in companion streams. Each descriptor indicates a start and end time code, and pieces of context may freely overlap.
- The presentation context is attributed to a particular ES, and each ES may or may not have a contextual description. Presentation context for different ESs may reside in the same stream or in different streams.
- Each presentation descriptor has a start and an end flag, with a zero for both indicating a point in between. Whether or not descriptor information is repeated in each access unit corresponds to the random-access characteristics of the associated content stream. For instance, predictive and bi-directional frames of MPEG video are not randomly accessible, as they depend upon frames outside themselves; in such cases, PCD information need not be repeated.
- A PCD is either absolute, meaning its context is always active when its temporal definition is valid, or conditional, in which case it is active only upon user selection.
- The PCD refers to presentation content (not context) to jump to, enabling contextual navigation.
- The conditional context may also be regarded as interactive context.
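- The descriptor behavior above can be sketched compactly. This is an illustrative model, not the binary PCD syntax: the field names and the query function are hypothetical.

```python
# Hypothetical model of presentation context descriptors (PCDs): each
# carries start/end time codes, descriptors may freely overlap, and a
# PCD is either absolute (active whenever its temporal definition is
# valid) or conditional (active only upon user selection).
from dataclasses import dataclass

@dataclass
class PCD:
    label: str
    start: float                # start time code, in seconds
    end: float                  # end time code, in seconds
    conditional: bool = False   # True means interactive context

def active_context(pcds, t, selected=()):
    """Return labels of PCDs active at time t; conditional PCDs count
    only if the user has selected them."""
    return [p.label for p in pcds
            if p.start <= t <= p.end
            and (not p.conditional or p.label in selected)]

pcds = [PCD("beach-scene", 0.0, 30.0),
        PCD("surfboard-ad", 10.0, 20.0, conditional=True)]
```

At t = 15 the absolute descriptor is active on its own; the overlapping conditional descriptor joins it only once the user selects it.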
- A GUI harness is provided that eliminates the distinction between player and content.
- The GUI harness provides a general-purpose user interface mechanism rather than a traditional multimedia playback interface, as the latter is not appropriate for all types of content.
- The GUI harness creates a flexible GUI framework that defers the definition of appropriate, content-specific interactive controls to the content itself. These definitions are constructed via a compact grammar.
- The method to execute a stream is the most powerful and flexible command, because it facilitates dynamic injection of BiFS commands, such as replace or modify scene.
- The visual appearance and positioning of these controls are implemented as graphical content (synthetic AVOs) within a dedicated BiFS-anim stream.
- A mask may enable non-rectangular control objects. For example, utilizing alpha blending, a semi-transparent overlay could depict the graphical interaction primitives. Furthermore, an invisible or visible container primitive can be utilized to group a number of interaction primitives.
- The GUI harness makes the GUI a part of the content, enabling a content-specific user interface.
- The GUI harness allows content behavior to utilize a rich event model, such as responding to keyboard and directional input device events.
- Graphical interaction primitives can be contextually triggered, such as in response to a directional input device event (for example, ReceiveDirectionalInputDeviceFocus), so that the controls are depicted only in specific circumstances, in contrast to depicting them all the time in a dedicated window. The GUI harness must provide this level of control because content may vary dramatically from the traditional audio-video clips utilized by existing multimedia playback systems. These graphical interaction controls might also be overridden depending on the content segment, with some controls omitted and others added. For instance, the content may be more information- and control-based, as well as more event-driven than sequentially oriented. The types of input devices present are not important.
- The content refers to these devices abstractly, such as via directional input device focus, whereas the device in question might turn out to be a mouse, game controller, or stylus.
- Abstract specifiers are used as well, such as directional input device buttons 1 and 2 to represent the equivalent of the right and left mouse buttons.
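- A platform-specific layer of the GUI harness would map concrete device events to these abstract specifiers; a minimal sketch follows. The table entries and names are hypothetical examples, not part of the harness grammar.

```python
# Sketch of abstract input specifiers: content refers to input devices
# abstractly ("directional input device button 1"), and the platform-
# specific implementation maps concrete (device, action) pairs onto
# those abstractions.
ABSTRACT_EVENTS = {
    # concrete (device, action)     -> abstract specifier
    ("mouse", "left_button"):     "directional_input_button_2",
    ("mouse", "right_button"):    "directional_input_button_1",
    ("stylus", "tap"):            "directional_input_button_1",
    ("game_controller", "a"):     "directional_input_button_1",
    ("mouse", "hover"):           "receive_directional_input_focus",
}

def abstract_event(device, action):
    """Translate a concrete input event into the abstract specifier
    that the content's event model understands (None if unmapped)."""
    return ABSTRACT_EVENTS.get((device, action))
```

With this indirection the same content reacts identically whether the device turns out to be a mouse, a game controller, or a stylus.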
- Each group of graphical interaction controls might have a keyboard shortcut mapped by the platform-specific implementation of the GUI harness.
- A specify-context-menu event is similarly mapped, such as to gain access to contextual information.
- For non-content-specific controls, such as audio volume level and color control, the GUI harness provides its own access mechanism.
- Figs. 2A and 2B illustrate the functional layering of the GUI harness subsystem.
- Exemplary operating system (OS) communication layers include a hardware abstraction layer 52 that rests above the hardware 50.
- A kernel 54 runs within a system services and device layer 56.
- A virtual machine (VM) layer 58, such as the Java VM, runs on top of the layer 56.
- A platform interface glue layer 60 resides above the VM layer 58, and a platform abstraction layer 62 resides between the glue layer 60 and the GUI harness 64.
- The platform abstraction layer 62 provides an interface to the event model and to the streamable GUI model consisting of the generation of graphical interactive primitives.
- The OS appears as special, privileged interactive content to the GUI harness, enabling its own look-and-feel and behavior to be maintained.
- Visual items utilized by the OS GUI can be dynamically prepared just as they would be in the traditional, native circumstance.
- Input device events are trapped by the GUI harness.
- The harness may process these events on behalf of the content's abstracted event specification, subject to operating system overrides.
- The interactivity provides a thin wrapping of the native OS event model.
- While traditional content might employ static navigation, the OS presentation employs a dynamic event model. For instance, at boot-up, as the harness is loaded, the OS may query desktop objects and then dynamically stream a visual representation to the harness, including interactive information that will map and trigger events to be caught and interpreted naturally by the host OS. This could be a JPEG, for instance, as well as an animated object represented by the VRML-derived syntax of BiFS.
- The OS communicates with the harness via content streams, such as to display message boxes. These streams contain BiFS information concerning interactive objects, such as a dialog box tab.
- The OS provides hooks for its UI primitives, so that it may trap its GUI API requests and translate them into streamable content for the harness. Interactions with operating system AVO objects, which may overlay those of independent content in certain instances, are trapped by the GUI harness and relayed to the OS to perform its implementation-specific event processing.
- GUI Harness running as an OS application
- A user is running an operating system such as Windows, OS X, Linux, Unix, Windows CE, or PalmOS, and wishes to run an ASP-hosted word processing application via the GUI harness.
- Document files may be located on the local device or on remote storage.
- The user runs the GUI harness application.
- The user logs into the ASP network for authentication and authorization purposes, and is admitted.
- The network could either be selected via a query of available services, or specified manually by the user.
- LDAP is likely the enabling architecture behind service lookup and access.
- The user selects a word processing application.
- Application information pertaining to licensing, including pricing and billing information, is always available through the harness application, and is likely accessible in the directory in which the user browses for available applications. If the user does not have rights to the application, they must register and fulfill any initial licensing requirements before being granted access.
- The user's request initiates an ASP session, and application data is streamed to the client.
- The typical ASP application will be of the thin-client variety, in which the server conducts the bulk of application processing, but fatter clients are possible.
- Executable code may be acquired from an elementary stream, or may already reside on storage accessible to the device, such as a hard drive. The distinction of whether code is run remotely or locally is gracefully handled through the Application Definable Event Model supported by the [iSC] GUI harness.
- Local code is associated with GUI elements via IDs, so that the harness may route processing. This also makes caching possible, such that remote routines may be cached locally for some period of time through the harness.
- Each interactive primitive is articulated via BiFS data and must carry a unique identifier.
- When triggered, a primitive generates an event consisting of the object's unique ID and event-specific data.
- An application proxy runs in the background to receive messages. If the event is handled by a local routine, the message is sent to the application proxy; otherwise, it is sent over the wire.
- The harness treats both cases identically.
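- The local-versus-remote routing just described can be sketched as a small dispatcher. This is a hypothetical illustration: the routing function, event shape, and handler names are assumptions, not the harness's actual API.

```python
# Sketch of event routing in the GUI harness: each BiFS interaction
# primitive carries a unique ID. IDs bound to local routines are
# dispatched to the application proxy; all other events go over the
# wire to the remote (ASP-hosted) application. From the content's
# point of view, both cases are handled identically.

def make_router(local_routines, send_remote):
    """local_routines maps object IDs to callables handled by the
    application proxy; send_remote forwards everything else."""
    def route(event):
        handler = local_routines.get(event["object_id"])
        if handler is not None:
            return handler(event)     # handled by the application proxy
        return send_remote(event)     # sent over the wire
    return route

remote_log = []
route = make_router(
    {"font-combo": lambda e: f"local:{e['data']}"},
    lambda e: remote_log.append(e["object_id"]) or "remote",
)
```

A font-selection event with a locally registered ID is resolved by the proxy; an unregistered ID is transparently forwarded to the server.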
- Application data may involve executable code, such as Java routines, which is loaded into the memory space of the application proxy.
- Application data may involve audio, visual, and data streams (including BiFS information) pertaining to GUI resources.
- Visual content includes stills, natural video, and synthetic video. While a word processing application primarily displays a text window, text and a cursor, it may have combo boxes and menu entries as well.
- The combo box exists as an element within the BiFS scene, and is overlaid with an additional text object, such as one corresponding to a font name, also a part of the scene composition.
- The combo box has a selection arrow which, when triggered via an input device, displays a window of font names and a scroll bar. This window is already part of the BiFS scene, but is hidden until triggered.
- The text in the window is accessed as a still image, and an additional scene element is the highlight visual object.
- Local streaming is specified for the text window within the scene.
- The stream passes through the Delivery Multimedia Integration Framework (DMIF), as discussed in the MPEG-4 Systems specification.
- The GUI harness renders the text as a still, which serves as an off-screen double buffer, and is streamed, being displayed with the next access unit.
- The standard MPEG-4 mechanism then operates to deliver the streaming data.
- The synchronized delivery of streaming information from source to destination, exploiting different QoS as available from the network, is specified in terms of the aforementioned synchronization layer and a delivery layer containing a two-layer multiplexer.
- A "TransMux" (Transport Multiplexing) layer models the layer that offers transport services matching the requested QoS. Only the interface to this layer is specified by MPEG-4, while the concrete mapping of the data packets and control signaling must be done in collaboration with the bodies that have jurisdiction over the respective transport protocol.
- Any suitable existing transport protocol stack such as (RTP)/UDP/IP, (AAL5)/ATM, or MPEG-2's Transport Stream over a suitable link layer may become a specific TransMux instance.
- Using the MPEG-4 System Layer Model, it is possible to:
  - identify access units, transport timestamps and clock reference information, and identify data loss;
  - optionally interleave data from different elementary streams into FlexMux streams;
  - convey control information to indicate the required QoS for each elementary stream and FlexMux stream;
  - translate such QoS requirements into actual network resources;
  - associate elementary streams to media objects;
  - convey the mapping of elementary streams to FlexMux and TransMux channels.
- Parts of the control functionalities are available only in conjunction with a transport control entity like the DMIF framework.
- the user observes a scene that is composed following the design of the scene's author. Depending on the degree of freedom allowed by the author, however, the user may be able to interact with the scene.
- Operations a user may be allowed to perform include: changing the viewing/listening point of the scene, e.g. by navigating through it; dragging objects in the scene to a different position; triggering a cascade of events by clicking on a specific object, e.g. starting or stopping a video stream; and selecting the desired language when multiple language tracks are available. More complex kinds of behavior can also be triggered, e.g. a virtual phone rings, the user answers, and a communication link is established.
- an Application Definable Event Model enables communication between the user and the application through the user interface hosted within the GUI harness.
- the harness utilizes metadata relating to the BiFS elements to indicate events and event context. For instance, when the user selects a different font from the combo box, the harness has the scene information needed to update the combo box itself. The event information must still be passed to the application to indicate the new font selection. The application responds by streaming data on behalf of the main text window object; this stream is likely to be generated and processed locally, and it passes through DMIF.
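The event flow just described (a local scene update by the harness plus a relay of the event context to the application, which regenerates a stream) can be sketched as follows; all class, field, and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class EventRelay {
    // Event context: which scene object fired, what kind of event, what value.
    record UiEvent(String objectId, String type, String value) {}

    static List<String> sceneLog = new ArrayList<>();  // local BiFS scene updates
    static List<UiEvent> appQueue = new ArrayList<>(); // events relayed to the app

    // Harness-side dispatch: update the scene element it owns, then relay.
    static void dispatch(UiEvent e) {
        sceneLog.add("update " + e.objectId() + " -> " + e.value());
        appQueue.add(e);
    }

    // Application-side handling: produce a new stream for the text window,
    // which would then be routed through DMIF.
    static String handle(UiEvent e) {
        return "stream(textWindow, font=" + e.value() + ")";
    }

    public static void main(String[] args) {
        dispatch(new UiEvent("fontCombo", "select", "Helvetica"));
        System.out.println(handle(appQueue.get(0)));
    }
}
```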
- when running as an OS application, the harness renders or processes elementary stream data, utilizing BiFS information. Whether content is rendered, such as video, or processed, such as event information, a CODEC performs the work.
- the CODEC may produce information that is passed to the harness to be relayed elsewhere, thus corresponding to a back channel.
- the harness, via its DMIF implementation, knows how to talk to a remote application or a local application.
- a chief feature of the harness is the dynamic creation of data streams. In the case that the harness is implemented in Java, this necessitates a Java Virtual Machine. In any event, the harness runs as a typical computer application.
- Fig. 2B shows a second embodiment of a GUI harness 84 embedded in an operating system 72 that runs above hardware 70.
- the OS 72 can be JavaOS, for example.
- a platform interface glue layer 76 resides above the OS layer 72, and a platform abstraction layer 82 resides between the glue layer 76 and the GUI harness 84.
- GUI Harness running as the OS GUI
- the user is operating a device whose GUI consists of the GUI Harness application.
- An OS GUI, in some cases referred to as a desktop, is essentially a privileged application through which the user may interact with the OS and through which other applications may be run and displayed.
- Any abstraction layer an operating system employs to interface with its GUI must interface with the harness' Platform Abstraction Layer. This implementation corresponds to the Platform Interface Glue. Together they represent the harness' operating-system interface.
- Much of the implementation specific code corresponds to drawing code and networking code.
- the core code is already implemented by virtue of a JVM or JavaOS.
- the OS GUI itself is authored as elementary streams corresponding to graphic representations. These streams are articulated by scene compositions through the BiFS layer.
- An icon for example, is a still image object the user can interact with via the scene composition.
- the harness relays the object's id and any event-specific information. For instance, a folder icon being double-clicked could correspond to a graphical interaction with the icon and the passing of a message to the OS, which would respond with BiFS commands to update the scene and display the folder's contents.
- the harness' drawing API would be used to create a dynamic stream and route it through its DMIF implementation. Outside of this, the harness as an OS GUI works in the same manner as an ordinary application hosted on a given operating system.
- All rendering passes through a dynamic stream creation interface, which is then passed to DMIF, after which it is displayed as BiFS-enhanced audio-visual content.
- All processed information streams are passed from the CODEC to DMIF, and then from DMIF to the operating system via a back channel.
- Input controls are streamed as animated AVO objects to the harness. This is critical when hosting program content. It even accommodates features such as drag-and-drop, in which the size and/or position of the AVO object is manipulated by the user. These objects may define audio feedback to the user.
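The drag-and-drop behavior above can be sketched as a position update on an AVO followed by the scene command the harness would stream. The `REPLACE ... BY` string loosely mirrors the textual BIFS update syntax, but the node and field names here are illustrative assumptions.

```java
public class AvoDrag {
    // A scene object the user can manipulate: id plus position and size.
    static final class Avo {
        final String id; int x, y, w, h;
        Avo(String id, int x, int y, int w, int h) {
            this.id = id; this.x = x; this.y = y; this.w = w; this.h = h;
        }
    }

    // User drags the object; return the scene update the harness streams.
    static String drag(Avo a, int dx, int dy) {
        a.x += dx; a.y += dy;
        return "REPLACE " + a.id + ".translation BY " + a.x + " " + a.y;
    }

    public static void main(String[] args) {
        Avo icon = new Avo("folderIcon", 10, 10, 32, 32);
        System.out.println(drag(icon, 25, 5)); // REPLACE folderIcon.translation BY 35 15
    }
}
```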
- the notion of a window can be quite conceptual.
- the specification of multiple windows may be used to combine different types of content within a presentation, for example.
- the player is faced with integrating multiple presentations, which may contain regularly time-varying content, such as audio and/or video, as well as non-temporally driven content, such as input forms.
- the platform-specific implementation may handle the windows as it may, such as by only displaying one window at a time, or displaying the windows on the same viewing device or across multiple viewing devices.
- Executable code may be conveyed in elementary streams.
- the program may be loaded in RAM as normal, including on demand.
- the OS executes code as normal, but short circuits its native display mechanisms by conveying equivalent display as content to the GUI. This method supports traditional code delivery and execution, whether platform-specific C code, or portable java code.
- the GUI harness implementation can provide a much more radical means of program development and execution.
- a program's user interface may be authored as content, in which event-specific interaction with the UI is communicated to the executable module, such as a remote server hosted program on an ASP platform.
- the user interface is streamed as content to be handled on wide ranging device types.
- the use of alternate streams could provide alternate representations, such as text-based, simple 2D-based, and so forth.
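The alternate-representation selection described above might be sketched as a simple filter over a device profile reported through DMIF. The profile fields and thresholds are assumptions chosen for illustration; the text only names the representation tiers.

```java
public class AltStreamPicker {
    // A minimal device profile; real DMIF capability data would be richer.
    record Profile(int screenWidth, int bandwidthKbps) {}

    // Choose which authored stream variant to deliver to this device.
    static String pick(Profile p) {
        if (p.bandwidthKbps() < 64 || p.screenWidth() < 160) return "text-based";
        if (p.bandwidthKbps() < 512) return "simple-2D";
        return "full-AV";
    }

    public static void main(String[] args) {
        System.out.println(pick(new Profile(1024, 1500))); // full-AV
        System.out.println(pick(new Profile(120, 32)));    // text-based
    }
}
```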
- the above GUI can be automatically customized to the user's preferences.
- the automatic customization is done by detecting relationships among a user viewing content in particular context(s).
- the user interacts with a viewing system through the GUI described above.
- a default GUI is streamed and played to the user.
- the user can view the default stream, or can interact with the content by navigating the GUI, for example by clicking an icon or a button.
- the user interest exhibited implicitly in his or her selection and request is captured as the context.
- the actions taken by the user through the user interface are captured, and over time, the behavior of a particular user can be predicted based on the context.
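A minimal sketch of this capture-and-predict loop, assuming frequency counting as the prediction method (the text does not specify one):

```java
import java.util.HashMap;
import java.util.Map;

public class ContextTracker {
    // context -> (action -> count): the accumulated usage history.
    static Map<String, Map<String, Integer>> log = new HashMap<>();

    // Record one user action taken in a given context.
    static void capture(String context, String action) {
        log.computeIfAbsent(context, c -> new HashMap<>())
           .merge(action, 1, Integer::sum);
    }

    // Predict the action the user is most likely to take in this context.
    static String predict(String context) {
        Map<String, Integer> actions = log.getOrDefault(context, Map.of());
        return actions.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .map(Map.Entry::getKey).orElse("none");
    }

    public static void main(String[] args) {
        capture("evening", "sports-channel");
        capture("evening", "sports-channel");
        capture("evening", "news");
        System.out.println(predict("evening")); // sports-channel
    }
}
```

A production system would weight recency and use richer context keys, but the shape of the loop is the same: capture interest implicitly, then bias the GUI toward the prediction.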
- the user can be presented with additional information associated with a particular program.
- Fig. 3 shows an exemplary system that captures the context.
- the system also stores content and streams the content, as modified in real-time by the context, to the user on- demand.
- the system includes a switching fabric 50 connecting a plurality of local networks 60.
- the switching fabric 50 provides an interconnection architecture which uses multiple stages of switches to route transactions between a source address and a destination address of a data communications network.
- the switching fabric 50 includes multiple switching devices and is scalable because each of the switching devices of the fabric 50 includes a plurality of network ports and the number of switching devices of the fabric 50 may be increased to increase the number of local network 60 connections for the switch.
- the fabric 50 includes all networks which subscribe and are connected to each other and includes wireless networks, cable television networks, WAN's such as Exodus, Quest, DBN.
- Computers 62 are connected to a network hub 64 that is connected to a switch 56, which can be an Asynchronous Transfer Mode (ATM) switch, for example.
- Network hub 64 functions to interface an ATM network to a non-ATM network, such as an Ethernet LAN, for example.
- Computer 62 is also directly connected to ATM switch 66.
- Two ATM switches are connected to WAN 68.
- the WAN 68 can communicate with a switching fabric such as a cross-bar network or a banyan network, among others.
- the switching fabric is the combination of hardware and software that moves data coming in to a network node out by the correct port (door) to the next node in the network.
- Each server 62 includes a content database that can be customized and streamed on-demand to the user. Its central repository stores information about content assets, content pages, content structure, links, and user profiles, for example. Each local server 62 also captures usage information for each user, and based on data gathered over a period, can predict user interests based on historical usage information. Based on the predicted user interests and the content stored in the server, the server can customize the content to the user interest.
- the local server 62 can be a scalable compute farm to handle increases in processing load. After customizing content, the local server 62 communicates the customized content to the requesting viewing terminal 70.
- the viewing terminals 70 can be a personal computer (PC), a television (TV) connected to a set-top box, a TV connected to a DVD player, a PC-TV, a wireless handheld computer or a cellular telephone.
- the program to be displayed may be transmitted as an analog signal, for example according to the NTSC standard utilized in the United States, or as a digital signal modulated onto an analog carrier, or as a digital stream sent over the Internet, or digital data stored on a DVD.
- the signals may be received over the Internet, cable, or wireless transmission such as TV, satellite or cellular transmissions.
- a viewing terminal 70 includes a processor that may be used solely to run a browser GUI and associated software, or the processor may be configured to run other applications, such as word processing, graphics, or the like.
- the viewing terminal's display can be used as both a television screen and a computer monitor.
- the terminal will include a number of input devices, such as a keyboard, a mouse and a remote control device, similar to the one described above. However, these input devices may be combined into a single device that inputs commands with keys, a trackball, pointing device, scrolling mechanism, voice activation or a combination thereof.
- the terminal 70 can include a DVD player that is adapted to receive an enhanced DVD that, in combination with the local server 62, provides a custom rendering based on the content 2 and context 3. Desired content can be stored on a disc such as DVD and can be accessed, downloaded, and/or automatically upgraded, for example, via downloading from a satellite, transmission through the internet or other on-line service, or transmission through another land line such as coax cable, telephone line, optical fiber, or wireless technology.
- An input device can be used to control the terminal and can be a remote control, keyboard, mouse, a voice activated interface or the like.
- the terminal includes a video capture card connected to either live video, baseband video, or cable.
- the video capture card digitizes a video image and displays the video image in a window on the monitor.
- the terminal is also connected to a local server 62 over the Internet using a modem.
- the modem can be a 56K modem, a cable modem, or a DSL modem.
- the user connects to a suitable Internet service provider (ISP), which in turn is connected to the backbone of the network 60 such as the Internet, typically via a Tl or a T3 line.
- the ISP communicates with the viewing terminals 70 using a protocol such as point to point protocol (PPP) or a serial line Internet protocol (SLIP) 100 over one or more media or telephone networks, including landline, wireless line, or a combination thereof.
- a similar PPP or SLIP layer is provided to communicate with the ISP.
- a PPP or SLIP client layer communicates with the PPP or SLIP layer.
- a network aware application such as a browser receives and formats the data received over the Internet in a manner suitable for the user.
- the computers communicate using the functionality provided by Hypertext Transfer Protocol (HTTP).
- the World Wide Web, or simply the "Web", includes all the servers adhering to this standard which are accessible to clients via Uniform Resource Locators (URLs).
- communication can be provided over a communication medium.
- the client and server may be coupled via Serial Line Internet Protocol (SLIP) or TCP/IP connections for high-capacity communication.
- Active within the viewing terminal is a user interface provided by the browser that establishes the connection with the server 62 and allows the user to access information.
- the user interface is a GUI that supports Moving Picture Experts Group-4 (MPEG-4), a standard used for coding audio-visual information (e.g., movies, video, music) in a digital compressed format.
- the major advantage of MPEG compared to other video and audio coding formats is that MPEG files are much smaller for the same quality, owing to its high-quality compression techniques.
- the GUI can be embedded in the operating system such as the Java operating system. More details on the GUI are disclosed in the copending application entitled "SYSTEMS AND METHODS FOR DISPLAYING A GRAPHICAL USER INTERFACE", the content of which is incorporated by reference.
- the terminal 70 is an intelligent entertainment unit that plays DVD.
- the terminal 70 monitors usage pattern entered through the browser and updates the local server 62 with user context data.
- the local server 62 can modify one or more objects stored on the DVD, and the updated or new objects can be downloaded from a satellite, transmitted through the internet or other on-line service, or transmitted through another land line such as coax cable, telephone line, optical fiber, or wireless technology back to the terminal.
- the terminal 70 in turn renders the new or updated object along with the other objects on the DVD to provide on-the-fly customization of a desired user view.
- the system handles MPEG (Moving Picture Experts Group) streams between a server and one or more clients using the switches.
- the client is the terminal that actually delivers the final rendered presentation.
- the server broadcasts channels or addresses which contain streams. These channels can be accessed by a terminal, which is a member of a WAN, using IP protocol.
- the switch which sits at the gateway for a given WAN, allocates bandwidth to receive the channel requested.
- the initial Channel contains BiFS Layer information, which the Switch can parse, processing the DMIF information to determine the hardware profile for its network and the addresses for the AVO's needed to complete the defined presentation.
- the Switch passes the AVO's and the BiFS Layer information to a Multiplexor for final compilation prior to broadcast on to the WAN.
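The gateway behavior above (parse the BiFS channel, check the DMIF constraints of the local network, pass surviving AVOs to the multiplexer) might be sketched as follows. The `Avo` record and the bandwidth filter are illustrative assumptions; real conformation would consult the full DMIF hardware profile.

```java
import java.util.List;
import java.util.stream.Collectors;

public class GatewaySwitch {
    // An AVO address plus the bandwidth it needs, as parsed from the BiFS layer.
    record Avo(String address, int requiredKbps) {}

    // Keep only the AVOs the network profile can carry, then "multiplex" them.
    static String conform(List<Avo> avos, int networkKbps) {
        return avos.stream()
            .filter(a -> a.requiredKbps() <= networkKbps)
            .map(Avo::address)
            .collect(Collectors.joining("+", "mux(", ")"));
    }

    public static void main(String[] args) {
        List<Avo> avos = List.of(new Avo("audio@ch1", 64),
                                 new Avo("video@ch2", 1200),
                                 new Avo("hd-overlay@ch3", 8000));
        // A 144k-class wireless gateway would drop the HD overlay entirely.
        System.out.println(conform(avos, 1440)); // mux(audio@ch1+video@ch2)
    }
}
```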
- the data streams (elementary streams, ES) that result from the coding process can be transmitted or stored separately, and need only to be composed so as to create the actual multimedia presentation at the receiver side.
- MPEG-4 relationships between the audio-visual components that constitute a scene are described at two main levels.
- the Binary Format for Scenes (BIFS) describes the spatio-temporal arrangements of the objects in the scene. Viewers may have the possibility of interacting with the objects, e.g. by rearranging them on the scene or by changing their own point of view in a 3D virtual environment.
- the scene description provides a rich set of nodes for 2-D and 3-D composition operators and graphics primitives.
- Object Descriptors (ODs) define the relationship between the Elementary Streams pertinent to each object (e.g., the audio and the video stream of a participant in a videoconference). ODs also provide additional information such as the URL needed to access the Elementary Streams, the characteristics of the decoders needed to parse them, intellectual property rights, and others.
- Media objects may need streaming data, which is conveyed in one or more elementary streams.
- An object descriptor identifies all streams associated with one media object. This allows handling hierarchically encoded data as well as the association of meta-information about the content (called 'Object Content Information') and the intellectual property rights associated with it.
- Each stream itself is characterized by a set of descriptors for configuration information, e.g., to determine the required decoder resources and the precision of encoded timing information.
- the descriptors may carry hints about the Quality of Service (QoS) the stream requests for transmission (e.g., maximum bit rate, bit error rate, priority, etc.).
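The QoS hints listed above can be modeled as a small descriptor. The admission check is an illustrative assumption: the text only says the descriptors carry hints, so how a network acts on them is sketched here, not specified.

```java
public class QosDescriptor {
    // The hints named in the text: maximum bit rate, bit error rate, priority.
    record Hint(int maxBitRateKbps, double maxBitErrorRate, int priority) {}

    // Can the network admit this stream given residual capacity and channel loss?
    static boolean admissible(Hint h, int capacityKbps, double channelBer) {
        return h.maxBitRateKbps() <= capacityKbps && channelBer <= h.maxBitErrorRate();
    }

    public static void main(String[] args) {
        Hint video = new Hint(1500, 1e-6, 1);
        System.out.println(admissible(video, 2000, 1e-7)); // true
        System.out.println(admissible(video, 1000, 1e-7)); // false
    }
}
```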
- Synchronization of elementary streams is achieved through time stamping of individual access units within elementary streams.
- the synchronization layer manages the identification of such access units and the time stamping. Independent of the media type, this layer allows identification of the type of access unit (e.g., video or audio frames, scene description commands) in elementary streams, recovery of the media object's or scene description's time base, and it enables synchronization among them.
- the syntax of this layer is configurable in a large number of ways, allowing use in a broad spectrum of systems.
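A sketch of the synchronization layer's role described above, assuming a shared recovered time base and a composition time stamp on each access unit; the compositor releases whichever unit is due, regardless of which elementary stream it came from. Class names are hypothetical.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

public class SyncLayer {
    // One access unit: owning stream, composition time stamp, payload.
    record AccessUnit(String streamId, long cts, String payload) {}

    // Units wait here ordered by composition time on the shared time base.
    static PriorityQueue<AccessUnit> queue =
        new PriorityQueue<>(Comparator.comparingLong(AccessUnit::cts));

    static void deliver(AccessUnit au) { queue.add(au); }

    // Release every unit whose composition time has been reached.
    static String composeUpTo(long clock) {
        StringBuilder out = new StringBuilder();
        while (!queue.isEmpty() && queue.peek().cts() <= clock)
            out.append(queue.poll().streamId()).append(' ');
        return out.toString().trim();
    }

    public static void main(String[] args) {
        deliver(new AccessUnit("audio", 40, "a0"));
        deliver(new AccessUnit("video", 33, "v0"));
        deliver(new AccessUnit("video", 66, "v1"));
        System.out.println(composeUpTo(40)); // video audio
    }
}
```

This is the essence of inter-stream synchronization here: time stamps, not stream order, decide presentation order.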
- the synchronized delivery of streaming information from source to destination, exploiting different QoS as available from the network, is specified in terms of the synchronization layer and a delivery layer containing a two-layer multiplexer.
- the first multiplexing layer is managed according to the DMIF specification, part 6 of the MPEG-4 standard. (DMIF stands for Delivery Multimedia Integration Framework)
- This multiplex may be embodied by the MPEG-defined FlexMux tool, which allows grouping of Elementary Streams (ESs) with a low multiplexing overhead. Multiplexing at this layer may be used, for example, to group ESs with similar QoS requirements, reduce the number of network connections, or reduce the end-to-end delay.
- the "TransMux" (Transport Multiplexing) layer models the layer that offers transport services matching the requested QoS.
- the BiFS Layer contains the necessary DMIF information needed to determine the configuration of the content. This can be looked at as a series of criteria filters, which address the relationships defined in the BiFS Layer for AVO relationships and priority.
- DMIF and BiFS determine the capabilities of the device accessing the channel where the application resides, which can then determine the distribution of processing power between the server and the terminal device.
- Intelligence built into the fabric will allow the entire network to utilize predictive analysis to configure itself to deliver QoS.
- the switch 16 can monitor data flow to ensure no corruption happens.
- the switch also parses the ODs and the BiFSs to regulate which elements it passes to the multiplexer and which it does not. This will be determined based on the type of network the switch gates and on the DMIF information.
- This "Content Conformation" by the switch happens at gateways to a given WAN such as a Nokia 144k 3-G Wireless Network. These gateways send the multiplexed data to switches at its respective POP's where the database is installed for customized content interaction and "Rules Driven" Function Execution during broadcast of the content.
- the BiFS can contain interaction rules that query a field in a database.
- the field can contain scripts that execute a series of "Rules".
- This rules driven system can customize a particular object, for instance, customizing a generic can to reflect a Coke can, in a given scene.
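The rules-driven customization above (e.g. a generic can rendered as a Coke can) can be sketched as a lookup over a rule table keyed on viewer attributes. The `Rule` fields are illustrative assumptions; the text places the rules in a database field queried via BiFS.

```java
import java.util.List;

public class RulesEngine {
    // One rule: if the viewer attribute matches, swap a scene object's asset.
    record Rule(String attribute, String value, String objectId, String replacement) {}

    // Resolve which asset to render for a scene object, given one viewer attribute.
    static String apply(List<Rule> rules, String attribute, String value, String objectId) {
        for (Rule r : rules)
            if (r.attribute().equals(attribute) && r.value().equals(value)
                    && r.objectId().equals(objectId))
                return r.replacement();
        return "generic";   // no rule matched: leave the generic object as authored
    }

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("region", "atlanta", "bottleLabel", "coke"),
            new Rule("region", "purchase-ny", "bottleLabel", "pepsi"));
        System.out.println(apply(rules, "region", "atlanta", "bottleLabel")); // coke
    }
}
```

Chained If/Then rules over a full user profile, as described later in the text, generalize this single-attribute lookup.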
- Each POP sends its current load status and QoS configuration to the gateway hub, where predictive analysis is performed to handle load balancing of data streams and processor assignment to deliver consistent QoS for the entire network on the fly.
- the result is that content defines the configuration of the network once its BiFS Layer is parsed and checked against the available DMIF Configuration and network status.
- the switch also periodically takes snapshots of traffic and processor usage. The information is archived and the latest information is correlated with previously archived data for usage patterns that are used to predict the configuration of the network to provide optimum QOS.
- the network is constantly re-configuring itself.
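The snapshot-archive-predict cycle above might be sketched with a simple per-hour average over archived samples; the averaging model is an assumption, since the text does not fix a prediction method, only that archived usage patterns drive the forecast.

```java
import java.util.ArrayList;
import java.util.List;

public class LoadPredictor {
    // archive.get(hour) -> past load snapshots taken at that hour of day.
    static List<List<Integer>> archive = new ArrayList<>();
    static { for (int h = 0; h < 24; h++) archive.add(new ArrayList<>()); }

    // Periodic snapshot of traffic at a given hour.
    static void snapshot(int hour, int loadMbps) { archive.get(hour).add(loadMbps); }

    // Correlate the latest hour with archived data: forecast capacity to provision.
    static int predict(int hour) {
        List<Integer> samples = archive.get(hour);
        if (samples.isEmpty()) return 0;
        return (int) samples.stream().mapToInt(Integer::intValue).average().getAsDouble();
    }

    public static void main(String[] args) {
        snapshot(20, 800); snapshot(20, 900); snapshot(20, 1000);
        System.out.println(predict(20)); // 900
    }
}
```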
- the content on the fabric can be categorized into two high-level groups:
- A/V (Audio and Video) content
- Programs can be created which contain AVO's (Audio Video Objects), their relationships and behaviors (defined in the BiFS Layer), as well as DMIF (Delivery Multimedia Integration Framework) information for optimization of the content on various platforms.
- Content can be broadcast in an "Unmultiplexed" fashion by allowing the GLUI to access a channel which contains the raw BiFS Layer.
- the BiFS Layer will contain the necessary DMIF information needed to determine the configuration of the content. This can be looked at as a series of criteria filters, which address the relationships defined in the BiFS Layer for AVO relationships and priority.
- a person using a connected wireless PDA, on a 144k, 3- G WAN can request access to a given channel, for instance channel 345.
- the request transmits from the PDA over the wireless network and channel 345 is accessed.
- Channel 345 contains BiFS Layer information regarding a specific show. Within the BiFS Layer is the DMIF information, which says... If this content is being played on a
- the channels where these AVO's may be defined can be contained in the BiFS Layer, or can be extensible by having the BiFS layer access a field on a related RR.UE database which supports the content. This will allow the elements of a program to be modified over time.
- a practical example of this system's application is as follows: a broadcaster transmitting content with a generic bottle can receive advertisement money from Coke and another payment from Pepsi. The actual label on the bottle will represent the advertiser when a viewer from a given area watches the content.
- the database can contain and command rules for far more complex behavior. If/Then statements relative to the user's profile and interaction with the content can produce customized experiences for each individual viewer on the fly.
- ASP applications running on the fabric represent the other type of content. These applications can be developed to run on the servers and broadcast their interface to the GLUI of the connected devices. The impact of being able to write an application, such as a word processor, that can send its interface in, for example, compressed JPEG format to the end user's terminal device, such as a wirelessly connected PDA, is considerable.
- FIG. 4 illustrates a process 450 for displaying the content using an MPEG-4 browser.
- a user initiates playback of content (step 452).
- the browser/player then demultiplexes any multiplexed streams (step 454) and parses a BiFS elementary stream (step 456).
- the user then fulfills any necessary licensing requirements to gain access if the content is protected; this could be ongoing in the event of new content acquisitions (step 458).
- the browser/player invokes appropriate decoders (step 460) and begins playback of content (step 462).
- the browser/player continues to send contextual feedback to system (step 464), and the system updates user preferences and feedback into the database (step 466).
- the system captures transport operations such as fast forward and rewind and generates context information from them, as they are an aspect of how users interact with the title; for instance, which segments users tend to skip, and which they tend to watch repeatedly, are of interest to the system.
- the system logs the user and stores the contextual feedback, applying any relative weights assigned in the Semantic Map and utilizing the Semantic Relationships table for indirect assignments; an intermediate table may be employed for optimized resolution. The assignment of relative weights is reflected in the active user state information.
- the system sends new context information as available, such as new context menu items (step 468).
- the system may utilize rules-based logic, such as for sending customer-focused advertisements; unless there are multiple windows, this would tend to occur during the remote content acquisition process (step 470).
- the system then handles requests for remote content (step 472).
- After viewing the content, the user responds to any interactive selections that halt playback, such as menu screens that lack a timeout and default action (step 474). If live streams are paused, the system performs time-shifting if possible (step 476). The user may activate the context menu at any time and make an available selection (step 478). The selection may be subject to parental controls specified in the configuration of the player or browser.
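The playback steps of Fig. 4 can be sketched as a linear driver whose trace mirrors the step sequence; the step names follow the text and the bodies are placeholders, not an actual player implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class PlaybackProcess {
    // Run the Fig. 4 sequence; licensing (step 458) only applies to
    // protected content, and may recur on new content acquisitions.
    static List<String> run(boolean protectedContent) {
        List<String> trace = new ArrayList<>();
        trace.add("initiate playback");           // step 452
        trace.add("demultiplex streams");         // step 454
        trace.add("parse BiFS stream");           // step 456
        if (protectedContent)
            trace.add("fulfill licensing");       // step 458
        trace.add("invoke decoders");             // step 460
        trace.add("begin playback");              // step 462
        trace.add("send contextual feedback");    // step 464
        return trace;
    }

    public static void main(String[] args) {
        System.out.println(run(true));
    }
}
```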
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/932,346 US6744729B2 (en) | 2001-08-17 | 2001-08-17 | Intelligent fabric |
US09/932,217 US20030043191A1 (en) | 2001-08-17 | 2001-08-17 | Systems and methods for displaying a graphical user interface |
US09/932,344 | 2001-08-17 | ||
US09/932,345 | 2001-08-17 | ||
US09/932,217 | 2001-08-17 | ||
US09/932,345 US20030041159A1 (en) | 2001-08-17 | 2001-08-17 | Systems and method for presenting customizable multimedia presentations |
US09/932,346 | 2001-08-17 | ||
US09/932,344 US20040205648A1 (en) | 2001-08-17 | 2001-08-17 | Systems and methods for authoring content |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003017082A1 true WO2003017082A1 (en) | 2003-02-27 |
Family
ID=27506013
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/026250 WO2003017122A1 (en) | 2001-08-17 | 2002-08-15 | Systems and method for presenting customizable multimedia |
PCT/US2002/026251 WO2003017059A2 (en) | 2001-08-17 | 2002-08-15 | Intelligent fabric |
PCT/US2002/026252 WO2003017082A1 (en) | 2001-08-17 | 2002-08-15 | System and method for processing media-file in graphical user interface |
PCT/US2002/026318 WO2003017119A1 (en) | 2001-08-17 | 2002-08-15 | Systems and methods for authoring content |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/026250 WO2003017122A1 (en) | 2001-08-17 | 2002-08-15 | Systems and method for presenting customizable multimedia |
PCT/US2002/026251 WO2003017059A2 (en) | 2001-08-17 | 2002-08-15 | Intelligent fabric |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/026318 WO2003017119A1 (en) | 2001-08-17 | 2002-08-15 | Systems and methods for authoring content |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1423769A2 (ja) |
JP (1) | JP2005500769A (ja) |
AU (1) | AU2002324732A1 (ja) |
WO (4) | WO2003017122A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008040565A1 (en) * | 2006-10-02 | 2008-04-10 | Sony Ericsson Mobile Communications Ab | Portable device and server with streamed user interface effects |
WO2008070502A3 (en) * | 2006-12-05 | 2008-08-28 | Palm Inc | Preserving a user experience with content across multiple computing devices using location information |
TWI574198B (zh) * | 2014-11-28 | 2017-03-11 | 鴻海精密工業股份有限公司 | 檔案層疊式顯示系統及方法 |
WO2018035117A1 (en) * | 2016-08-19 | 2018-02-22 | Oiid, Llc | Interactive music creation and playback method and system |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1937623A (zh) * | 2006-10-18 | 2007-03-28 | 华为技术有限公司 | 一种控制网络业务的方法及系统 |
US20090164452A1 (en) * | 2007-12-21 | 2009-06-25 | Espial Group Inc. | Apparatus and mehtod for personalization engine |
CN105487920A (zh) * | 2015-10-12 | 2016-04-13 | 沈阳工业大学 | 基于蚁群算法的多核系统实时任务调度的优化方法 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682326A (en) * | 1992-08-03 | 1997-10-28 | Radius Inc. | Desktop digital video processing system |
US5760767A (en) * | 1995-10-26 | 1998-06-02 | Sony Corporation | Method and apparatus for displaying in and out points during video editing |
US6026389A (en) * | 1996-08-23 | 2000-02-15 | Kokusai, Denshin, Denwa, Kabushiki Kaisha | Video query and editing system |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5848396A (en) * | 1996-04-26 | 1998-12-08 | Freedom Of Information, Inc. | Method and apparatus for determining behavioral profile of a computer user |
US6434530B1 (en) * | 1996-05-30 | 2002-08-13 | Retail Multimedia Corporation | Interactive shopping system with mobile apparatus |
US6021403A (en) * | 1996-07-19 | 2000-02-01 | Microsoft Corporation | Intelligent user assistance facility |
US6006241A (en) * | 1997-03-14 | 1999-12-21 | Microsoft Corporation | Production of a video stream with synchronized annotations over a computer network |
US6301586B1 (en) * | 1997-10-06 | 2001-10-09 | Canon Kabushiki Kaisha | System for managing multimedia objects |
US6363411B1 (en) * | 1998-08-05 | 2002-03-26 | Mci Worldcom, Inc. | Intelligent network |
US6067565A (en) * | 1998-01-15 | 2000-05-23 | Microsoft Corporation | Technique for prefetching a web page of potential future interest in lieu of continuing a current information download |
WO1999060504A1 (en) * | 1998-05-15 | 1999-11-25 | Unicast Communications Corporation | A technique for implementing browser-initiated network-distributed advertising and for interstitially displaying an advertisement |
US6154771A (en) * | 1998-06-01 | 2000-11-28 | Mediastra, Inc. | Real-time receipt, decompression and play of compressed streaming video/hypervideo; with thumbnail display of past scenes and with replay, hyperlinking and/or recording permissively intiated retrospectively |
US6119147A (en) * | 1998-07-28 | 2000-09-12 | Fuji Xerox Co., Ltd. | Method and system for computer-mediated, multi-modal, asynchronous meetings in a virtual space |
US6411946B1 (en) * | 1998-08-28 | 2002-06-25 | General Instrument Corporation | Route optimization and traffic management in an ATM network using neural computing |
US6385619B1 (en) * | 1999-01-08 | 2002-05-07 | International Business Machines Corporation | Automatic user interest profile generation from structured document access information |
US6466980B1 (en) * | 1999-06-17 | 2002-10-15 | International Business Machines Corporation | System and method for capacity shaping in an internet environment |
US6542295B2 (en) * | 2000-01-26 | 2003-04-01 | Donald R. M. Boys | Trinocular field glasses with digital photograph capability and integrated focus function |
2002
- 2002-08-15 WO PCT/US2002/026250 patent/WO2003017122A1/en not_active Application Discontinuation
- 2002-08-15 WO PCT/US2002/026251 patent/WO2003017059A2/en active Application Filing
- 2002-08-15 JP JP2003521906A patent/JP2005500769A/ja active Pending
- 2002-08-15 AU AU2002324732A patent/AU2002324732A1/en not_active Abandoned
- 2002-08-15 EP EP02759393A patent/EP1423769A2/en not_active Withdrawn
- 2002-08-15 WO PCT/US2002/026252 patent/WO2003017082A1/en not_active Application Discontinuation
- 2002-08-15 WO PCT/US2002/026318 patent/WO2003017119A1/en not_active Application Discontinuation
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5682326A (en) * | 1992-08-03 | 1997-10-28 | Radius Inc. | Desktop digital video processing system |
US5760767A (en) * | 1995-10-26 | 1998-06-02 | Sony Corporation | Method and apparatus for displaying in and out points during video editing |
US6026389A (en) * | 1996-08-23 | 2000-02-15 | Kokusai Denshin Denwa Kabushiki Kaisha | Video query and editing system |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008040565A1 (en) * | 2006-10-02 | 2008-04-10 | Sony Ericsson Mobile Communications Ab | Portable device and server with streamed user interface effects |
WO2008070502A3 (en) * | 2006-12-05 | 2008-08-28 | Palm Inc | Preserving a user experience with content across multiple computing devices using location information |
US7870272B2 (en) | 2006-12-05 | 2011-01-11 | Hewlett-Packard Development Company L.P. | Preserving a user experience with content across multiple computing devices using location information |
TWI574198B (zh) * | 2014-11-28 | 2017-03-11 | Hon Hai Precision Industry Co., Ltd. | File cascading display system and method |
WO2018035117A1 (en) * | 2016-08-19 | 2018-02-22 | Oiid, Llc | Interactive music creation and playback method and system |
US11178457B2 (en) | 2016-08-19 | 2021-11-16 | Per Gisle JOHNSEN | Interactive music creation and playback method and system |
Also Published As
Publication number | Publication date |
---|---|
JP2005500769A (ja) | 2005-01-06 |
WO2003017059A8 (en) | 2004-04-22 |
WO2003017122A1 (en) | 2003-02-27 |
WO2003017059A3 (en) | 2003-10-30 |
WO2003017119A1 (en) | 2003-02-27 |
EP1423769A2 (en) | 2004-06-02 |
AU2002324732A1 (en) | 2003-03-03 |
WO2003017059A2 (en) | 2003-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20030043191A1 (en) | Systems and methods for displaying a graphical user interface | |
US6744729B2 (en) | Intelligent fabric | |
US12080323B2 (en) | Providing enhanced content | |
US20050182852A1 (en) | Intelligent fabric | |
US20030041159A1 (en) | Systems and method for presenting customizable multimedia presentations | |
US8042132B2 (en) | System and method for construction, delivery and display of iTV content | |
CA2738911C (en) | Video branching | |
US20090313122A1 (en) | Method and apparatus to control playback in a download-and-view video on demand system | |
US20090178089A1 (en) | Browsing and viewing video assets using tv set-top box | |
US20040034874A1 (en) | Pop-up PVR advertising | |
KR20090110205A (ko) | Method and apparatus for providing/receiving a user interface | |
WO2001060071A2 (en) | Interactive multimedia user interface using affinity based categorization | |
JP2005130087A (ja) | Multimedia information apparatus | |
US11070890B2 (en) | User customization of user interfaces for interactive television | |
WO2003017082A1 (en) | System and method for processing media-file in graphical user interface | |
CA2321805A1 (en) | Digital interactive delivery system for tv/multimedia/internet | |
Hu et al. | An adaptive architecture for presenting interactive media onto distributed interfaces | |
Rauschenbach et al. | A scalable interactive TV service supporting synchronized delivery over broadcast and broadband networks | |
Papadimitriou et al. | Integrating Semantic Technologies with Interactive Digital TV | |
Kojo | A method to deliver multiple media content for digital television | |
Gerfelder et al. | An Open Architecture and Realization for the Integration of Broadcast Digital Video and Personalized Online Media |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BY BZ CA CH CN CO CR CU CZ DE DM DZ EC EE ES FI GB GD GE GH HR HU ID IL IN IS JP KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NO NZ OM PH PL PT RU SD SE SG SI SK SL TJ TM TN TR TZ UA UG US UZ VC VN YU ZA ZM
Kind code of ref document: A1
Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1
Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
Kind code of ref document: A1
Designated state(s): GH GM KE LS MW MZ SD SL SZ UG ZM ZW AM AZ BY KG KZ RU TJ TM AT BE BG CH CY CZ DK EE ES FI FR GB GR IE IT LU MC PT SE SK TR BF BJ CF CG CI GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |