MXPA00003828A - Emulation of streaming over the internet in a broadcast application - Google Patents

Emulation of streaming over the internet in a broadcast application

Info

Publication number
MXPA00003828A
MXPA00003828A MXPA/A/2000/003828A
Authority
MX
Mexico
Prior art keywords
file
network
station
client
server
Prior art date
Application number
MXPA/A/2000/003828A
Other languages
Spanish (es)
Inventor
Raoul Mallart
Atul Sinha
Original Assignee
Koninklijke Philips Electronics Nv
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics Nv filed Critical Koninklijke Philips Electronics Nv
Publication of MXPA00003828A publication Critical patent/MXPA00003828A/en

Abstract

In a broadcast application on a client-server network, the streaming of animation data over the Internet to a large number of clients is emulated. The animation is regarded as a sequence of states. State information is sent to the clients instead of the graphics data itself. The clients generate the animation data themselves under control of the state information. Streaming is thus accomplished, as well as a broadcast, without running into severe network bandwidth problems.

Description

EMULATION OF STREAMING OVER THE INTERNET IN A BROADCAST APPLICATION
FIELD OF THE INVENTION
The invention relates to the streaming of multimedia files via a network. The invention relates in particular to enabling the emulation of streaming of graphics animation or video over the Internet within a broadcast context.
PRIOR ART
The term "streaming" refers to the transfer of data from a server to a client such that the data can be processed as a steady and continuous stream at the receiving end. Streaming technologies are becoming increasingly important with the growth of the Internet, because most users do not have fast enough access to download large multimedia files that include, for example, graphics animation, audio, video, or a combination thereof. Streaming, however, allows the browser or plug-in to begin processing the data before the entire file has been received. For streaming to work, the client side that receives the file must be able to collect the data and supply it as a steady stream to the application that is processing the data. This means that if the client receives the data faster than required, the excess data needs to be buffered. If, on the other hand, the data does not arrive in time, the presentation of the data will not be smooth. The term "file" is used herein to indicate an entity of related data elements available to a data processing system and capable of being processed as an entity. Within the context of the invention, the term "file" can refer to data generated in real time as well as data retrieved from storage. Among the technologies currently available or under development for the communication of graphics data via the Internet are VRML 97 and MPEG-4. VRML 97 stands for "Virtual Reality Modeling Language", and it is an International Standard file format (ISO/IEC 14772) for describing interactive 3D multimedia content on the Internet. MPEG-4 is an ISO/IEC standard developed by MPEG (the Moving Picture Experts Group). In both standards, the graphics content is structured in a so-called scene graph. The scene graph is a hierarchy of coordinate systems and shapes that collectively describe a graphics world. The topmost item in the scene graph is the world coordinate system. The world coordinate system acts as the parent of one or more child coordinate systems and shapes. These child coordinate systems are, in turn, parents of further child coordinate systems and shapes, and so on. VRML is a file format for describing objects. VRML defines a set of objects useful for creating 3D graphics and multimedia and for building interactive objects and worlds. These objects are called nodes, and they contain elementary data that is stored in fields and events. Typically, the scene graph comprises structural nodes, leaf nodes, interpolator nodes and sensor nodes. Structural nodes define the spatial relationship of objects within a scene. Leaf nodes define the physical appearance of the objects. Interpolator nodes define the animations. Sensor nodes define the user interaction for particular user-input modes. VRML does not directly support streaming of data from a server to a client. Facilities such as synchronization between streams and time stamping, which are essential in streaming, do not exist in VRML. However, VRML has a mechanism that allows external programs to interact with VRML clients. This has been used in sports applications to load animation data into the client. See, for example, "VirtuaLive Soccer" of Orad Hi-Tec Systems, Ltd. at <http://www.virtualive.com>.
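Purely for illustration (this sketch is not part of the original disclosure), the buffering behaviour described above can be pictured as a small client-side buffer that absorbs bursts of incoming data and is drained at a fixed presentation rate; the class and method names below are hypothetical, written here in Java.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Minimal client-side stream buffer: packets may arrive in bursts, the player
    // drains the buffer at a constant rate. If the buffer runs dry the presentation
    // is no longer smooth and the last frame is repeated as a crude concealment.
    final class StreamBuffer<T> {
        private final Queue<T> pending = new ArrayDeque<>();
        private T lastDelivered;

        synchronized void onPacket(T frame) {   // network side: data may arrive early or late
            pending.add(frame);                 // excess data is simply buffered
        }

        synchronized T nextFrame() {            // player side: called at the presentation rate
            T frame = pending.poll();
            if (frame == null) {
                return lastDelivered;           // buffer underrun: repeat the previous frame
            }
            lastDelivered = frame;
            return frame;
        }
    }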
This Web document discusses a process for producing fragments of realistic, animated, three-dimensional graphics that simulate the highlights of a real soccer match, to be sent via the Internet. The system generates content that complements television sports coverage with multimedia-rich Web pages in near real time. In this example, the process works in two steps. First the graphics models of the stadium and of the soccer players are downloaded, together with an external program, in this case a Java Applet. The user can then interact with the external program and request a particular animation. The data of this animation is then downloaded to the client and interacts with the user. In terms of node types, this process first downloads the structural and leaf nodes, and later the interpolator nodes. By changing the set of interpolator nodes, it is possible to execute a different animation sequence. The process used in this example is somewhat equivalent to a one-step process in which the user downloads the complete VRML file that contains all the models (structural nodes) and all the animation data (interpolator nodes). This method leads to long download times before any content can be played by the client. This is frustrating for the user, especially when compared to a TV broadcast where content is instantly available. The other technology introduced above, MPEG-4, defines a binary format for scenes (BIFS) that has a large overlap with VRML 97. MPEG-4, on the other hand, has been designed to support streaming of graphics as well as of video. MPEG-4 defines two server/client protocols for updating and animating scenes: BIFS-Update and BIFS-Anim. Some of the advantages of MPEG-4 over VRML are the coding of the scene description and of the animation data, as well as the integrated streaming capability. The user does not have to wait for the animation data to be downloaded completely. For example, in the soccer-match broadcast application mentioned above, the animation can start as soon as the models of the players and the stadium have been downloaded. MPEG-4 also has the advantage that it is more efficient, owing to its BIFS transport protocol that uses a compressed binary format. Within the context of streaming, the known technologies mentioned above have several limitations with respect to bandwidth usage, concealment or recovery of lost packets, and multi-user interactivity, especially in a broadcast to a large number of clients. As to bandwidth, the complete animation is generated at the server. This results in a large amount of data that needs to be transported over the network, for example the Internet, that connects the client to the server. For example, in the soccer broadcast application mentioned above, 22 soccer players need to be animated. Each animation data point per individual player comprises a position in 3D space and a set of, say, 15 joint rotations to model the player's posture. This represents 63 floating-point values. Assuming that the update rate of the animation is 15 data points per second, a bit rate of 665 Kbps is required. This bit rate can be reduced through compression. Typically, the use of BIFS reduces the bit rate by a factor of 20, giving a bit rate of approximately 33 Kbps.
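For illustration, the 665 Kbps figure can be reproduced under the assumption, not stated above, of 32-bit floating-point values and rotations carried as four components each:

    63 values per player = 3 (3D position) + 15 x 4 (joint rotations)
    63 values x 32 bits x 22 players x 15 updates/s = 665,280 bit/s, i.e. about 665 Kbps
    665 Kbps / 20 (typical BIFS compression factor) gives roughly 33 Kbps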
However, this number does not take into account the overhead of the Internet protocols (RTP, UDP and IP) or additional data types, such as audio. Typical modems currently available commercially in the consumer market, however, have a capacity of 28.8 Kbps or 33.6 Kbps. It is clear that streaming the animation causes a problem for the end user owing to bandwidth limitations. In the case of a broadcast to a large number of clients, say 100,000 clients, the data stream will need to be duplicated at several routers. A router on the Internet determines the next point of the network to which a packet should be sent on its way to its final destination. The router decides how to forward each packet of information based on its current understanding of the state of the networks it is connected to. A router is located at any junction of networks or gateway, including each point of presence on the Internet. It is clear that such a broadcast would lead to an explosion of data that cannot be handled by the Internet. To prevent that from happening, the actual bandwidth needs to be limited to much less than 28.8 Kbps. As to concealing lost packets, a VRML-based system uses reliable protocols (TCP), so packet losses are not a concern there. In the case of MPEG-4, BIFS uses RTP/UDP/IP. Therefore, a mechanism for recovering lost packets is required. In a point-to-point application, retransmission of lost packets can be considered. In a broadcast situation, however, this is more complex. In both cases, however, reliability requires the use of a higher bandwidth (redundancy) or a higher latency (retransmission). As to multi-user interactivity, both VRML and MPEG-4 are essentially based on server-client communication. There are no provisions that allow communication between multiple clients. For more information on VRML see, for example, "VRML Key Concepts", March 5, 1996, at <http://sgi.felk.cvut.cz/~holecek/VRML/concepts.html>, or "Internetwork Infrastructure Requirements for Virtual Environments", D.P. Brutzman et al., January 23, 1996, available at <http://www.stl.nps.navy.mil/~brutzman/vrml/vrml_95.html>. For more information on MPEG-4 see, for example, "Overview of the MPEG-4 Standard", ISO/IEC JTC1/SC29/WG11 N2323, ed. Rob Koenen, July 1998, available at <http://drogo.cselt.stet.it/mpeg/standards/mpeg-4/mpeg-4.htm>.
OBJECT OF THE INVENTION
It is therefore an object of the invention to provide a technology that allows a client to process multimedia data as if it were a steady and continuous stream. It is another object to allow such continuous processing at a large number of clients in a broadcast over the Internet. It should be noted that the problems identified above become more acute in a broadcast application.
BRIEF DESCRIPTION OF THE INVENTION
To this end, the invention provides a method of emulating the streaming of a multimedia file via a network to a receiving station connected to the network. Respective state information descriptive of respective states of the file is provided. The receiving station is enabled to receive the respective state information via the network and to generate the multimedia file locally under control of the respective state information. For an animation broadcast, the invention relates to a method of supplying data via a network to enable the presentation of a graphics animation. Respective state information descriptive of successive respective states of the animation is provided on the network. The respective state information is received via the network. The receiving station is enabled to generate the animation under control of the respective state information upon receipt. In the invention, the multimedia file (an animation, video or audio file) is described as a succession of states. It is this state information that is transmitted to the clients instead of the animation data itself. The term "emulation" therefore emphasizes that the information communicated to the client does not itself have to be streamed. The client generates the data to be played locally, based on the received state information. As a result, the user perceives a steady and continuous stream of data during playback, as if the data were streamed over the network (under optimal conditions). In a preferred embodiment, a shared-object protocol is used to achieve the emulation. Both the server and the client have copies of the entire collection of objects. An object is a data structure that contains state information. Within the context of the virtual soccer match, an object is, for example, a graphic representation of one of the soccer players. The server receives a streamed video file and updates the objects accordingly. It should be noted that MPEG-4 allows the creation of video objects that are processed as an entity. If the server changes the state of such an object, the shared-object protocol causes the client's copy to change accordingly. This is explained in more detail with reference to the drawings.
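As a rough sketch only, and not the actual DIS or ISTP wire format, the shared-object mechanism described above could be pictured as follows in Java; every name here is hypothetical. The owner mutates the state and the protocol layer forwards the change so that every replica of the world model converges.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // A shared object is a data structure holding state information. The owner changes
    // the state; listeners (stand-ins for the network layer) receive the update and
    // apply it to the replicas, keeping all copies of the world model consistent.
    final class SharedObject {
        final String id;                                   // e.g. "player-7"
        final Map<String, Object> state = new HashMap<>();
        private final List<Consumer<Map<String, Object>>> listeners = new ArrayList<>();

        SharedObject(String id) { this.id = id; }

        // Owner side: change the state and notify the protocol layer.
        void set(String field, Object value) {
            state.put(field, value);
            Map<String, Object> update = Map.of(field, value);
            for (Consumer<Map<String, Object>> l : listeners) l.accept(update);
        }

        // Replica side: apply an update received from the network.
        void apply(Map<String, Object> update) { state.putAll(update); }

        void onChange(Consumer<Map<String, Object>> listener) { listeners.add(listener); }
    }

A server-side copy of, say, "player-7" would call set("action", "running"); the client-side replica's apply() then brings that client's world-model copy up to date.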
This state information is at a higher level of abstraction than the animation data itself. For example, in the broadcast application of a soccer match mentioned above, the state information comprises the current positions of the 22 players on the field and parameters that specify their current action (for example, "running", "jumping", etc.). The use of this higher-level information has several advantages, particularly in a broadcast application in which the information is streamed over the Internet to a large audience. The state information as communicated over the Internet is very compact, thus requiring a smaller bandwidth than the animation data itself would if it were streamed. The animation is generated locally from a few parameters. In addition, the refresh rate of the animation data points is lower, because the animation state changes at a slower rate than the animation data itself. This further reduces the bandwidth requirements. In addition, the invention provides better opportunities for recovering or concealing lost packets and for masking network latency jitter. It is easy to interpolate or extrapolate between states and to implement dead-reckoning concepts. User interaction with the animation is more easily programmable owing to the higher level of abstraction. Another advantage is that multi-user interaction is feasible if the clients are allowed to share state information. Yet another advantage is that clients are able to convert the state information into an animation based on their individual processing power, which may differ from client to client. The resources available may differ per client or per group of clients. Within the context of the invention, reference is made to U.S. Patent Application Serial No. 09/053,448 (PHA 23,383) of the same Assignee, entitled "Group video conferencing using a 3D graphics model of the broadcast event", incorporated herein by reference. This document relates to a TV broadcast service to multiple geographically distributed end users. The broadcast service is integrated with a conferencing mode. Upon a certain event in the broadcast, specific groups of end users are switched to a conferencing mode under software control, so that the group is allowed to discuss the event. The conferencing mode is enhanced by a 3D graphics model of the video representation of the event, which is downloaded to the groups. End users are able to interact with the model to discuss alternatives to the event.
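A minimal sketch, again with hypothetical names, of the kind of compact per-player state record the paragraph above describes: a position on the field plus a label for the current action, from which the client synthesizes the actual animation locally.

    // High-level state for one soccer player; a handful of bytes that changes only
    // when the player's situation changes, instead of 63 floats at every animation frame.
    enum Action { RUNNING, JUMPING, SLIDING, IDLE }

    final class PlayerState {
        final int playerId;
        final float x, y, z;     // position in 3D space (units assumed, e.g. metres)
        final Action action;     // drives a locally generated animation cycle

        PlayerState(int playerId, float x, float y, float z, Action action) {
            this.playerId = playerId;
            this.x = x;
            this.y = y;
            this.z = z;
            this.action = action;
        }
    }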
BRIEF DESCRIPTION OF THE DRAWINGS
The invention is explained by way of example with reference to the accompanying drawings, wherein: Figure 1 is a diagram of a VRML client-server system; Figure 2 is a diagram of an MPEG-4-based client-server system; and Figures 3-6 are diagrams of systems according to the invention. Throughout the figures, the same reference numerals indicate similar or corresponding features.
PREFERRED EMBODIMENTS
Figure 1 is a block diagram of a client-server system 100 based on VRML. The system 100 comprises a server 102 coupled to a client 104 via a communication channel 106, here the Internet. The system 100 may comprise more clients, but these are not shown in order not to obscure the drawing. The server 102 comprises a source encoder 108 and a channel encoder 110. The client 104 comprises a channel decoder 112 and a source decoder 114. The source encoder 108 can be regarded as a content-generation tool. For example, it may be a tool that generates VRML animation data from motion-capture devices (not shown) that operate on video. The channel encoder 110 is a subsystem that takes as input the VRML animation generated in the source encoder 108 and transforms it into a form that can be transported over the Internet. The VRML animation data is stored in a file. The transport of this file uses a standard file-transport protocol. At the client 104, the channel decoder is contained in an external program 116. It obtains the animation data from the downloaded file and sends it to a VRML player 118, which performs the function of the source decoder. The function of the source decoder is essentially the management of the scene graph. This server-client communication procedure is not a streaming solution. The VRML specification does not address streaming requirements. Facilities such as synchronization between streams and time stamping, both essential for streaming, do not exist in VRML. Figure 2 is a block diagram of a client-server system 200 based on MPEG-4. The system 200 has a server 202 coupled to a client 204 via a communication channel 206. The server 202 has a source encoder 208 and a channel encoder 210. The client 204 has a channel decoder 212 and a source decoder 214. As mentioned above, MPEG-4 has been designed to support streaming. Among other things, MPEG-4 defines a binary format for scenes (BIFS) that has a large overlap with VRML 97. In addition, MPEG-4 defines two server/client protocols for updating and animating scenes, namely BIFS-Update and BIFS-Anim. The advantages of MPEG-4 over VRML within the context of streaming are the coding of the scene description and of the animation data, as well as the integrated streaming capability. The source encoder 208 is, like the encoder 108, a content-generation tool. The channel encoder 210 is different from the channel encoder 110. It generates a bit stream in the BIFS and BIFS-Anim formats. This bit stream contains the graphics models of the players and the stadium (in the animation of a soccer game), as well as the animation data. However, both systems 100 and 200 have several serious drawbacks when used in an environment for conveying animation to a large number of clients, say 100-100,000 clients. The limitations relate to the use of network bandwidth, the concealment of lost packets and multi-user interactivity, as mentioned above. A preferred embodiment of the invention provides a solution to these problems by emulating streaming using a communication protocol that supports objects shared between an owner of the object and an observer (or listener) of the object. A shared object is a data structure that contains state information. The set of shared objects that defines the entire state is known as a world or world model. The clients and the server each have their own copy of the world model.
For example, an object within the context of a representation of a soccer game is the representation of a soccer player. The state information of the object is then, for example, the position of the soccer player in 3D space, or an action state such as "running" or "jumping" or "sliding" or "falling on the ground, apparently injured, but with a reputation for play-acting". Each shared object is owned by a particular party, for example the server. The owner can change the state information contained in the object. When this occurs, the protocol automatically synchronizes the state information across the network. Such a protocol is referred to hereinafter as a protocol that supports shared objects. The protocol ensures that all copies of the world model remain consistent as the state of the world model evolves. Examples of protocols that can be used for this purpose are DIS (Distributed Interactive Simulation) and ISTP (Interactive Sharing Transfer Protocol). An idea underlying the invention is to describe the animation as a succession of states. For example, in the soccer application, the animation is described as a succession of player positions on the field and action states of the players. As time passes, the state evolves and the protocol synchronizes the state of the world model across the network. This can also be explained in terms of the shared objects. These objects contain the state information that describes the game at a given moment in time. Updating that state information for each object results in the generation of messages that are sent over the network to the clients. Figure 3 is a block diagram of a system 300 according to the invention. The system 300 comprises a server 302 coupled to a client 304 via a network 306. The server 302 comprises a source encoder 308 and a channel encoder 310. The client 304 comprises a channel decoder 312 and a source decoder 314. The server 302 has a copy 316 of a world model and the client 304 has a copy 318 of the world model. Data is streamed to the source encoder 308 at an input 320. The source encoder 308 generates the required state information based on the received input and updates the state of the objects in world-model copy 316 as the streaming process continues. This type of technology is used, for example, by the VirtuaLive Soccer system mentioned above. The channel encoder 310 monitors world-model copy 316 and encodes the state changes of the shared objects. The encoded state changes are sent to the client 304 via the network 306. The channel decoder 312 receives the state changes and updates the local world-model copy 318. The source decoder 314 performs two tasks. First, it generates the animation based on the received state information. Second, the source decoder 314 manages the scene graph according to the animation. The source decoder 314 is now an intelligent component: it performs the animation computation and, in addition, is capable of performing other tasks such as interpolation or extrapolation of the state to conceal lost packets or network latency jitter.
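As an illustration of the interpolation/extrapolation (dead-reckoning) task attributed to the source decoder 314, the following hypothetical sketch estimates a position along one axis from the last two received states; between states it interpolates, and past the newest state it extrapolates to mask a late or lost update.

    // Keeps the last two received states for one coordinate and estimates the value at
    // an arbitrary presentation time. Interpolation smooths between updates; extrapolation
    // hides a missing or delayed update at the cost of a possible small error.
    final class PositionEstimator {
        private float x0, x1;    // previous and latest received positions
        private long t0, t1;     // their timestamps in milliseconds

        void onState(float x, long timestampMs) {
            x0 = x1; t0 = t1;
            x1 = x;  t1 = timestampMs;
        }

        float estimate(long nowMs) {
            if (t1 == t0) return x1;                  // not enough history yet
            float v = (x1 - x0) / (float) (t1 - t0);  // velocity from the last two states
            if (nowMs <= t1) {
                return x0 + v * (nowMs - t0);         // interpolate between known states
            }
            return x1 + v * (nowMs - t1);             // extrapolate (dead reckoning)
        }
    }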
Within this context, reference is made to U.S. Patent Application Serial No. 08/722,414 (PHA 23,155) of the same Assignee, entitled "Multi-player video game with local updates mitigating latency effects", incorporated herein by reference. This reference discusses a system in which multiple users share a virtual environment through an interactive software application. The state changes of a specific user are transmitted to one or more other users depending on the respective relative distances in the virtual environment between the specific user and each of the other users. This conditional transmission reduces message traffic and allows the virtual environment to be scaled practically indefinitely. Reference is also made to U.S. Patent Application Serial No. 08/722,413 (PHA 23,156) of the same Assignee, entitled "Latency effect in multi-player video game reduced by surrogate agent", incorporated herein by reference. This document relates to a data processing system that runs an interactive software application for a competition between two or more users. The system comprises user-interface machines for operation by the respective users. The machines are interconnected via a network. To effectively eliminate latency, a user is represented at the other users' machines by an agent whose reaction to an action of the other user is governed by a rule base stored in the system. Reference is also made to U.S. Patent Application Serial No. 08/994,827 (PHA 23,319) of the same Assignee, incorporated herein by reference, entitled "Diversion agent using cinematographic techniques to mask latency". This document relates to a software agent that is a functional part of an interactive user application running on a data processing system. The agent creates an effect perceptible to the user in order to mask the latency present in the delivery of data to the user. The agent creates the effect employing cinematographic techniques. It should be noted that the copies 316 and 318 of the world model need not be identical, for example in appearance when rendered, as long as an object in one copy of the world model and an object in another copy are treated as shared in the sense that they share state changes. The feasibility and degree of non-identity depend on the application. For example, if one client's user wishes to represent the soccer players as, say, penguins, and another client's user prefers a representation of, say, ballet dancers, the representations at both clients remain consistent throughout the system by means of the shared state changes. As another example, the client 304 may allow the user to enter additional state information to control the rendering of the world model in play. For example, the user can select a particular viewpoint when watching the VirtuaLive soccer game. This state information is not, and need not be, present at the server 302. It should be noted that rendering the viewpoint based on the state information and the world model is much less complicated and requires fewer resources than if the image were actually streamed to the client 304 as a bitmap with depth information. Consequently, in addition to the advantages of the invention mentioned above, the invention facilitates interactivity with the user.
The configuration of system 300 assumes that the client 304 is capable of running a software application and has a powerful CPU and sufficiently large storage. Some clients may not have those capabilities on board. It is therefore desirable to also consider lower-end terminals, also known as "thin clients". Such terminals could be, for example, low-profile MPEG-4 terminals that accept a BIFS stream as input but are not sufficiently powerful themselves. This is explained with reference to Figure 4, which is a block diagram of a system 400 according to the invention. The system 400 comprises a server 302 that communicates with the client 204 via a translation station 406. The configurations of the server 302 and the client 204 have been discussed above. The translation station 406 maintains a local copy of the world model. This world model is updated by messages from the server 302, so that the model represents the current state. Based on this state information, the translation station 406 computes the animation. The animation data is encoded in the BIFS-Anim format and transmitted to the MPEG-4 client 204. The server 302 is similar to that of system 300. The translation station 406 is a module that performs a conversion between messages transmitted under the protocol that supports shared objects on the one hand and the BIFS-Anim bit stream on the other hand. The station 406 has a channel decoder 312 as discussed above, a source transcoder 410 and a channel encoder 412. The decoder 312 interprets the messages received from the server 302 and updates the local copy 318 of the world model. The source transcoder 410 comprises a program that computes the animation based on the state information. This module preferably performs tasks such as recovery of lost packets (based on interpolation or extrapolation), dead reckoning, local animation, etc., similarly to the source decoder 314 above. The channel encoder 412 generates a bit stream in the BIFS and BIFS-Anim formats based on the output of the source transcoder 410. Figure 5 is a block diagram of a system 500 according to the invention. System 500 combines the configurations of systems 300 and 400. System 500 comprises a server 302, a network 502, and clients 504, 506, 508 and 510 connected to the server 302 via the network 502. System 500 further comprises a translation station 406 and clients 512, 514 and 516. The clients 512-516 are coupled to the server 302 via the network 502 and the translation station 406. The clients 512-516 are served by the translation station 406 with BIFS bit streams, while clients 504-510 receive the state information under a protocol that supports shared objects and generate the animation themselves. Figure 6 is a block diagram of a system 600 according to the invention that allows interaction between clients. The system 600 comprises a server 302 coupled to clients 602 and 604 via a network 606. The configuration of the server 302 was discussed above. The server 302 has a copy of a world model with objects 608, 610, 612 and 614. Clients 602 and 604 have similar copies of the world model with similar objects 608-614. The world-model copies are kept consistent throughout the system 600 by means of the state information sent by the server 302. This forms the basis for the emulation of streaming of a graphics animation, a video animation or an audio file as discussed above. Clients 602 and 604 now also share objects 616 and 618 with each other, but not with the server 302.
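To make the role of the translation station 406 concrete, here is a hypothetical sketch (reusing the PlayerState record sketched earlier): incoming state updates are applied to a local world-model copy, an animation frame is computed from the high-level state, and the frame is handed to a BIFS-Anim encoder. The encoder interface is a mere placeholder, not the MPEG-4 API.

    import java.util.HashMap;
    import java.util.Map;

    // Placeholder for the BIFS-Anim channel encoder; a real implementation would
    // produce an MPEG-4 compliant bit stream for the thin client.
    interface BifsAnimEncoder {
        void encodeFrame(int playerId, float[] jointValues);
    }

    final class TranslationStation {
        private final Map<Integer, PlayerState> world = new HashMap<>();  // local world-model copy
        private final BifsAnimEncoder encoder;

        TranslationStation(BifsAnimEncoder encoder) { this.encoder = encoder; }

        // Channel decoder path: a shared-object state update arrives from the server.
        void onStateUpdate(PlayerState s) { world.put(s.playerId, s); }

        // Source transcoder path: called at the animation rate for each player.
        void emitFrame(int playerId) {
            PlayerState s = world.get(playerId);
            if (s == null) return;
            encoder.encodeFrame(playerId, animate(s));   // stream the frame to the thin client
        }

        // Hypothetical stand-in for the animation computation, e.g. a canned "running"
        // cycle evaluated at the player's current position and action.
        private float[] animate(PlayerState s) {
            return new float[63];   // 3 position values + 15 x 4 joint rotation components
        }
    }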
For example, the client 602 owns a "viewpoint" object representing the view of the graphic representation of the soccer game chosen by the client 602. Based on the state information received from the server 302, the client 602 renders a graphic image of the game as seen from a particular position in the stadium. The rendering of the image is based on the combination of the current state information received from the server 302, the local copy of the world model and the user input via input means 620, for example a joystick or mouse, which allows selection of the viewpoint. The client 604 shares the viewpoint object, which is kept consistent with that of client 602 under the latter's control, using the protocol that supports shared objects.
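A final hypothetical sketch of such a client-owned "viewpoint" shared object: client 602 owns it and updates it from joystick or mouse input; client 604 holds a replica that the shared-object protocol keeps consistent, without the server 302 being involved.

    // Viewpoint state owned by client 602 and replicated at client 604. Only the owner
    // calls move(); the replica is updated through the shared-object protocol.
    final class ViewpointObject {
        float posX, posY, posZ;   // camera position in the stadium
        float yaw, pitch;         // viewing direction

        void move(float dx, float dy, float dz, float dYaw, float dPitch) {
            posX += dx; posY += dy; posZ += dz;
            yaw += dYaw; pitch += dPitch;
            // in a fuller sketch, the protocol layer would now send this update to the peer
        }
    }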
The objects 616-618 are not shared with other clients in the system. It should be noted that rendering the viewpoint based on the state information and the world model is much less complicated and requires fewer resources than if the images were actually streamed to clients 602 and 604 as bitmaps with depth information. System 600 can even be a fully distributed system without a server holding primary ownership. Each respective one of the multiple clients then owns respective objects in the world model that are perceptible to all the clients. The owner of an object triggers a state change that is propagated over the network to maintain the consistency of the shared world model. In a multi-user application the effect is continuous play at each client, without severe bandwidth limitations, as a consequence of the emulation of the streaming of the animation.

Claims (11)

CLAIMS Having described the invention, what is contained in the following claims is considered a novelty and is therefore claimed as property:
1. A method of emulating the streaming of a multimedia file via a network to a receiving station connected to the network, the method being characterized in that it comprises: supplying respective state information descriptive of respective states of the file; enabling the station to receive the respective state information via the network; and enabling the station to generate the file under control of the respective state information.
2. The method according to claim 1, characterized in that it comprises: receiving the file as a stream; and generating the respective state information based on the received file.
3. The method according to claim 2, characterized in that it comprises using a shared-object protocol to communicate the state information to the station.
4. The method according to claim 1, characterized in that the respective state information is broadcast over the network to multiple receiving stations.
5. The method according to claim 1, characterized in that the file comprises graphics animation.
6. The method according to claim 1, characterized in that the file comprises audio.
7. The method according to claim 1, characterized in that the file comprises video.
8. A method of supplying data via a network to enable playing a file, the method being characterized in that it comprises: supplying on the network respective state information descriptive of respective successive states of the file; enabling reception of the respective state information via the network; and enabling generation of the file under control of the respective state information upon receipt.
9. The method according to claim 8, characterized in that it comprises supplying the data using a shared object protocol.
10. The method according to claim 9, characterized in that it comprises supplying the data in a broadcast.
11. A station for use in a server-client system, characterized in that: the server is coupled to at least one client; the system is capable of emulating the streaming of a multimedia file via a network to the station; the server supplies respective state information descriptive of respective states of the file; the station is capable of receiving the respective state information via the network; the station is capable of generating the file under control of the respective state information; and the station sends the generated file to the at least one client as a stream.
MXPA/A/2000/003828A 1998-08-24 2000-04-18 Emulation of streaming over the internet in a broadcast application MXPA00003828A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09138782 1998-08-24

Publications (1)

Publication Number Publication Date
MXPA00003828A true MXPA00003828A (en) 2001-06-26


Similar Documents

Publication Publication Date Title
US6697869B1 (en) Emulation of streaming over the internet in a broadcast application
US7480727B2 (en) Method and devices for implementing highly interactive entertainment services using interactive media-streaming technology, enabling remote provisioning of virtual reality services
Signes et al. MPEG-4's binary format for scene description
JP5160704B2 (en) Real-time video games that emulate streaming over the Internet in broadcast
Taleb et al. Extremely interactive and low-latency services in 5G and beyond mobile systems
Battista et al. MPEG-4: A multimedia standard for the third millennium. 2
Hijiri et al. A spatial hierarchical compression method for 3D streaming animation
JP4194240B2 (en) Method and system for client-server interaction in conversational communication
Signes Binary Format for Scene (BIFS): Combining MPEG-4 media to build rich multimedia services
MXPA00003828A (en) Emulation of streaming over the internet in a broadcast application
JP2004537931A (en) Method and apparatus for encoding a scene
Hosseini et al. Suitability of MPEG4's BIFS for development of collaborative virtual environments
AU739379B2 (en) Graphic scene animation signal, corresponding method and device
WO2000042773A9 (en) System and method for implementing interactive video
Signès et al. MPEG-4: Scene Representation and Interactivity
Law et al. The MPEG-4 Standard for Internet-based multimedia applications
Todesco et al. MPEG-4 support to multiuser virtual environments
Katto et al. System architecture for synthetic/natural hybrid coding and some experiments
Laier et al. Content-based multimedia data access in Internet video communication
Horne et al. MPEG-4 visual standard overview
Nguyen et al. A graphics adaptation framework and video streaming technique for 3D scene representation and interaction on mobile devices
Deicke et al. A client/server application as an example for MPEG-4 systems
Zhang et al. Application of MPEG-4 in distributed virtual environment