WO2001022198A2 - Data compression through offset representation - Google Patents

Data compression through offset representation

Info

Publication number
WO2001022198A2
Authority
WO
WIPO (PCT)
Prior art keywords
data point
data
offsets
determined
motion
Prior art date
Application number
PCT/US2000/040911
Other languages
French (fr)
Other versions
WO2001022198A3 (en)
Inventor
Peter D. Smith
Jeremy A. Kenyon
Original Assignee
Wild Tangent, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wild Tangent, Inc. filed Critical Wild Tangent, Inc.
Priority to AU12529/01A priority Critical patent/AU1252901A/en
Priority to EP00974111A priority patent/EP1222508B1/en
Priority to DE60026346T priority patent/DE60026346T2/en
Publication of WO2001022198A2 publication Critical patent/WO2001022198A2/en
Publication of WO2001022198A3 publication Critical patent/WO2001022198A3/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/001Model-based coding, e.g. wire frame

Definitions

  • the present invention generally relates to the fields of data compression, and more particularly, to compressing 3D multimedia transfers over a network connection.
  • 3D multimedia includes video conferencing, interactive games, web-page content, audio/visual (A/V) recordings, to name but a few (hereafter collectively "A/V data").
  • A/V data requires significant storage space, as well as substantial bandwidth to transmit the data over a network. Since most data recipients do not have sufficient bandwidth to receive the A/V data in its original form, A/V data has traditionally been retrieved over a local high-speed bus or specialized high-speed data link.
  • Games include simple single-user simulators for pinball, cards, gambling, fighting, etc., as well as more complex multiple-player turn-taking games where each player competes against the game and ultimately compares scores.
  • Well-known high-tech gaming systems include the Nintendo® and Sony PlayStation® gaming systems. These and other games use geometry to describe two- and three-dimensional objects within gaming models.
  • complex object surfaces are usually represented by a combination of one or more basic object shapes, such as splines, non-uniform rational splines (NURBs), texture maps, and (monohedral) triangle tessellation.
  • an arbitrary object is defined by triangle tessellation, each triangle having associated spatial coordinate tuples X, Y (and perhaps Z), color, normal, and other attributes.
  • This information, when multiplied by hundreds or thousands of polygons in moderately complex objects, amounts to data that must be retrieved from dedicated graphics systems and local storage of graphics data. The data transfer requirements prohibit play against remote players. Although some games have been designed to use a modem to directly call a remote player and establish a game, this solution was often clumsy, slow, and inconsistent; rich content transfer was infeasible.
  • the provider and recipient are in communication over a network.
  • first offsets are determined from the first data point for the second data point.
  • the second data point can then be re-coded in terms of the determined first offsets.
  • the first offsets are coded to require less data storage than required for the first data point, thus allowing them to be transferred more quickly.
  • Second offsets can be cascaded off the first offsets for a third data point defined within the model.
  • Other compression methods and apparatus are disclosed.
  • FIG. 1 illustrates a content provider in communication with several content recipients.
  • FIG. 2 illustrates a triangle having vertices defined in 3D space.
  • FIG. 3 illustrates using estimation functions to estimate future vertex locations for a clockwise rotation of the FIG. 2 triangle.
  • FIG. 4 graphs linear motion as a basis for an estimation function.
  • FIG. 5 graphs oscillating motion as a basis for an estimation function.
  • FIG. 6 illustrates physical distortion as a basis for an estimation function.
  • FIG. 7 illustrates a general environment in which the invention or parts thereof may be practiced.
  • the present invention is applicable to a wide range of application programs, services, and devices which require transmitting rich content (such as A/V data) over a network
  • the following description focuses on delivering rich multimedia content from a gaming environment to players distributed over the network.
  • the gaming paradigm has been chosen since it teaches delivery of A/V data as required for applications such as video conferencing, while also discussing the logistical complexity inherent in having multiple participants interactively affecting the delivery of A/V data.
  • FIG. 1 illustrates a game content provider 100 in communication with several users / game players 102-108 over a publicly accessible network 110 such as the Internet. Also shown is a coordinator 112 that, as discussed below, may be coordinating gaming activity.
  • the content provided is an interactive three-dimensional game (hence the users are designated as players).
  • the game is assumed to incorporate a 3D model, where objects within the model have attributes such as position, color, texture, lighting, orientation, etc., and where the objects are ultimately defined by one or more triangles.
  • the present invention is applicable and may be practiced with all forms of multimedia content delivery.
  • the players utilize an Internet browser as a playing device, where the browser has an installed plug-in (e.g., helper application) to aid in processing content transmitted by the provider.
  • other network application programs such as dedicated gaming applications, can be used.
  • the provider 100 acts as a central data distribution point for game data, transmitting all required data to each player 102-108.
  • gaming software can be configured so that players directly send each other information, or that one player or other network location may be used as a distribution point for other players (e.g., to distribute processing load).
  • a game coordinator 112 can be used as a central point for initiating, or joining in to, games in progress.
  • a coordinator is useful in contexts such as the Internet, since players are routinely assigned random network addresses by their Internet Service Provider. Since a network connection between computers usually requires the computers to know each others' network address, a known coordinator can facilitate such connections by allowing players to contact the coordinator and publish their currently assigned network address. The coordinator can then redirect interested players to one or more content providers (e.g., 100).
  • a coordinator may also be used to hide player identities from content providers, such as through network address hiding, or to coordinate player registration with different providers.
  • a significant amount of such data consists of coordinate values for objects within a 3D model. It is therefore advantageous to further reduce the amount of space required for storing such coordinates.
  • FIG. 2 illustrates a typical triangle 200 having three vertices 202, 204, 206 defined according to a coordinate system for a particular model.
  • although the invention is applicable to 2, 3, or n-dimensional coordinates, for simplicity, assume that each vertex is defined in 3D space with coordinate tuple X, Y, and Z.
  • vertex values are encoded using 32-bit ANSI/IEEE-754-1985 floating-point numbers (a standard promulgated by the Institute of Electrical and Electronics Engineers (IEEE) and the American National Standards Institute (ANSI)).
  • the standard 32-bit IEEE representation is replaced with a special encoding format. Rather than assigning distinct coordinate tuples to all vertices, instead some vertices are encoded using offsets from other vertices.
  • the offsets can, as described below, be based on a combination of object geometry, predicted movement for the object, and other factors. (Some or all of these bases may be used as desired.)
  • a key vertex can be defined for an object or region thereof, and the rest of the vertices for the object or region can be functionally assigned values according to the key vertex and analysis of the movement.
  • FIG. 3 shows an encoding for the FIG. 2 triangle 200 where position values for the second vertex S 254 and third vertex T 256 are computed as a function of a root vertex R 252.
  • (A "root" vertex is a fully-defined vertex (e.g., one with typical coordinate values) from which other vertices are defined.) Assume that the triangle is undergoing a slight clockwise rotation, so that vertices R, S, T rotate into positions R_A 258 ("A" for actual), S_A 260, and T_A 262.
  • estimation functions are described as predicting the position at the next time frame, or moment of time.
  • the number of intervening steps may vary, or a function may be required to directly jump to a particular time frame.
  • One reason for such variability is to maintain synchronization between recipients using network connections of differing speeds / throughput.
  • the overall issue here is how to encode the change in vertex values for the triangle.
  • each of the vertices R_A, S_A, T_A is encoded with a delta value. Note that this value does not record a change in position for the vertex between two time frames.
  • a post-rotation position R_A 258 for vertex R 252 is determined by first applying an estimation function A() 270 to vertex R 252.
  • the estimation function takes into account factors such as the triangle's geometry and motion to derive an estimated position for other triangle vertices.
  • a future position S_A 260 for vertex S 254 can be estimated and encoded as a delta value.
  • this computation yields an estimated location S_E 266 near S_A 260.
  • a delta value Δ2 can be defined to correct the estimated value S_E, and stored as the value for S_A.
  • T_A = C(B(A(R) + Δ1) + Δ2) + Δ3.
  • delta values contain sufficient precision to allow storing a value to exactly reconstruct vertex locations R_A, S_A, and T_A.
  • delta values are encoded with a bit size smaller than the 32-bit standard numbers (if not, there is no need for Δ-estimating positions). With each chained estimation, error can increase, ultimately exceeding the Δ precision. When this occurs, a new "root" node is used as the basis for subsequent vertex estimations.
  • By encoding vertex positions as delta values within a certain Δ precision, positions can be encoded with arbitrarily fewer bits than required under the ANSI/IEEE-754-1985 32-bit format. The effect, then, is to provide a trade-off between Δ bit-size requirements and the frequency of needing a new root node (e.g., potentially a full 96-bit vertex encoding). The smaller the Δ precision, the more frequent the root nodes.
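The chained delta encoding described above can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function names (`encode_chain`, `decode_chain`) and the use of NumPy vectors are assumptions. It shows how deltas computed against chained estimates let a recipient reconstruct T_A = C(B(A(R) + Δ1) + Δ2) + Δ3 exactly:

```python
import numpy as np

def encode_chain(actual, root, estimators):
    """Encode a chain of vertices as delta corrections off a root vertex.

    actual     : true post-motion vertex positions [R_A, S_A, T_A, ...]
    root       : the fully-defined root vertex R
    estimators : per-link estimation functions [A, B, C, ...]; each maps the
                 previously reconstructed vertex to an estimate of the next.
    Returns the list of delta corrections [d1, d2, d3, ...].
    """
    deltas = []
    prev = np.asarray(root, dtype=float)
    for est, target in zip(estimators, actual):
        estimate = est(prev)                     # e.g. A(R) estimates R_A
        delta = np.asarray(target, dtype=float) - estimate
        deltas.append(delta)                     # small correction, few bits
        prev = estimate + delta                  # equals the true position

    return deltas

def decode_chain(root, estimators, deltas):
    """Reconstruct vertex positions from the root, estimators, and deltas."""
    out = []
    prev = np.asarray(root, dtype=float)
    for est, d in zip(estimators, deltas):
        prev = est(prev) + d                     # e.g. B(A(R)+d1)+d2 ...
        out.append(prev)
    return out
```

Because each delta corrects the estimate to the true position, decoding is exact; only the root vertex and the (small) deltas need to be transmitted.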
  • delta values can be encoded as integer offsets (allowing for some rounding errors), thus reducing the number of bits required for encoding the values.
  • Δ values may be limited to a precision of only a few (e.g., 4-8) bits, with rounding errors used to provide reconstructed values approximating actual vertex positions.
  • rounding errors may be tracked to identify when a new root node needs to be transferred for background scenery.
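One way to realize the integer-offset scheme with rounding-error tracking is sketched below. The step size, bit width, and error budget are illustrative parameters, not values from the patent, and the `'root'` marker merely stands in for resending a fully-defined value:

```python
def quantize_delta(delta, step, bits=6):
    """Quantize a floating-point delta to a small signed integer offset.

    step : quantization step size; bits : precision of the integer offset.
    Returns (integer_offset, rounding_error).
    """
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = max(lo, min(hi, round(delta / step)))
    return q, delta - q * step

def encode_with_error_budget(deltas, step, budget, bits=6):
    """Quantize a stream of scalar deltas while tracking accumulated
    rounding error; when the error budget is exceeded, emit a ('root', value)
    entry (here the exact unquantized delta) and reset the error."""
    out, err = [], 0.0
    for d in deltas:
        q, e = quantize_delta(d, step, bits)
        err += abs(e)
        if err > budget:            # drift too large: resend a full value
            out.append(('root', d))
            err = 0.0
        else:
            out.append(('delta', q))
    return out
```

This is exactly the trade-off the text describes: a smaller step (finer Δ precision) lowers the rounding error per vertex but widens the integers, while a looser budget delays the cost of a new root.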
  • a stepping factor can be incorporated into the Δ values used for distant scenery. Such a factor effectively increases the range representable by the delta values by decreasing precision.
  • the Z (depth) coordinate can be used as a multiplier of the Δ value, so the further the object's distance from the current viewing perspective, the larger the multiplier.
  • the effective range of the Δ values can thus be arbitrarily large, with a corresponding precision decrease.
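A depth-scaled quantization step along these lines might look like the following sketch (the linear scaling law, the parameter names, and the defaults are assumptions; the patent only requires that the multiplier grow with distance):

```python
def depth_scaled_step(base_step, z, z_near=1.0):
    """Scale the delta quantization step with depth: distant geometry gets a
    coarser step (less precision), so a fixed-size integer delta covers a
    proportionally larger range of movement."""
    return base_step * max(1.0, z / z_near)

def encode_depth_delta(delta, z, base_step=0.01, bits=6):
    """Quantize a scalar delta for an object at depth z into a signed
    integer offset of the given bit width."""
    step = depth_scaled_step(base_step, z)
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, round(delta / step)))

def decode_depth_delta(q, z, base_step=0.01):
    """Invert the quantization using the same depth-dependent step."""
    return q * depth_scaled_step(base_step, z)
```

With these defaults, the same 0.5-unit motion fits comfortably in 6 bits at depth 10 but saturates the offset range up close, which is the intended behavior: precision is spent where the viewer can see it.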
  • FIG. 4 is a graph illustrating linear motion. As shown, movement of an object 300 is tracked with respect to its change in height 302 over time 304. Object 300 is smoothly changing height with respect to time. Consequently, when a content provider 100 (FIG. 1) seeks to encode the object 300, a determination can be made that the object (or sub-region thereof) is undergoing linear motion. The provider can then apply a function, e.g., A() 256, that takes advantage of the object's linear motion to predict future spatial positions for the object's vertices as the object moves over time.
  • delta values can be used instead of having to encode all vertices with 32-bit IEEE-754 floating-point values. As discussed above, the delta values may be encoded with arbitrarily few bits.
  • a table of estimation formulas can be stored on both the content provider and the data recipient. These formulas can be indexed according to typical model properties and different types of movement, thus allowing them to be applied to diverse data transfers, such as multimedia content, game content, etc. Assuming there are multiple formulas related to linear motion, a content provider can compute an estimation using plural functions (in parallel, for example) to identify which function yields a "best" estimate (e.g., a result having minimum error over actual coordinates S_A 264, T_A 268). An index entry into the table of formulas can be embedded in the data stream sent to a recipient.
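A shared estimator table and the "best estimate" search could be sketched as below. The two formulas (`hold`, `linear`) and the table layout are hypothetical stand-ins for the patent's indexed formulas:

```python
import numpy as np

# Hypothetical shared table of estimation formulas. Both provider and
# recipient hold an identical copy, so the data stream need only carry
# the index of the chosen formula.
def hold(history):
    """Stationary estimator: repeat the last known position."""
    return history[-1]

def linear(history):
    """Constant-velocity estimator: extrapolate the last step."""
    return history[-1] + (history[-1] - history[-2])

ESTIMATOR_TABLE = [hold, linear]

def best_estimator(history, actual):
    """Evaluate every table entry against the actual next position and
    return (index, error) of the formula with minimum error."""
    errors = [float(np.linalg.norm(est(history) - actual))
              for est in ESTIMATOR_TABLE]
    idx = int(np.argmin(errors))
    return idx, errors[idx]
```

The provider embeds only the winning index in the stream; the recipient looks up the same entry in its copy of the table and applies it.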
  • FIG. 5 graphs simple oscillating motion as a basis for an estimation function. Shown is a graph of an object having height values which oscillate between a higher 310 and lower 312 height position.
  • a content provider can determine that an object (or sub-region thereof) is undergoing oscillating motion. The provider can then apply a function, e.g., A() 256, that takes advantage of the oscillation to predict future spatial positions for the object's vertices as the object moves over time.
  • the provider need only transmit compressed data corresponding to one cycle, along with an embedded message to the data recipient that such data corresponds to a cycle.
  • the recipient may then be responsible for rendering the cycle without further input from the provider, until the recipient or some other agent interferes / interacts with the object.
  • entries may be entered into a table of estimation formulas for various different cyclical motions, and a best estimator chosen from those available.
  • FIG. 6 illustrates physical distortion as a basis for an estimation function. Shown is a rod 330 that is being bent and then snapping back to original position 330. Shown with dashed lines are two intermediary bent positions 332, 334.
  • a content provider can determine that an object (or sub-region thereof) is being distorted.
  • object distortion, such as bending, squeezing, or twisting, can generate non-uniform motion of key positions within the object.
  • the rod's middle-point 336 undergoes little to no movement as the rod is distorted, while the illustrated end-points 338, 340 move more significantly.
  • Rather than encoding each vertex of the bar 330 with traditional IEEE-754 values, a provider recognizes that the object's vertices are undergoing different levels of displacement, and uses this information to encode the movement with different-sized delta values. That is, the middle vertex requires a very small delta value, say just a few bits, while the end vertices require higher precision (more bits) to properly encode their movement.
  • a provider can embed flags within a data stream to indicate the size of the delta values for different regions of the object 330.
  • object vertices can be grouped and transmitted together for a particular delta size, thus reducing the number of flags needing to be transmitted.
  • delta values can also (as discussed above) be encoded as integer values, thus significantly reducing transfer requirements.
  • a further optimization is to recognize that for motion such as bending of an object, not all vertices need to be transmitted to a recipient. For example, here, one need only send position and delta values for the end-points 338, 340, since the original and delta positions for intermediary points can be interpolated along the length of the object.
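The end-point-only optimization can be sketched as a simple interpolation on the recipient's side. Linear interpolation along the rod is an assumption for illustration; a real bend might call for a higher-order curve:

```python
import numpy as np

def interpolate_rod(p0, p1, n_interior):
    """Reconstruct the interior vertices of a rod from its two transmitted
    end-points by linear interpolation along its length. Only p0 and p1
    (plus their deltas) need to cross the network; the n_interior points
    in between are recomputed locally."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    # Parameter values strictly between 0 and 1, excluding the end-points.
    t = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    return [p0 + ti * (p1 - p0) for ti in t]
```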
  • FIG. 7 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented.
  • the invention may be described by reference to different high-level program modules and/or low-level hardware contexts. Those skilled in the art will realize that program module references can be interchanged with low-level instructions.
  • Program modules include procedures, functions, programs, components, data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • the modules may be incorporated into single and multi-processor computing systems, as well as hand-held devices and controllable consumer devices. It is understood that modules may be implemented on a single computing device, or processed over a distributed network environment, where modules can be located in both local and remote memory storage devices.
  • An exemplary system for implementing the invention includes a computing device 402 having system bus 404 for coupling together various components within the computing device.
  • the system bus 404 may be any of several types of bus structure, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of conventional bus architectures such as PCI, AGP, VESA, MicroChannel, ISA, and EISA, to name a few. Note that only a single bus is illustrated, although plural buses typically achieve performance benefits.
  • attached to the bus 404 are a processor 406, a memory 408, storage devices (e.g., fixed 410, removable 412, optical/laser 414), a video interface 416, input/output interface ports 418, and a network interface 420.
  • the processor 406 may be any of various commercially available processors, including Intel processors, the DEC Alpha, the PowerPC, programmable gate arrays, signal processors, or the like. Dual and quad processors, and other multi-processor architectures, can also be used.
  • the system memory includes random access memory (RAM) 422, and static or reprogrammable read only memory (ROM) 424.
  • the basic input/output system (BIOS), stored in ROM, contains routines for information transfer between device 402 components or device initialization.
  • the fixed storage 410 generally refers to hard drives and other semi-permanently attached media
  • removable storage 412 generally refers to a device-bay into which removable media such as a floppy diskette is removably inserted.
  • the optical/laser storage 414 includes devices based on CD-ROM, DVD, or CD-RW technology, and is usually coupled to the system bus 404 through a device interface 426, 428, 430.
  • the storage systems and associated computer-readable media provide storage of data and executable instructions for the computing device 402. Note that other storage options include magnetic cassettes, tapes, flash memory cards, memory sticks, digital video disks, and the like.
  • the exemplary computing device 402 can store and execute a number of program modules within the RAM 422, ROM 424, and storage devices 410, 412, 414.
  • Typical program modules include an operating system 432, application programs 434 (e.g., a web browser or network application program), etc., and application data 436.
  • Program module or other system output can be processed by the video system 416 (e.g., a 2D and/or 3D graphics rendering device), which is coupled to the system bus 404 and an output device 438.
  • Typical output devices include monitors, flat-panel displays, liquid-crystal displays, and recording devices such as video-cassette recorders.
  • a user of the computing device 402 is typically a person interacting with the computing device through manipulation of an input device 440.
  • Common input devices include a keyboard, mouse, tablet, touch-sensitive surface, digital pen, joystick, microphone, game pad, satellite dish, etc.
  • the computing device 402 is expected to operate in a networked environment using logical connections to one or more remote computing devices.
  • One such remote computing device 442 may be a web server or other program module utilizing a network application protocol (e.g., HTTP, File Transfer Protocol (FTP), Gopher, Wide Area Information Server (WAIS)), a router, a peer device or other common network node, and typically includes many or all of the elements discussed for the computing device 402.
  • the computing device 402 has a network interface 420 (e.g., an Ethernet card) coupled to the system bus 404, to allow communication with the remote device 442.
  • Both the local computing device 402 and the remote computing device 442 can be communicatively coupled to a network 444 such as a WAN, LAN, Gateway, Internet, or other public or private data-pathway. It will be appreciated that other communication links between the computing devices, such as through a modem 446 coupled to an interface port 418, may also be used.
  • the present invention is described with reference to acts and symbolic representations of operations that are sometimes referred to as being computer-executed. It will be appreciated that the acts and symbolically represented operations include the manipulation by the processor 406 of electrical signals representing data bits which causes a resulting transformation or reduction of the electrical signal representation, and the maintenance of data bits at memory locations in the memory 408 and storage systems 410, 412, 414, so as to reconfigure or otherwise alter the computer system's operation and/or processing of signals.
  • the memory locations where data bits are maintained are physical locations having particular electrical, magnetic, or optical properties corresponding to the data bits.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Information Transfer Between Computers (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Diaphragms For Electromechanical Transducers (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Optical Communication System (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

A computing-device implemented method for compressing a data model, where such devices include a computer, personal digital assistant (PDA), home appliance, and the like. The data includes bandwidth intensive information such as that used in video conferencing, MPEG and equivalent types of digital video encoding, multi-media data transfers, and interactive gaming. In one implementation, a 3D model has objects defined therein. Each object is defined by plural data points that are transferred from a data provider to a recipient. Typically the provider and recipient are in communication over a network. For a first and a second data point defined in the model, first offsets are determined from the first data point for the second data point. The second data point can then be re-coded in terms of the determined first offsets. The first offsets are coded to require less data storage than required for the first data point, thus allowing them to be transferred more quickly. Second offsets can be cascaded off the first offsets for a third data point defined within the model.

Description

DATA COMPRESSION THROUGH OFFSET REPRESENTATION
Field of the Invention
The present invention generally relates to the fields of data compression, and more particularly, to compressing 3D multimedia transfers over a network connection.
Background
3D multimedia includes video conferencing, interactive games, web-page content, audio/visual (A/V) recordings, to name but a few (hereafter collectively "A/V data"). A/V data requires significant storage space, as well as substantial bandwidth to transmit the data over a network. Since most data recipients do not have sufficient bandwidth to receive the A/V data in its original form, A/V data has traditionally been retrieved over a local high-speed bus or specialized high-speed data link.
For example, consider computerized games. Games include simple single-user simulators for pinball, cards, gambling, fighting, etc., as well as more complex multiple-player turn-taking games where each player competes against the game and ultimately compares scores. Well-known high-tech gaming systems include the Nintendo® and Sony PlayStation® gaming systems. These and other games use geometry to describe two- and three-dimensional objects within gaming models. In particular, complex object surfaces are usually represented by a combination of one or more basic object shapes, such as splines, non-uniform rational splines (NURBs), texture maps, and (monohedral) triangle tessellation. Typically, an arbitrary object is defined by triangle tessellation, each triangle having associated spatial coordinate tuples X, Y (and perhaps Z), color, normal, and other attributes. This information, when multiplied by hundreds or thousands of polygons in moderately complex objects, amounts to data that must be retrieved from dedicated graphics systems and local storage of graphics data. The data transfer requirements prohibit play against remote players. Although some games have been designed to use a modem to directly call a remote player and establish a game, this solution was often clumsy, slow, and inconsistent; rich content transfer was infeasible.
Or, consider video conferencing applications. As with games, these applications concern transferring large volumes of data. However, these applications must transfer the data to remote locations (e.g., conference participants). Therefore, they have required high-speed data links, e.g., at a minimum, a 128K-bit bonded ISDN connection to the remote participant, or more preferably, a T1 or faster frame-relay connection. Unfortunately, these speedy connection backbones are not generally available to users, and require complex technical support to maintain an active link. Conferencing also shares the modem-game limitation of requiring direct user-to-user connections.
With the recent advent of ubiquitous low-cost Internet connections, it has become a relatively straightforward matter to form a network communication link between multiple remote participants. This has spurred interest in using these generally available links to transfer A/V data. Unfortunately, due to the cost and technical complexity of maintaining ISDN, Frame Relay, and other high-speed links, Internet connections are commonly relatively slow modem-based connections. Since modem connections generally realize an average bit rate of only 14-40 Kbits per second, these connections are not able to transfer, in reasonable time, rich game content, conferencing data, or other A/V data. This problem is exacerbated with each additional remote participant, since A/V data must now be distributed to multiple recipients - further consuming bandwidth resources.
In an effort to reduce bandwidth constraints, and take advantage of the easily-available slow networking connections, there have been efforts to compress A/V data. For example, data and geometry compression has previously been used to reduce information content in 2D and 3D models. Previous compression attempts include image compression (e.g., JPEG), defining objects with shared features (e.g., shared edges), small texture maps for large areas, etc. Examples of some of these and other techniques can be found in U.S. Patent No. 5,740,409, which teaches a 3D graphics accelerator for compressed geometry, and U.S. Patent Nos. 5,793,371, 5,867,167, and 5,870,094, which teach various methods for more-efficiently encoding 3D models. These compression techniques are readily applicable to A/V game data (which use models), as well as other A/V data representing data in a compatible compressible format, such as Moving Picture Experts Group (MPEG) digital video encoding.
In addition to geometry compression, general-purpose data compression procedures have also been applied to A/V data. Such techniques include Huffman encoding (see Huffman, "A Method For Construction Of Minimum Redundancy Codes", Proceedings IRE, 40, 10, pages 1098-1100 (Sep. 1952)), Tunstall encoding (see Tunstall's doctoral thesis, "Synthesis of Noiseless Compression Codes", Georgia Institute of Technology (Sept. 1967)), Lempel-Ziv encoding (see "A Universal Algorithm For Sequential Data Compression", IEEE Transactions on Information Theory, IT-23, 3, pages 337-343 (May 1977)), and run-length encoding of model data (see, e.g., U.S. Patent No. 3,656,178). These general-purpose compression techniques are applicable to all data formats.
Unfortunately, even after application of general purpose and geometric compression, there still remains a significant amount of information that needs to be transferred before games, conferencing, viewers of 3D multimedia, interactive 3D chat rooms, and other applications of A/V data appear to operate as if they are retrieving their data from local storage or high-speed links. Thus, some further data reduction is needed.
Summary
A computing-device implemented method for compressing a data model, defined by plural data points, that is transferred from a provider to a recipient. Typically the provider and recipient are in communication over a network. For a first and a second data point defined in the model, first offsets are determined from the first data point for the second data point. The second data point can then be re-coded in terms of the determined first offsets. The first offsets are coded to require less data storage than required for the first data point, thus allowing them to be transferred more quickly. Second offsets can be cascaded off the first offsets for a third data point defined within the model. Other compression methods and apparatus are disclosed.
Brief Description of the Drawings
FIG. 1 illustrates a content provider in communication with several content recipients.
FIG. 2 illustrates a triangle having vertices defined in 3D space.
FIG. 3 illustrates using estimation functions to estimate future vertex locations for a clockwise rotation of the FIG. 2 triangle.
FIG. 4 graphs linear motion as a basis for an estimation function.
FIG. 5 graphs oscillating motion as a basis for an estimation function.
FIG. 6 illustrates physical distortion as a basis for an estimation function.
FIG. 7 illustrates a general environment in which the invention or parts thereof may be practiced.
Detailed Description
Although the present invention is applicable to a wide range of application programs, services, and devices which require transmitting rich content (such as A/V data) over a network, the following description focuses on delivering rich multimedia content from a gaming environment to players distributed over the network. The gaming paradigm has been chosen since it teaches delivery of A/V data as required for applications such as video conferencing, while also discussing the logistical complexity inherent in having multiple participants interactively affecting the delivery of A/V data.
FIG. 1 illustrates a game content provider 100 in communication with several users / game players 102-108 over a publicly accessible network 110 such as the Internet. Also shown is a coordinator 112 that, as discussed below, may be coordinating gaming activity. For ease of understanding, it is assumed that the content provided is an interactive three-dimensional game (hence the users are designated as players). The game is assumed to incorporate a 3D model, where objects within the model have attributes such as position, color, texture, lighting, orientation, etc., and where the objects are ultimately defined by one or more triangles. However, as will be readily apparent from the description to follow, the present invention is applicable and may be practiced with all forms of multimedia content delivery.
As shown, multiple players 102-108 are in communication with a content provider. In one embodiment, the players utilize an Internet browser as a playing device, where the browser has an installed plug-in (e.g., helper application) to aid in processing content transmitted by the provider. However, instead of a browser, other network application programs, such as dedicated gaming applications, can be used. For simplicity, it is assumed that the provider 100 acts as a central data distribution point for game data, transmitting all required data to each player 102-108. However, it is understood that gaming software can be configured so that players directly send each other information, or that one player or other network location may be used as a distribution point for other players (e.g., to distribute processing load).
Also shown is a game coordinator 112 that can be used as a central point for initiating or joining games in progress. Such a coordinator is useful in contexts such as the Internet, since players are routinely assigned random network addresses by their Internet Service Provider. Since a network connection between computers usually requires the computers to know each others' network address, a known coordinator can facilitate such connections by allowing players to contact the coordinator and publish their currently assigned network address. The coordinator can then redirect interested players to one or more content providers (e.g., 100). A coordinator may also be used to hide player identities from content providers, such as through network address hiding, or to coordinate player registration with different providers. Many corporations are now providing centralized "hubs" to facilitate game play; see, e.g., the MSN Gaming Zone (formerly the Internet Gaming Zone) by Microsoft Corporation of Redmond, Washington at http://games.msn.com or http://www.microsoft.com/games. Typically, when a player contacts a provider, the provider attempts to transmit game content to the player. If the player's browser is not yet configured to receive such content, this can trigger an automatic notification to the player to install the requisite plug-in, driver, or other data needed to play the provider's game.
Once a player has come into communication with a content provider, the provider must send game content to the player. As discussed above, various methods have been employed to reduce the amount of data that actually needs to be sent to such players. A significant amount of such data includes transmitting coordinate values for objects within a 3D model. It is advantageous to somehow further reduce the amount of space required for storing such coordinates.
FIG. 2 illustrates a typical triangle 200 having three vertices 202, 204, 206 defined according to a coordinate system for a particular model. Although the invention is applicable to 2, 3, or n-dimensional coordinates, for simplicity, assume that each vertex is defined in 3D space by a coordinate tuple (X, Y, Z).
Typically, vertex values are encoded using 32-bit ANSI/IEEE-754-1985 floating-point numbers (a standard promulgated by the Institute of Electrical and Electronics Engineers (IEEE) and the American National Standards Institute (ANSI)). Thus each 3D tuple requires 96 bits to encode its coordinates. It is assumed that the vertices are used to define triangles for monohedrally tessellated objects within a 3D model or game. Although other non-triangle shapes can be used to define objects, triangle tessellation is assumed since it is a common requirement of rendering hardware.
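The bit-cost arithmetic above can be checked with a short Python sketch using the standard `struct` module (the vertex values here are arbitrary illustrations):

```python
import struct

# Each vertex is three IEEE-754 single-precision floats (32 bits each).
vertex = (1.5, -2.25, 10.0)
packed = struct.pack("<3f", *vertex)          # one 3D coordinate tuple
assert len(packed) * 8 == 96                  # 96 bits per vertex

# A triangle carries three such vertices.
triangle = [vertex, (0.0, 0.0, 0.0), (4.0, 4.0, 4.0)]
total_bits = sum(len(struct.pack("<3f", *v)) for v in triangle) * 8
assert total_bits == 288                      # 288 bits per triangle
```

This is the baseline cost that the offset encoding described below is designed to undercut.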
To reduce the transmission burden between content provider and data recipients, the standard 32-bit IEEE representation is replaced with a special encoding format. Rather than assigning distinct coordinate tuples to all vertices, some vertices are encoded as offsets from other vertices. As described below, the offsets can be based on a combination of object geometry, predicted movement for the object, and other factors. (Some or all of these bases may be used as desired.)
For example, if a particular region of an object is undergoing a particular type of uniform motion, such as linear or rotational motion, then a key vertex can be defined for the object or region, and the rest of the vertices for the object or region can be functionally assigned values according to the key vertex and an analysis of the movement. Note that while the discussion focuses on a content provider analyzing model activity and formatting output to a recipient accordingly, it is understood that these same techniques can be applied by a recipient for returning compressed data back to the provider. However, movement is not necessary; these techniques apply to static as well as changing models. Immobile objects are still represented by triangles, and hence triangle positions can be predicted (with high accuracy, due to the lack of movement).
FIG. 3, for example, shows an encoding for the FIG. 2 triangle 200 where position values for the second vertex S 254 and third vertex T 256 are computed as a function of a root vertex R 252. (A "root" vertex is a fully-defined (e.g., with typical coordinate values) vertex from which other vertices are defined.) Assume that the triangle is undergoing a slight clockwise rotation, so that vertices RST rotate into positions RA 258 ("A" for actual), SA 260, and TA 262. Note that this discussion concerns two moments in time, where at time t=0, the triangle is pre-rotation and has vertices RST, and where at time t=1, the triangle is post-rotation and has vertices RASATA.
Note, though, that the concept of time is arbitrary for a given model. In particular, estimation functions are described as predicting the position at the next time frame, or moment of time. There may, however, be a number of intervening positional calculations before the "next" moment in time is reached. In particular, the number of intervening steps may vary, or a function may be required to jump directly to a particular time frame. One reason for such variability is to maintain synchronization between recipients using network connections of differing speeds / throughput. The overall issue here is how to encode the change in vertex values for the triangle. Rather than encoding the triangle RASATA with standard IEEE floating-point values, requiring 3×32 bits per vertex, or 288 bits total, each RASATA vertex is instead encoded with a delta value. This value does not refer to recording a change in position for the vertex between two time frames. Instead, in one embodiment, a post-rotation position RA 258 for vertex R 252 is determined by first applying an estimation function A() 270 to vertex R 252. The estimation function takes into account factors such as the triangle's geometry and motion to derive an estimated position for other triangle vertices.
The result of this function 270 is an estimated location RE 264 ("E" for estimated) for RA 258. As shown, there is a disparity between the estimated 264 and actual 258 vertex positions. Assuming that both a content provider and content recipient share a library of estimation functions, or share state-based analysis routines to adaptively perform estimations, the future RA position for vertex R can be encoded with an error-correction delta (Δ) value for the disparity, e.g., Δ1 = RA − RE. As illustrated, RA ≠ RE, so Δ1 is non-zero. A receiver / decoder need only know the proper function 270 and corresponding Δ value to determine that RA = A(R)+Δ1.
As with encoding RA, a future position SA 260 for vertex S 254 can be estimated and encoded as a delta value. However, instead of estimating SA based on S, SA can be estimated by applying an estimation function B() 272 to RA, giving SE = B(A(R)+Δ1). As illustrated, this computation yields an estimated location SE 266 near to SA 260. As with RA, a delta value Δ2 can be defined to correct the estimated value SE, and stored as the value for SA. When a receiver / decoder attempts to reconstruct vertex SA, it can do so based on information already received for reconstructing vertex RA. That is, knowing the value of R 252, SA = B(A(R)+Δ1)+Δ2.
A similar procedure can be applied to determine a future position TA 262 for vertex T 256, where an estimate function C() 274 is applied to SA 260 to cascade determining TA 262 off of SA 260. Thus, TA = C(B(A(R)+Δ1)+Δ2)+Δ3.
Thus, all that is needed to decode the time t=1 triangle is the original value of a "root" node R 252 from the time t=0 triangle, the delta values, and which function to apply. For complex objects having multiple triangles, a chain of such estimation corrections can be tracked.
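The cascaded encode / decode scheme described above can be captured in a minimal Python sketch. The estimation functions here are stand-ins for the shared A(), B(), C() library; only the cascading structure (each estimate built off the previous corrected vertex) reflects the text:

```python
def add(p, q):
    return tuple(a + b for a, b in zip(p, q))

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def encode(root, actual, estimators):
    """Return correction deltas; estimators[i] predicts vertex i from vertex i-1."""
    deltas, prev = [], root
    for est, target in zip(estimators, actual):
        predicted = est(prev)
        deltas.append(sub(target, predicted))  # Δ = actual − estimated
        prev = target                          # cascade off the corrected value
    return deltas

def decode(root, deltas, estimators):
    """Rebuild vertices from the root, the deltas, and the shared estimators."""
    verts, prev = [], root
    for est, d in zip(estimators, deltas):
        prev = add(est(prev), d)               # reconstruct: est(prev) + Δ
        verts.append(prev)
    return verts
```

A decoder holding the same estimator library and the transmitted deltas recovers each vertex from the root alone, as in RA = A(R)+Δ1, SA = B(RA)+Δ2, TA = C(SA)+Δ3.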
An assumption so far, however, is that the delta values contain sufficient precision to exactly reconstruct vertex locations RA, SA, and TA. However, since one goal is to reduce transmission requirements, delta values are encoded with a bit size smaller than the 32-bit standard numbers (if not, there is no benefit to Δ-encoding positions). With each chained estimation, error can increase, ultimately exceeding the available Δ precision. When this occurs, a new "root" node is used as the basis for subsequent vertex estimations.
By encoding vertex positions as delta values within a certain Δ precision, positions can be encoded with arbitrarily fewer bits than required under the ANSI/IEEE-754-1985 32-bit format. The effect, then, is to provide a trade-off between Δ bit-size requirements and the frequency of needing a new root node (e.g., potentially a full 96-bit vertex encoding). The smaller the Δ precision, the more frequent the root nodes. (Note the assumption that the selection of a proper estimation function 270, 272, 274 will ultimately produce a value exceeding the available Δ precision; this is not necessarily true, depending on the precision of the estimation functions.) Further, delta values can be encoded as integer offsets (allowing for some rounding errors), thus reducing the number of bits required for encoding the values.
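The precision trade-off can be sketched as follows. The bit width, quantization step, and token format are illustrative assumptions, not the patent's wire format; the point is that a correction that no longer fits the small signed-integer range forces a new full-precision "root" vertex:

```python
DELTA_BITS = 8                                 # assumed small delta width
SCALE = 0.01                                   # assumed quantization step
LO, HI = -(1 << (DELTA_BITS - 1)), (1 << (DELTA_BITS - 1)) - 1

def quantize(err):
    """Quantize one coordinate error; None means it exceeds Δ precision."""
    q = round(err / SCALE)
    return q if LO <= q <= HI else None

def encode_vertex(predicted, actual):
    qs = [quantize(a - p) for p, a in zip(predicted, actual)]
    if any(q is None for q in qs):
        return ("root", actual)                # fall back to a full 96-bit vertex
    return ("delta", tuple(qs))                # only 3 × DELTA_BITS bits

def decode_vertex(predicted, token):
    kind, payload = token
    if kind == "root":
        return payload
    return tuple(p + q * SCALE for p, q in zip(predicted, payload))
```

Shrinking DELTA_BITS or SCALE's range makes "root" tokens more frequent, which is exactly the trade-off described above.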
Note that exact reconstruction of vertex values is not always desirable.
For example, it should also be appreciated that for certain objects, such as distant background scenery, accurate vertex reconstruction is not as important as it is for closer foreground objects. Consequently, Δ values may be limited to a precision of only a few (e.g., 4-8) bits, with rounding errors used to provide reconstructed values approximating actual vertex positions. As with previous error accumulation, rounding errors may be tracked to identify when a new root node needs to be transferred for background scenery. Alternatively, a stepping factor can be incorporated into the Δ values used for distant scenery. Such a value effectively increases the range of the delta values by decreasing precision. For example, the Z (depth) coordinate can be used as a multiplier of the Δ value, so the further the object's distance from a current viewing perspective, the larger the multiplier. Thus, the effective range of the Δ values can be arbitrarily large, with a corresponding precision decrease.
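A minimal sketch of the depth-based stepping factor, assuming a simple linear scaling rule (the exact scaling function is not specified by the text, so the formula below is an illustration only):

```python
BASE_STEP = 0.01                               # assumed step for near geometry

def step_for_depth(z):
    # Farther objects (larger z) get a coarser step: more range, less precision.
    return BASE_STEP * max(1.0, z)

def encode_offset(err, z):
    """Quantize an error to a small integer delta, scaled by depth."""
    return round(err / step_for_depth(z))

def decode_offset(q, z):
    return q * step_for_depth(z)
```

The same integer delta thus covers a far larger coordinate range for background scenery, at the cost of coarser reconstructed positions.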
Not discussed so far are particular estimation functions. This is partly because such functions vary widely according to application context. Essentially, such functions take into account the geometry of an object (or a sub-region thereof) and the type of motion occurring to the object, and use this data to predict future positions for object vertices. As discussed above, these predictions can be chained. The following figures illustrate typical object movements on which prediction functions can be based. It is understood that these simple examples are for exemplary purposes only and that more complex functions can and will be applied to particular model circumstances.
FIG. 4 is a graph illustrating linear motion. As shown, movement of an object 300 is tracked with respect to its change in height 302 over time 304. Object 300 is smoothly changing height with respect to time. Consequently, when a content provider 100 (FIG. 1) seeks to encode the object 300, a determination can be made that the object (or a sub-region thereof) is undergoing linear motion. The provider can then apply a function, e.g., A() 270, that takes advantage of the object's linear motion to predict future spatial positions for the object's vertices as the object moves over time. Thus, instead of having to encode all vertices with 32-bit IEEE-754 floating-point values, delta values can be used. As discussed above, the delta values may be encoded with arbitrarily few bits.
Further, rather than having a single function for linear motion estimation, a table of estimation formulas can be stored on both the content provider and the data recipient. These formulas can be indexed according to typical model properties and different types of movement, thus allowing them to be applied to diverse data transfers, such as multimedia content, game content, etc. Assuming there are multiple formulas related to linear motion, a content provider can compute an estimation using plural functions (in parallel, for example) to identify which function yields a "best" estimate (e.g., a result having minimum error relative to the actual coordinates SA 260, TA 262). An index entry into the table of formulas can be embedded in a data stream sent to a recipient.
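The shared estimator table and "best estimate" selection can be sketched as below. The table entries are hypothetical stand-in formulas; only the selection-by-minimum-error and the transmitted index reflect the scheme described above:

```python
def drift(prev, v):
    # Constant-velocity extrapolation: previous position plus a fixed step.
    return tuple(p + d for p, d in zip(prev, v))

# Hypothetical shared table, identically indexed on provider and recipient.
ESTIMATORS = [
    lambda prev: prev,                             # 0: static (no motion)
    lambda prev: drift(prev, (1.0, 0.0, 0.0)),     # 1: unit linear drift in X
    lambda prev: drift(prev, (0.0, 1.0, 0.0)),     # 2: unit linear drift in Y
]

def squared_error(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def best_estimator(prev, actual):
    """Return (table index, estimate) minimizing error vs. the actual vertex."""
    return min(
        ((i, f(prev)) for i, f in enumerate(ESTIMATORS)),
        key=lambda t: squared_error(t[1], actual),
    )
```

The provider would transmit the winning index plus the residual delta; the recipient looks up the same formula by index to reconstruct the estimate.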
FIG. 5 graphs simple oscillating motion as a basis for an estimation function. Shown is a graph of an object having height values which oscillate between a higher 310 and lower 312 height position.
As with linear motion, a content provider can determine that an object (or a sub-region thereof) is undergoing oscillating motion. The provider can then apply a function, e.g., A() 270, that takes advantage of the oscillation to predict future spatial positions for the object's vertices as the object moves over time. Thus, instead of having to encode all vertices with 32-bit IEEE-754 floating-point values, delta values can be used. As discussed above, the delta values may be encoded with arbitrarily few bits.
Further, since oscillating motion is one form of cyclical motion, the provider need only transmit compressed data corresponding to one cycle, along with an embedded message to the data recipient indicating that such data corresponds to a cycle. The recipient may then be responsible for rendering the cycle without further input from the provider, until the recipient or some other agent interferes / interacts with the object.
As with linear motion, entries may be added to a table of estimation formulas for various cyclical motions, and a best estimator chosen from those available.
FIG. 6 illustrates physical distortion as a basis for an estimation function. Shown is a rod 330 that is being bent and then snapping back to its original position 330. Shown with dashed lines are two intermediary bent positions 332, 334.
As with the other motion exemplars, a content provider can determine that an object (or a sub-region thereof) is being distorted. However, object distortion, such as bending, squeezing, or twisting, can generate non-uniform motion of key positions within the object. For example, as shown, the rod's middle-point 336 undergoes little to no movement as the rod is distorted, while the illustrated end-points 338, 340 move more significantly.
Rather than encoding each vertex of the rod 330 with traditional IEEE-754 values, a provider recognizes that the object's vertices are undergoing different levels of displacement, and uses this information to encode the movement with different sized delta values. That is, the middle vertex requires a very small delta value, say just a few bits, while the end vertices require higher precision (more bits) to properly encode their movement.
Thus, a provider can embed flags within a data stream to indicate the size of the delta values for different regions of the object 330. In particular, object vertices can be grouped and transmitted together for a particular delta size, thus reducing the number of flags needing to be transmitted. As a further optimization, delta values can also (as discussed above) be encoded as integer values, thus significantly reducing transfer requirements. A further optimization is to recognize that for motion such as the bending of an object, not all vertices need to be transmitted to a recipient. For example, here, one need only send position and delta values for the end-points 338, 340, since the original and delta positions for intermediary points can be interpolated along the length of the object.
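A sketch of the end-point interpolation optimization, assuming simple linear interpolation as the reconstruction rule (the text does not fix a specific interpolation scheme):

```python
def lerp(p, q, t):
    # Linear interpolation between two 3D points at parameter t in [0, 1].
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def reconstruct_rod(end_a, end_b, n_points):
    """Rebuild n_points vertices (ends included) from only the two end-points."""
    return [lerp(end_a, end_b, i / (n_points - 1)) for i in range(n_points)]
```

Only the two end-point positions (and their deltas) need cross the network; the recipient regenerates every intermediary vertex locally.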
FIG. 7 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. The invention may be described by reference to different high-level program modules and/or low-level hardware contexts. Those skilled in the art will realize that program module references can be interchanged with low-level instructions.
Program modules include procedures, functions, programs, components, data structures, and the like, that perform particular tasks or implement particular abstract data types. The modules may be incorporated into single and multi-processor computing systems, as well as hand-held devices and controllable consumer devices. It is understood that modules may be implemented on a single computing device, or processed over a distributed network environment, where modules can be located in both local and remote memory storage devices.
An exemplary system for implementing the invention includes a computing device 402 having a system bus 404 for coupling together various components within the computing device. The system bus 404 may be any of several types of bus structure, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of conventional bus architectures such as PCI, AGP, VESA, MicroChannel, ISA and EISA, to name a few. Note that only a single bus is illustrated, although plural buses typically achieve performance benefits. Typically attached to the bus 404 are a processor 406, a memory 408, storage devices (e.g., fixed 410, removable 412, optical/laser 414), a video interface 416, input/output interface ports 418, and a network interface 420.
The processor 406 may be any of various commercially available processors, including Intel processors, the DEC Alpha, the PowerPC, programmable gate arrays, signal processors, or the like. Dual and quad processors, and other multi-processor architectures, also can be used. The system memory includes random access memory (RAM) 422 and static or reprogrammable read only memory (ROM) 424. A basic input/output system (BIOS), stored in ROM, contains routines for information transfer between device 402 components or device initialization.
The fixed storage 410 generally refers to hard drives and other semi-permanently attached media, whereas removable storage 412 generally refers to a device-bay into which removable media such as a floppy diskette is removably inserted. The optical/laser storage 414 includes devices based on CD-ROM, DVD, or CD-RW technology, and these are usually coupled to the system bus 404 through a device interface 426, 428, 430. The storage systems and associated computer-readable media provide storage of data and executable instructions for the computing device 402. Note that other storage options include magnetic cassettes, tapes, flash memory cards, memory sticks, digital video disks, and the like. The exemplary computing device 402 can store and execute a number of program modules within the RAM 422, ROM 424, and storage devices 410, 412, 414. Typical program modules include an operating system 432, application programs 434 (e.g., a web browser or network application program), etc., and application data 436. Program module or other system output can be processed by the video system 416 (e.g., a 2D and/or 3D graphics rendering device), which is coupled to the system bus 404 and an output device 438. Typical output devices include monitors, flat-panel displays, liquid-crystal displays, and recording devices such as video-cassette recorders.
A user of the computing device 402 is typically a person interacting with the computing device through manipulation of an input device 440. Common input devices include a keyboard, mouse, tablet, touch-sensitive surface, digital pen, joystick, microphone, game pad, satellite dish, etc. One can also provide input through manipulation of a virtual reality environment, or through processing the output from a data file or another computing device.
The computing device 402 is expected to operate in a networked environment using logical connections to one or more remote computing devices. One such remote computing device 442 may be a web server or other program module utilizing a network application protocol (e.g., HTTP, File Transfer Protocol (FTP), Gopher, Wide Area Information Server (WAIS)), a router, a peer device or other common network node, and typically includes many or all of the elements discussed for the computing device 402. The computing device 402 has a network interface 420 (e.g., an Ethernet card) coupled to the system bus 404, to allow communication with the remote device 442. Both the local computing device 402 and the remote computing device 442 can be communicatively coupled to a network 444 such as a WAN, LAN, Gateway, Internet, or other public or private data-pathway. It will be appreciated that other communication links between the computing devices, such as through a modem 446 coupled to an interface port 418, may also be used.
In accordance with the practices of persons skilled in the art of computer hardware and software programming, the present invention is described with reference to acts and symbolic representations of operations that are sometimes referred to as being computer-executed. It will be appreciated that the acts and symbolically represented operations include the manipulation by the processor 406 of electrical signals representing data bits which causes a resulting transformation or reduction of the electrical signal representation, and the maintenance of data bits at memory locations in the memory 408 and storage systems 410, 412, 414, so as to reconfigure or otherwise alter the computer system's operation and/or processing of signals. The memory locations where data bits are maintained are physical locations having particular electrical, magnetic, or optical properties corresponding to the data bits.
Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments can be modified in arrangement and detail without departing from such principles.
For example, while the foregoing description focused, for expository convenience, on compressing floating point values for vertices, it will be recognized that the same techniques and analyses can be applied to different numeric values needing transport between a content provider and a player (e.g., for compressing sound effects). Consequently, in view of the wide variety of alternate applications for the invention, the detailed embodiments are intended to be illustrative only, and should not be taken as limiting the scope of the invention. Rather, what is claimed as the invention, is all such modifications as may come within the scope and spirit of the following claims and equivalents thereto.

Claims

What is claimed is:
1. A computing-device implemented method for compressing a data model, defined by plural data points, for transfer from a provider to a recipient, comprising: providing a data model having a first and a second data point; determining first offsets from the first data point for the second data point; and re-coding the second data point in terms of the determined first offsets.
2. A method according to claim 1, wherein the first data point is a predicted value of the second data point.
3. A method according to claim 2, wherein the first data point is predicted using one or more non-speculative data values.
4. A method according to claim 2, wherein the first data point is predicted using at least one speculative data value.
5. A method according to claim 2, wherein the first data point is predicted using extrapolation.
6. A method according to claim 2, wherein the first data point is predicted using triangulation.
7. A method according to claim 1, in which the data model includes a third data point, the method further comprising: determining second offsets from the second data point for the third data point; and re-coding the third data point in terms of the determined second offsets.
8. A method according to claim 7, in which the data model includes a fourth, a fifth and a sixth data point, the method further comprising: determining third offsets from the fourth data point for the fifth data point; re-coding the fifth data point in terms of the determined third offsets; determining fourth offsets from the fifth data point for the sixth data point; and re-coding the sixth data point in terms of the determined fourth offsets.
9. A method according to claim 8, wherein determining the first and second offsets are performed using a first movement factor, and wherein determining the third and fourth offsets are performed using a second movement factor.
10. An article of manufacture comprising a computing-device readable medium having instructions encoded thereon to cause a processor to perform the operations of claim 9.
11. An article of manufacture comprising a computing-device readable medium having instructions encoded thereon to cause a processor to perform the operations of claim 1.
12. A computing-device implemented method for decompressing a region of a compressed data model having a first data point and an original second data point, in which first offsets from the first data point are determined for the second data point, and where the second data point is re-coded in terms of the determined first offsets, comprising: receiving a first data point; receiving a re-coded second data point; comparing the first data point to the re-coded second data point; and expanding the re-coded second data point into the second data point according to the comparing.
13. A method according to claim 12, in which the re-coded second data point is encoded with offset values from the first data point, wherein the expanding includes applying the offset values to the first data point.
14. A method according to claim 13, further comprising: receiving a motion type for the region; and defining supplementary data points based on the first data point and the expanded original second data point; wherein the defining is performed according to the motion type.
15. A method according to claim 14, wherein the motion type is a selected one of linear motion, oscillating motion, or a predetermined movement pattern, and wherein the defining is constrained so that the first, second, and supplemental data points correspond to a range of motion for the motion type.
16. An article of manufacture comprising a graphics rendering device in communication with a memory, such memory containing instructions capable of causing the rendering device to perform the operations of claim 14.
17. An article of manufacture comprising a computer readable medium having instructions encoded thereon to cause a processor to perform the operations of claim 14.
18. A computing-device implemented method for predicting movement for a region of an object, such region defined in part by plural spatial coordinates arranged as tuples, the method comprising: providing a first coordinate tuple and a second coordinate tuple; applying an estimation function to the first coordinate tuple to get an estimated coordinate tuple; and defining a delta tuple comprising a difference between the estimated tuple and the second tuple.
19. A method according to claim 18, further comprising: determining a motion type for the region; wherein the estimation function is selected from plural estimation functions according to the motion type.
20. A method according to claim 19, wherein the estimation function is a selected one of a linear motion estimator, an oscillating motion estimator, a triangulation estimator, or a movement pattern estimator.
PCT/US2000/040911 1999-09-18 2000-09-15 Data compression through offset representation WO2001022198A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU12529/01A AU1252901A (en) 1999-09-18 2000-09-15 Data compression through offset representation
EP00974111A EP1222508B1 (en) 1999-09-18 2000-09-15 Data compression through offset representation
DE60026346T DE60026346T2 (en) 1999-09-18 2000-09-15 DATA COMPRESSION BY OFFSET PRESENTATION

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/399,063 1999-09-18
US09/399,063 US6512515B1 (en) 1999-09-18 1999-09-18 Data compression through motion and geometric relation estimation functions

Publications (2)

Publication Number Publication Date
WO2001022198A2 true WO2001022198A2 (en) 2001-03-29
WO2001022198A3 WO2001022198A3 (en) 2002-11-07

Family

ID=23577976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/040911 WO2001022198A2 (en) 1999-09-18 2000-09-15 Data compression through offset representation

Country Status (6)

Country Link
US (1) US6512515B1 (en)
EP (1) EP1222508B1 (en)
AT (1) ATE319127T1 (en)
AU (1) AU1252901A (en)
DE (1) DE60026346T2 (en)
WO (1) WO2001022198A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7716333B2 (en) 2001-11-27 2010-05-11 Accenture Global Services Gmbh Service control architecture
US7734793B2 (en) 2001-11-27 2010-06-08 Accenture Global Services Gmbh Service control framework for seamless transfer of a multimedia conference over different media

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2002314902A1 (en) * 2001-06-02 2002-12-16 Polycom, Inc. System and method for point to point integration of personal computers with videoconferencing systems
US7610563B2 (en) * 2002-03-22 2009-10-27 Fuji Xerox Co., Ltd. System and method for controlling the display of non-uniform graphical objects
US7249327B2 (en) * 2002-03-22 2007-07-24 Fuji Xerox Co., Ltd. System and method for arranging, manipulating and displaying objects in a graphical user interface
US20060189393A1 (en) * 2005-02-22 2006-08-24 Albert Edery Real action network gaming system
US7752303B2 (en) * 2006-02-23 2010-07-06 Wily Technology, Inc. Data reporting using distribution estimation
US10796484B2 (en) * 2017-06-14 2020-10-06 Anand Babu Chitavadigi System and method for interactive multimedia and multi-lingual guided tour/panorama tour
US11509865B2 (en) * 2020-05-12 2022-11-22 True Meeting Inc Touchups, denoising and makeup related to a 3D virtual conference

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0536801A2 (en) * 1991-10-11 1993-04-14 Spacelabs, Inc. A method and system for lossless and adaptive data compression and decompression
EP0757332A2 (en) * 1995-08-04 1997-02-05 Sun Microsystems, Inc. Method and apparatus for geometric compression of three-dimensional graphics
EP0889440A2 (en) * 1997-06-30 1999-01-07 Sun Microsystems, Inc. Method and apparatus for geometric compression of three-dimensional graphics

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3656178A (en) 1969-09-15 1972-04-11 Research Corp Data compression and decompression system
US5155772A (en) 1990-12-11 1992-10-13 Octel Communications Corporations Data compression system for voice data
DE4225434A1 (en) 1991-08-02 1993-02-04 Sony Corp DEVICE FOR RECORDING AND PLAYING BACK COMPRESSED DIGITAL DATA ON OR FROM A RECORD CARRIER AND APPLICABLE METHOD FOR BIT REMOVAL
US5740409A (en) 1996-07-01 1998-04-14 Sun Microsystems, Inc. Command processor for a three-dimensional graphics accelerator which includes geometry decompression capabilities


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DANSKIN J: "HIGHER BANDWIDTH X" ACM MULTIMEDIA, PROCEEDINGS OF THE INTERNATIONAL CONFERENCE, NEW YORK, NY, US, 15 October 1994 (1994-10-15), pages 89-96, XP000618613 *


Also Published As

Publication number Publication date
DE60026346T2 (en) 2006-11-02
DE60026346D1 (en) 2006-04-27
EP1222508B1 (en) 2006-03-01
EP1222508A1 (en) 2002-07-17
ATE319127T1 (en) 2006-03-15
US6512515B1 (en) 2003-01-28
WO2001022198A3 (en) 2002-11-07
AU1252901A (en) 2001-04-24

Similar Documents

Publication Publication Date Title
EP1222635B1 (en) Data compression
US10230565B2 (en) Allocation of GPU resources across multiple clients
Shi et al. A survey of interactive remote rendering systems
WO2022100522A1 (en) Video encoding method, video decoding method, apparatus, electronic device, storage medium, and computer program product
JP5943330B2 (en) Cloud source video rendering system
CN103329526B (en) moving image distribution server and control method
US6512515B1 (en) Data compression through motion and geometric relation estimation functions
Bao et al. A framework for remote rendering of 3-D scenes on limited mobile devices
US6549206B1 (en) Graphic scene animation signal, corresponding method and device
JP3955178B2 (en) Animated data signal of graphic scene and corresponding method and apparatus
Preda et al. A model for adapting 3D graphics based on scalable coding, real-time simplification and remote rendering
CN115731324A (en) Method, apparatus, system and medium for processing data
Moran et al. 3D game content distributed adaptation in heterogeneous environments
Tack et al. Eliminating CPU overhead for on-the-fly content adaptation with MPEG-4 wavelet subdivision surfaces
Vázquez et al. Bandwidth reduction for remote navigation systems through view prediction and progressive transmission
Morán et al. Adaptive 3d content for multi-platform on-line games
WO2024010588A1 (en) Cloud-based gaming system for supporting legacy gaming applications with high frame rate streams
TWI532005B (en) An animation distribution server, an animation reproduction apparatus, a control method, a program, and a recording medium
Vázquez et al. Bandwidth reduction techniques for remote navigation systems
WO2021001745A1 (en) Forward and inverse quantization for point cloud compression using look-up tables
CN117062656A (en) Low latency multi-pass frame-level rate control using shared reference frames
Hassan A Hybrid Remote Rendering Approach for Graphic Applications on Cloud Computing
Martin et al. Interactive 3D Rendering and Visualization in Networked Environments.
Meyer et al. Stateless Remote Environment Navigation with View Compression

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2000974111

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2000974111

Country of ref document: EP

AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

NENP Non-entry into the national phase

Ref country code: JP

WWG Wipo information: grant in national office

Ref document number: 2000974111

Country of ref document: EP