EP1532818A2 - Method of streaming a 3-D wireframe animation - Google Patents
Method of streaming a 3-D wireframe animation
- Publication number
- EP1532818A2 (application EP03793092A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- layer
- layers
- animation
- mesh
- wireframe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
- 238000000034 method Methods 0.000 title claims description 74
- 230000000007 visual effect Effects 0.000 claims abstract description 39
- 238000000638 solvent extraction Methods 0.000 claims abstract description 19
- 230000015556 catabolic process Effects 0.000 claims abstract description 3
- 238000006731 degradation reaction Methods 0.000 claims abstract description 3
- 230000003068 static effect Effects 0.000 claims description 18
- 230000002452 interceptive effect Effects 0.000 claims description 8
- 238000005192 partition Methods 0.000 claims description 5
- 230000001747 exhibiting effect Effects 0.000 claims 1
- 238000013459 approach Methods 0.000 abstract description 8
- 238000012937 correction Methods 0.000 abstract description 7
- 239000010410 layer Substances 0.000 description 88
- 230000008569 process Effects 0.000 description 13
- 238000006073 displacement reaction Methods 0.000 description 11
- 230000006870 function Effects 0.000 description 11
- 239000011159 matrix material Substances 0.000 description 11
- 238000007906 compression Methods 0.000 description 7
- 230000006835 compression Effects 0.000 description 7
- 238000013461 design Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 7
- 238000002474 experimental method Methods 0.000 description 7
- 230000036962 time dependent Effects 0.000 description 7
- 230000005540 biological transmission Effects 0.000 description 6
- 238000005457 optimization Methods 0.000 description 6
- 230000002829 reductive effect Effects 0.000 description 6
- 230000008901 benefit Effects 0.000 description 5
- 238000004422 calculation algorithm Methods 0.000 description 5
- 230000000052 comparative effect Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 5
- 238000013139 quantization Methods 0.000 description 5
- 238000004364 calculation method Methods 0.000 description 4
- 239000002356 single layer Substances 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 230000003466 anti-cipated effect Effects 0.000 description 2
- 230000001186 cumulative effect Effects 0.000 description 2
- 238000009795 derivation Methods 0.000 description 2
- 230000006872 improvement Effects 0.000 description 2
- 230000003993 interaction Effects 0.000 description 2
- 230000000670 limiting effect Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000011160 research Methods 0.000 description 2
- 230000003595 spectral effect Effects 0.000 description 2
- 230000002123 temporal effect Effects 0.000 description 2
- 238000009825 accumulation Methods 0.000 description 1
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 230000003111 delayed effect Effects 0.000 description 1
- 238000005315 distribution function Methods 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 238000013467 fragmentation Methods 0.000 description 1
- 238000006062 fragmentation reaction Methods 0.000 description 1
- 238000009432 framing Methods 0.000 description 1
- 230000005055 memory storage Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
- 230000004393 visual impairment Effects 0.000 description 1
- 239000002699 waste material Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
- H04N19/29—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding involving scalability at the object level, e.g. video object layer [VOL]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/37—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability with arrangements for assigning different transmission priorities to video input data or to video coded data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/65—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
- H04N19/67—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience involving unequal error protection [UEP], i.e. providing protection according to the importance of the data
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/89—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
- H04N19/895—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
Definitions
- Non-Provisional Application entitled “Coding of Animated 3-D Wireframe Models For Internet Streaming Applications: Methods, Systems and Program Products", Serial Number 10/198129, filed July 19, 2002, assigned to the same assignee as that of the present application and fully incorporated herein by reference.
- the present invention relates to streaming data and more specifically relates to a system and method of streaming 3-D wireframe animations.
- Figure 2 is a comparative plot of distortion metrics including PSNR, Hausdorff Distance and Visual Smoothness;
- Figure 3A represents a flowchart of method of error resilient wireframe streaming;
- Figure 3B illustrates a flowchart according to an aspect of the invention
- Figure 4 is a comparative plot of three error concealment methods for sequence for wireframe animation TELLY;
- Figure 5 is a comparative plot of Visual smoothness (VS) transmitted and decoded frames of 3 layers of the wireframe animation TELLY;
- Figure 6 is a comparative plot of Visual Smoothness between transmitted and decoded frames of 2 layers of wireframe animation BOUNCEBALL.
- an aspect of the invention comprises partitioning the animation stream into a number of layers and applying Reed-Solomon (RS) forward error correction (FEC) codes to each layer independently and in such a way as to maintain the same overall bitrate whilst minimizing the perceptual effects of error, as measured by a distortion metric related to static 3-D mesh compression.
- RS Reed-Solomon
- FEC forward error correction
- v_1, v_2, ..., v_n, at time t, where n is the number of vertices in the mesh. Since a vertex has three space components (x_j, y_j, z_j), and assuming that no connectivity changes occur in time
- the objective of the 3D-Animation compression algorithm is to compress the sequence of matrices M_t that form the synthetic animation, for transmission over a communications channel. Obviously, for free-form animations of a 3-D mesh the coordinates of the mesh may exhibit high variance, which makes the M_t matrices unsuitable for compression.
- the signal can be defined as the set of non-zero displacements of all vertices in all nodes at time t:
- an I-frame describes changes from the reference model M 0 to the model at the current time instant t.
- a P-frame describes the changes of a model from the previous time instant t - 1 to the current time instant t.
- the corresponding position and displacement matrices for I and P frames are denoted respectively by
- a DPCM decoder 120 is shown in Figure 1B. The decoder 120 first decodes
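The I-frame/P-frame reconstruction described above can be sketched as follows; this is an illustrative reading of the scheme (I-frames carry displacements from the reference model M_0, P-frames carry displacements from the previous frame), with all names and data layouts assumed rather than taken from the patent's actual decoder.

```python
# Sketch of DPCM-style reconstruction of animated vertex positions.
# An I-frame's displacements are applied to the reference mesh m0;
# a P-frame's displacements are applied to the previously decoded frame.

def reconstruct(frames, m0):
    """frames: list of (kind, displacements), kind is 'I' or 'P';
    m0: reference mesh as a list of (x, y, z) tuples."""
    current = list(m0)
    out = []
    for kind, disp in frames:
        base = m0 if kind == 'I' else current
        current = [tuple(b + d for b, d in zip(v, dv))
                   for v, dv in zip(base, disp)]
        out.append(current)
    return out
```

For a one-vertex mesh at the origin, an I-frame displacement of (1, 0, 0) followed by a P-frame displacement of (0, 1, 0) yields positions (1, 0, 0) and then (1, 1, 0), illustrating the cumulative nature of P-frames.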
- let R(m, n) be the probability of m − 1 packet losses within the next n − 1 packets following a lost packet. This probability can be calculated from the recurrence:
- P(m, n) determines the performance of the FEC scheme, and can be expressed as a function of the channel loss parameters, P_B and the burst length, using Eqs. 3 and 4.
- the expression of P(m, n) can be used in an RS(m, n) FEC scheme for optimized source/channel rate allocation that minimizes the visual distortion.
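The quantity R(m, n) defined above can be illustrated by simulating a hypothetical two-state Gilbert loss channel; the transition probabilities p (good to bad) and q (bad to good) below are illustrative assumptions, and the patent's own recurrence (its Eqs. 3 and 4) gives the closed form that this Monte-Carlo sketch only approximates.

```python
import random

# Monte-Carlo estimate of R(m, n): the probability of m-1 losses within the
# n-1 packets following a lost packet, under an assumed two-state Gilbert
# channel. State 'bad' means the packet is lost.

def estimate_R(m, n, p, q, trials=20000, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        losses = 0
        bad = True  # conditioned on a packet having just been lost
        for _ in range(n - 1):
            # stay bad with prob 1-q; enter bad from good with prob p
            bad = (rng.random() >= q) if bad else (rng.random() < p)
            losses += bad
        hits += (losses == m - 1)
    return hits / trials
```

With q = 1 (the channel always recovers immediately) and p = 0, no further losses can occur, so R(1, n) is 1; summing the estimate over all m for a fixed seed recovers a total probability of 1.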
- the Node Table (an ordered list of all nodes in the scene) is either known a priori at the receiver, since the reference wireframe model exists there already, or is downloaded by other means.
- the VertexMasks are defined, one per axis, for the vertices to be animated.
- one frame which represents one Application Data Unit (ADU)
- ADU Application Data Unit
- the 3D-Animation codec's output bitstream is 'naturally packetizable' according to the known Application Level Framing (ALF) principle.
- An RTP packet payload format is considered starting with the NodeMask and VertexMasks, followed by the encoded samples along each axis.
- a more efficient packetization scheme is sought that satisfies the requirements set out above: (a) to accommodate layered bitstreams, and (b) to produce a constant bitrate stream.
- This efficiency can be achieved by appropriately adapting the block structure known as Block-Of-Packets (BOP).
- BOP Block-Of-Packets
- encoded frames of a single layer are placed sequentially in line order of an n-line by S_P-column grid structure, and then RS codes are generated vertically across the grid.
- error resilience information is appended so that the length of the grid is n for k frames of source data.
- This method is most appropriate for packet networks with burst packet errors, and can be fully described by the sequence frame rate FR, the packet size S_P, and the data frame rate in a BOP, F_BOP.
- This equation serves as a guide to the design of efficient packetization schemes by appropriately balancing the parameters F_BOP, n and S_P. It also encompasses the trade-off between delay and resilience.
- F_BOP, n and S_P are the parameters needed for one BOP structure per layer.
- RS code rates can be allocated to each layer, thus providing an unequal level of error protection to each layer. The way these parameters are adjusted in practice for the application of 3-D animation streaming, considering a measure of visual error, is explained next.
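The BOP grid arrangement described above can be sketched structurally as follows. A real implementation would generate RS(n, k) parity symbols down each column of the grid; the single XOR parity row used here is only a stand-in to show the row-wise placement of source frames and the column-wise generation of redundancy.

```python
from functools import reduce

def build_bop(frames, k, n, sp):
    """Arrange k encoded frames (byte strings) row-wise into a k x sp grid,
    then append n - k redundancy rows computed column-wise.
    Stand-in: one XOR parity row, repeated, instead of true RS parity."""
    assert len(frames) == k
    rows = [f.ljust(sp, b'\x00')[:sp] for f in frames]  # pad/trim to S_P
    parity = bytes(reduce(lambda a, b: a ^ b, (r[c] for r in rows))
                   for c in range(sp))
    return rows + [parity] * (n - k)
```

With XOR parity, a single lost source row can be recovered by XOR-ing the parity row with the surviving rows, which mirrors (in the weakest possible form) how the vertical RS codes recover lost packets within a BOP.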
- the Hausdorff Distance is defined in the present case as the maximum minimum distance between the vertices of two sets, M_j and its decoded counterpart.
- distortion metrics can be derived by equivalence to natural video coding, such as SNR and PSNR, but they are tailored to the statistical properties of the specific signal they encode, failing to give a uniform measure of user-perceived distortion across signals, encoding methods, and media. Moreover, especially for 3-D meshes, all these metrics give only objective indications of geometric closeness, or signal-to-noise ratios, and they fail to capture the more subtle visual properties the human eye appreciates, such as surface smoothness.
- Figure 2 illustrates a comparative plot 200 of distortion metrics: PSNR, Hausdorff Distance, and Visual Smoothness for 150 frames of the animated sequence BOUNCEBALL with I-frame frequency at 8 Hz.
- the two upper plots show the expected correlation between the corresponding metrics of geometric distance and Hausdorff Distance (eq. 7) they represent.
- the two lower plots indicate that the visual distortion (eq. 8) might be low in cases where the geometric distance is high and vice-versa.
- n(i) is the set of indices of the neighbors of vertex i, and l_ij is the geometric distance between vertices i and j.
- the new metric is defined as the average of the norm of the geometric distance between meshes and the norm of the Laplacian difference (m_t and m̂_t are the original and decoded meshes at time t).
- This metric in Eq. 8 is preferably used in the present invention, and will be referred to hereafter as the Visual Smoothness metric (VS).
- VS Visual Smoothness metric
- Other equations that also relate to the visual smoothness of the mesh may also be used.
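A smoothness-aware metric of this kind can be sketched as below. The inverse-edge-length geometric Laplacian and the equal weighting of the two terms are assumptions for illustration; the exact weighting of the patent's Eq. 8 is not reproduced here.

```python
import math

# Sketch of a Visual-Smoothness-style metric: combine the plain geometric
# error between two meshes with the error in a geometric Laplacian
# (surface-smoothness) term.

def laplacian(verts, neighbors):
    """Geometric Laplacian per vertex, with inverse-distance weights."""
    out = []
    for i, v in enumerate(verts):
        w = [1.0 / math.dist(v, verts[j]) for j in neighbors[i]]
        s = sum(w)
        avg = tuple(sum(wk * verts[j][c] for wk, j in zip(w, neighbors[i])) / s
                    for c in range(3))
        out.append(tuple(vc - ac for vc, ac in zip(v, avg)))
    return out

def visual_smoothness(m, m_hat, neighbors):
    """Average of geometric error and Laplacian-difference error."""
    geo = sum(math.dist(a, b) for a, b in zip(m, m_hat))
    lap = sum(math.dist(a, b)
              for a, b in zip(laplacian(m, neighbors),
                              laplacian(m_hat, neighbors)))
    return 0.5 * (geo + lap) / len(m)
```

For identical meshes the metric is zero, and it grows when vertices are perturbed, capturing both displacement and loss of surface smoothness.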
- the layering is performed in a way that the average VS value of each layer reflects its importance in the animation sequence.
- the VS from Eq. 8 is computed for every node in the mesh independently and the nodes are ordered according to their average VS in the sequence.
- a node, or group of nodes, with the highest average VS forms the first and most important layer visually, L_0.
- Subsequent importance layers L_1, ..., L_M are created from the correspondingly subsequent nodes, or groups of nodes, in the VS order.
- if a 3-D mesh has more nodes than the desirable number of layers, then the number of nodes to be grouped in the same layer is a design choice, and dictates the output bitrate of the layer. For meshes with only a few nodes but a large number of vertices per node, node partitioning might be desirable. The partitioning would restructure the 3-D mesh's vertices into a new mesh with more nodes than originally. This process will affect connectivity, but not the overall rendered model. Mesh partitioning into nodes, if it is possible, should not be arbitrary, but should rather reflect the natural objects these new nodes will represent in the 3-D scene and their corresponding motion.
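The node-to-layer assignment described above reduces to a ranking step: order the nodes by their average VS over the sequence and group them, highest first, into layers. The function name, the dict-based input, and the explicit group sizes below are illustrative choices, reflecting the text's remark that group sizes are a design decision.

```python
# Sketch of VS-based layering: nodes ranked by average VS over the
# sequence are grouped, highest first, into layers L_0, L_1, ...

def build_layers(avg_vs_per_node, group_sizes):
    """avg_vs_per_node: dict node_name -> average VS over the sequence.
    group_sizes: number of nodes allocated to each successive layer."""
    ranked = sorted(avg_vs_per_node, key=avg_vs_per_node.get, reverse=True)
    layers, pos = [], 0
    for size in group_sizes:
        layers.append(ranked[pos:pos + size])
        pos += size
    return layers
```

For example, with nodes scored {face: 3.0, hair: 1.0, nostril: 0.5} and group sizes [1, 2], the face node alone forms the most important layer L_0 and the remaining nodes form L_1.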
- Decoding layer L_i (which has been built with appropriate node grouping or node partitioning) does not necessarily only refine the quality of data contained in the previous layers L_0, ..., L_{i−1}, but adds animation details to the animated model by, for example, adding animation to more vertices in the model.
- TELLY (discussed more fully below), which is a head-and-shoulders talking avatar.
- TELLY always faces the camera (static camera). Since the camera does not move, it is a waste of bandwidth to animate the back side of the hair.
- accordingly, the layer carrying the invisible back-side vertices is not transmitted.
- a user views the wireframe mesh or animation in a static mode, only the visible portions of the animation can be seen since the animation does not rotate or move.
- In the case where the user should be able to examine the animation by rotating or zooming in on the avatar (or other model), or look at its back side, this invisible-vertex layer is sent. The user then views the animation in an interactive mode that enables viewing portions of the animation that were invisible in the static mode, due to the lack of motion of the animation. Note that this layer does not refine the animation of the visible hair node in the lower layer; it contains additional animation data for the invisible vertices. This provides an example result of the partitioning method.
- the "interactive mode" does not necessarily require user interaction with the animation.
- the interactive mode refers to any viewing mode wherein the animation can move or rotate to expose a portion of the animation previously hidden.
- the animation may move and rotate in a more human or natural way while speaking.
- the invisible layers may be sent to provide the additional animation data needed to complete the viewing experience.
- the choice of static or interactive mode may depend on the available bandwidth: if enough bandwidth is available to transmit both the visible and invisible layers of the animation, the animation can be viewed in an interactive mode instead of a static mode.
- the user may select the static or interactive mode and thus control what layers are transmitted.
- Figure 3A illustrates an example set of steps according to an aspect of the invention.
- the method comprises partitioning the 3-D wireframe mesh (302), computing the VS value for each node in the mesh (304) and layering data associated with the wireframe mesh into a plurality of layers such that an average VS value associated with each layer reflects the respective layer's importance in an animation sequence (306).
- the same overall bitrate is maintained when transmitting the plurality of layers by applying an error correction code to each layer, where the amount of error correction is unequal across layers according to each layer's importance (308).
- the term "partition" can refer to a preprocessing step, such as partitioning the mesh into arbitrary or non-arbitrary sub-meshes that will be allocated to the same layer. The term may also cover other operations, such as the process of generating the various layers comprising one or more nodes.
- FIG. 3B illustrates a flowchart of another aspect of the invention.
- the method comprises allocating more redundancy to a layer of the plurality of layers that exhibits the greatest visual distortion (320). This may be, for example, a layer comprising visually coarse information.
- the redundancy is gradually reduced on layers having less contribution to visual smoothness (322).
- Interpolation-based concealment is applied to each layer at the receiver where an irrecoverable loss of packets occurs only within the respective layer (324) from the standpoint of the receiver. As packets belonging to a particular layer travel through the communications network, they may take different paths from the sender to the receiver, thus suffering variable delays and losses.
- steps 320 and 322 are performed on the coding/ transmitter end and step 324 is performed at the receiver over a communications network, such as a peer-to-peer network.
- From Eqs. 9 and 10, VS(t) can be described as:
- Equation 11 estimates, in a statistical sense, the expected visual smoothness experienced per frame at the decoder. The objective is to minimize this distortion with respect to the values of the k_jt's in Eq. 11. From the way the bitstream is split into layers, it is expected that the optimization process allocates more redundancy to the layer that exhibits the greatest visual distortion (the coarse layer), and gradually reduces the redundancy rate on layers with the finest contribution to the overall smoothness. There are L values of k_t that need to be calculated.
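The allocation just described can be illustrated with a brute-force search over per-layer redundancy under a fixed total budget. The residual-loss model inside the search is a deliberately simple placeholder, not the patent's Eq. 11; it merely has the right shape (more parity means a lower residual loss probability), which is enough to show the optimizer favoring the visually coarse layer.

```python
from itertools import product

# Illustrative search for per-layer redundancy r_j minimizing an expected
# distortion of the form sum_j w_j * P_residual(r_j), under a fixed total
# redundancy budget. P_residual is a placeholder: p_loss^(r+1).

def allocate(vs_weights, budget, p_loss, max_red=8):
    best, best_alloc = float('inf'), None
    for alloc in product(range(max_red + 1), repeat=len(vs_weights)):
        if sum(alloc) != budget:
            continue
        cost = sum(w * p_loss ** (r + 1) for w, r in zip(vs_weights, alloc))
        if cost < best:
            best, best_alloc = cost, alloc
    return best_alloc
```

With weights [10.0, 1.0] (a visually important layer and a fine one), a budget of 3 parity units, and a 10% loss rate, the search assigns more redundancy to the heavy layer, matching the behavior the text predicts for the optimization process.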
- the present invention preferably uses interpolation-based error concealment at the receiver in the case where the channel decoder receives less than n − k_jt BOP packets.
- the k_jt's that provide a solution to the optimization problem will also give minimum distortion if combined with concealment techniques. The expected distortion in such cases will be lower than the distortion without error concealment.
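Interpolation-based concealment of irrecoverably lost frames within a layer can be sketched as linear interpolation of vertex positions between the nearest correctly decoded frames before and after the gap. This is an illustrative reading of the technique, with the assumption that the first and last frames of the window were received.

```python
# Sketch of interpolation-based concealment: lost frames (None) are
# filled by linearly interpolating vertex positions between the nearest
# received frames on either side of the gap.

def conceal(frames):
    """frames: list of per-frame vertex lists, None marking lost frames.
    Assumes frames[0] and frames[-1] were received."""
    out = list(frames)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while out[j] is None:  # find the next received frame
                j += 1
            prev, nxt, span = out[i - 1], out[j], j - i + 1
            for k in range(i, j):
                t = (k - i + 1) / span
                out[k] = [tuple(p + t * (n - p) for p, n in zip(vp, vn))
                          for vp, vn in zip(prev, nxt)]
            i = j
        i += 1
    return out
```

For a one-vertex layer with frames at (0, 0, 0) and (2, 2, 2) and the middle frame lost, concealment reconstructs the midpoint (1, 1, 1).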
- with n = 32 as the parameter, the calculations for each layer's packetization from Eq. 6 are tabulated in Table I. The value of n is chosen as a compromise between latency and efficiency, since a higher n makes the RS codes more resilient, at the cost of delay and buffer space.
- TELLY was split into 3 layers according to the suggested layering method presented in Section V, each consisting of the nodes shown in Table I.
- the suggested layering scheme allocated 2 out of 3 sparse nodes to the same layer, L_1. The total number of vertices of these two sparse nodes represents 65% of the vertices in the reference mesh.
- the third sparse node, Nostril, was allocated to layer L_2, but its individual motion relates to a very small fraction of the model's total number of vertices (≈ 1.3%). This fact may bear some significance if one desires to relate the node-to-layer allocation (using the VS metric) to the density factor df calculated per layer (Eq. 2), and to the output bitrates. If such a relation exists, a dynamic layering scheme may be developed for applications with such needs.
- Figure 5 depicts a first diagram 502 illustrating VS as a function of the average packet loss rate, P_B, for TELLY.
- the four curves on the plot represent the suggested resilience methods, for the code (31, 22).
- the average calculated codes for the UEP are as follows
- UEP and UEP+EC outperform NP and EEP for medium to high loss rates of P_B > 9%.
- the layering is performed in such a way that the lowest layer exhibits high average visual distortion. Since the UEP method allocates stronger codes to the lower layer (L_0), better resilience is expected for L_0 at high loss rates. This factor dominates the average distortion, resulting in better performance. At low loss rates it was noticed that EEP and UEP behave in approximately the same way, as the RS codes are more than sufficient to recover all or most errors. It is also noted that the NP method under conditions of no loss is much better than any other. This is an intuitive result, since source information takes all the available channel rate, thus better encoding the signal. It is also worth noticing the effect of EC: the distortion of the UEP+EC scheme is slightly improved over the simple UEP case. This is also expected.
- Figure 6 shows the results 602 achieved for the same experiment repeated over the
- BOUNCEBALL sequence which was 'symmetrically' layered as described earlier in this section.
- the same (31, 22) EEP code was used as before for comparison.
- the graph 602 shows the same trends and relative performances as for TELLY, with UEP+EC giving the best overall performance. It is noted, however, that the distance of the UEP curves from the EEP ones decreased considerably compared to the TELLY sequence at high P_B's.
- the present invention addresses the fundamental problem of how best to utilize the available channel capacity for streaming 3-D wireframe animation in such a way as to achieve optimal subjective resilience to error.
- the invention links channel coding, packetization, and layering with a subjective parameter that measures visual smoothness in the reconstructed image. On this basis, it is believed that the result may help open the way for 3-D animation to become a serious networked media type.
- the disclosed methods attempt to optimize the distribution of the bit budget allocation reserved for channel coding amongst different layers, using a metric that reflects the human eye's visual property of detecting surface smoothness on time-dependent meshes. Using this metric, the encoded bitstream is initially partitioned into layers of visual importance, and experimental results show that UEP combined with EC yields good protection against burst packet errors occurring on the Internet.
- Embodiments within the scope of the present invention may also include computer- readable media for carrying or having computer-executable instructions or data structures stored thereon.
- Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer.
- Such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures.
- a network or another communications connection either hardwired, wireless, or combination thereof
- any such connection is properly termed a computer-readable medium.
- Computer-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- Computer-executable instructions also include program modules that are executed by computers in stand-alone or network environments.
- program modules include routines, programs, objects, components, and data structures, etc. that perform particular tasks or implement particular abstract data types.
- Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represents examples of corresponding acts for implementing the functions described in such steps.
- Embodiments of the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. Embodiments may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination thereof) through a communications network. For example, peer-to-peer distributed environments provide an ideal communications network wherein the principles of the present invention would apply and be beneficial. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
- Detection And Prevention Of Errors In Transmission (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US40441002P | 2002-08-20 | 2002-08-20 | |
US404410P | 2002-08-20 | ||
PCT/US2003/025761 WO2004019619A2 (en) | 2002-08-20 | 2003-08-15 | Method of streaming a 3-d wireframe animation |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1532818A2 true EP1532818A2 (de) | 2005-05-25 |
Family
ID=31946722
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP03793092A Withdrawn EP1532818A2 (de) | 2002-08-20 | 2003-08-15 | Verfahren zur übertragung von einem 3-d gitternetzanimationsdatenstrom |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP1532818A2 (de) |
JP (2) | JP2005536802A (de) |
KR (1) | KR20050032118A (de) |
CA (1) | CA2495714A1 (de) |
WO (1) | WO2004019619A2 (de) |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2817066B1 (fr) * | 2000-11-21 | 2003-02-07 | France Telecom | Method for wavelet coding of a mesh representing a three-dimensional object or scene, corresponding coding and decoding devices, system and signal structure |
JP4704558B2 (ja) * | 2000-12-25 | 2011-06-15 | Mitsubishi Electric Corporation | Three-dimensional spatial data transmission and display system, three-dimensional spatial data transmission method, computer-readable recording media storing programs for executing these methods, and three-dimensional spatial data transmission and display method |
-
2003
- 2003-08-15 JP JP2004531041A patent/JP2005536802A/ja active Pending
- 2003-08-15 KR KR1020057002778A patent/KR20050032118A/ko not_active Application Discontinuation
- 2003-08-15 WO PCT/US2003/025761 patent/WO2004019619A2/en active Application Filing
- 2003-08-15 EP EP03793092A patent/EP1532818A2/de not_active Withdrawn
- 2003-08-15 CA CA002495714A patent/CA2495714A1/en not_active Abandoned
-
2009
- 2009-03-26 JP JP2009075288A patent/JP2009181586A/ja active Pending
Non-Patent Citations (1)
Title |
---|
See references of WO2004019619A2 * |
Also Published As
Publication number | Publication date |
---|---|
KR20050032118A (ko) | 2005-04-06 |
WO2004019619A3 (en) | 2004-12-02 |
JP2009181586A (ja) | 2009-08-13 |
JP2005536802A (ja) | 2005-12-02 |
CA2495714A1 (en) | 2004-03-04 |
WO2004019619A2 (en) | 2004-03-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10262439B2 (en) | System and method of streaming 3-D wireframe animations | |
US6947045B1 (en) | Coding of animated 3-D wireframe models for internet streaming applications: methods, systems and program products | |
Stuhlmuller et al. | Analysis of video transmission over lossy channels | |
JP2005531258A (ja) | スケーラブルで頑強なビデオ圧縮 | |
CN101895753B (zh) | 基于网络拥塞程度的视频传输方法、系统及装置 | |
Al-Regib et al. | An unequal error protection method for packet loss resilient 3D mesh transmission | |
Tan et al. | Rate-distortion optimization for stereoscopic video streaming with unequal error protection | |
Li et al. | Middleware for streaming 3D progressive meshes over lossy networks | |
Lee et al. | Adaptive UEP and packet size assignment for scalable video transmission over burst-error channels | |
AlRegib et al. | An unequal error protection method for progressively transmitted 3D models | |
Mayer-Patel et al. | An MPEG performance model and its application to adaptive forward error correction | |
Chen et al. | Fine-grained rate shaping for video streaming over wireless networks | |
WO2004019619A2 (en) | Method of streaming a 3-d wireframe animation | |
Varakliotis et al. | Optimally smooth error resilient streaming of 3d wireframe animations | |
CN113038126A (zh) | 基于帧预测神经网络的多描述视频编码方法和解码方法 | |
Cernea et al. | Scalable joint source and channel coding of meshes | |
Pereira et al. | Multiple description coding for internet video streaming | |
Gadgil et al. | Multiple description coding | |
Chen et al. | Distortion metric for robust 3D point cloud transmission | |
Bajic | Robust coding and packetization of images and intraframe-coded video | |
Cernea et al. | Unequal error protection of the reference grid for robust transmission of MeshGrid-represented objects over error-prone channels | |
Norkin et al. | Low-complexity multiple description coding of video based on 3D block transforms | |
Tian | Streaming three-dimensional graphics with optimized transmission and rendering scalability | |
Fu et al. | A joint source and channel coding algorithm for error-resilient SPIHT-coded video bitstreams | |
Al-Regib | Delay-constrained three-dimensional graphics streaming over lossy networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20050107 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PT RO SE SI SK TR |
|
RBV | Designated contracting states (corrected) |
Designated state(s): DE FI FR GB NL |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1076964 Country of ref document: HK |
|
17Q | First examination report despatched |
Effective date: 20101220 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 20150527 |
|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: WD Ref document number: 1076964 Country of ref document: HK |