EP2422517A1 - Procédé et dispositif de modification d'un flux de données codé - Google Patents

Procédé et dispositif de modification d'un flux de données codé

Info

Publication number
EP2422517A1
EP2422517A1 (application EP10726419A)
Authority
EP
European Patent Office
Prior art keywords
coded
data packet
data
data stream
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP10726419A
Other languages
German (de)
English (en)
Inventor
Peter Amon
Norbert Oertel
Bernhard Agthe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unify GmbH and Co KG
Original Assignee
Siemens Enterprise Communications GmbH and Co KG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Enterprise Communications GmbH and Co KG filed Critical Siemens Enterprise Communications GmbH and Co KG
Publication of EP2422517A1 publication Critical patent/EP2422517A1/fr
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/587Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23406Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving management of server-side video buffer
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/242Synchronization processes, e.g. processing of PCR [Program Clock References]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems
    • H04N7/152Multipoint control units therefor

Definitions

  • the invention relates to a method for modifying a coded data stream from data packets, of which each data packet comprises information in which the information of successive data packets has time intervals from one another that deviate from desired time intervals.
  • the invention further relates to a device for modifying such a coded data stream from data packets.
  • the time intervals of the information contained in successive data packets differ from the desired time intervals between the information of these data packets.
  • the jitter of the data stream can lead to a flickering and jerky reproduction of the information contained in the data packets at the receiver.
  • a delay equal to the largest delay existing in the data stream between two consecutive data packets is introduced into the encoded data stream, which delays the data stream by the value of this largest delay between two consecutive coded data packets.
  • the object of the invention is to provide a method and a device for modifying a coded data stream from data packets which avoid the disadvantages of the prior art.
  • each data packet comprising information
  • the information of successive data packets has time intervals from one another that deviate from the desired time intervals, and the time intervals are adapted to the desired time intervals by inserting an artificially coded first data packet after a second data packet into the coded data stream in the coded domain or by removing a fourth data packet present in the coded data stream from the coded data stream in the coded domain.
  • the modification of the coded data stream from data packets can cause a compensation of jitter of this data stream.
  • the modification of the coded data stream may alternatively or additionally cause the desired time intervals between data packets of a coded data stream to correspond to the time intervals of data packets of another data stream.
  • the adaptation of the desired time intervals between data packets of a coded data stream to the time intervals of data packets of a further data stream is particularly important in video and audio conferencing technology.
  • the artificially coded first data packet, comprising a first information which references a second information comprised by the second data packet, is generated in the coded domain, and the artificially coded first data packet is inserted into the coded data stream after the second data packet at the desired time interval from the second data packet if a third data packet following the second data packet is only available at a time interval from the second data packet that is greater than the desired time interval.
  • a compensation of jitter of the data stream can be effected.
  • the fourth data packet present in the coded data stream is removed from the coded data stream in the coded domain if a fifth data packet following the fourth data packet is available at the desired time interval from the third data packet preceding the fourth data packet. In this way, compensation for jitter of the data stream can likewise be brought about by the modification of the coded data stream in the coded domain.
  • the less extensive intervention in the data stream, compared to the method by means of a transcoder, leads to a lower delay of the data stream, since not all encoded data packets are completely decoded and completely re-encoded.
  • whereas the quality of the coded data stream suffers from the decoding and re-coding of each coded data packet of the data stream, a modification of the coded data stream in the coded domain for the compensation of jitter leaves the information, also called "payload", of the non-artificially coded data packets unaltered, so that a high quality of the data content of the data packets to be transmitted is guaranteed at low effort.
  • at most the packet header of a non-artificially coded data packet is changed in the method according to the invention.
  • the data packets are temporarily stored in a jitter buffer prior to the insertion of the artificially coded first data packet into the coded data stream or the removal of the fourth data packet from the coded data stream, and the insertion or removal takes place such that the number of data packets buffered in the jitter buffer can be set.
  • This can avoid an overflow of the jitter buffer, which leads to an increased overall delay of the data stream, as well as an underflow of the jitter buffer, which has the consequence that no data packet is available for insertion or removal.
  • the third data packet and the fourth data packet can be buffered in the coded domain in the jitter buffer such that only the third data packet is inserted into the data stream of coded data packets at the desired time interval after the artificially coded first data packet. When several coded data packets are present in the jitter buffer, only the most recently buffered data packet can be left in the data stream in order to reduce the delay of the data stream. An earlier buffered data packet is skipped, the desired time interval to the most recently buffered data packet being ensured by leaving the data packet following the skipped data packet in the data stream.
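The buffer-fill regulation described above can be expressed as a minimal sketch. This is an illustrative assumption, not code from the patent: the class name `CodedDomainJitterBuffer` and the callbacks `make_skip_packet` and `is_droppable` are hypothetical, and the watermark policy is one possible way to keep the number of buffered packets settable.

```python
from collections import deque

class CodedDomainJitterBuffer:
    """Sketch (assumption, not from the patent): regulate the number of
    buffered coded packets between a low and a high watermark by inserting
    an artificial skip packet (underflow) or removing a droppable packet
    (overflow), without decoding any payload."""

    def __init__(self, low=2, high=6):
        self.low, self.high = low, high
        self.buf = deque()

    def push(self, packet):
        self.buf.append(packet)

    def pop(self, make_skip_packet, is_droppable):
        # Underflow guard: emit an artificial packet that merely repeats
        # the last buffered packet's content instead of stalling playback.
        if self.buf and len(self.buf) < self.low:
            return make_skip_packet(self.buf[-1])
        # Overflow guard: remove one droppable (e.g. non-reference) packet
        # so the queueing delay does not grow.
        if len(self.buf) > self.high:
            for i, p in enumerate(self.buf):
                if is_droppable(p):
                    del self.buf[i]
                    break
        return self.buf.popleft() if self.buf else None
```

Both interventions happen entirely in the coded domain: the payload of non-artificial packets is never touched, matching the low-effort property claimed above.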
  • the coded data stream is advantageously a video data stream, wherein each data packet of the coded data stream comprises a part of a video frame and the artificially coded first data packet comprises a part of a video frame which comprises the information of the video frame of the second data packet without motion vectors and/or transformation coefficients.
  • it is also advantageous according to this embodiment of the invention if each data packet of the coded data stream comprises a complete video frame and the artificially coded first data packet comprises a video frame comprising the information of the video frame of the second data packet without motion vectors and/or transformation coefficients. Due to the absence of motion vectors and/or transformation coefficients in the information of the video frame of the artificially coded first data packet, this data packet has a reduced memory requirement compared to the video frame of the second data packet.
  • each data packet of the video data stream comprises a portion of a video frame encoded according to one of the video coding standards H.264/AVC, H.263 or MPEG-4 Visual, the artificially coded first data packet comprising a portion of a video frame composed of skipped macroblocks.
  • each data packet of the video data stream comprises a video frame encoded according to one of the video encoding standards H.264 / AVC, H.263 or MPEG-4 Visual
  • the artificially encoded first data packet comprising a video frame composed of skipped macroblocks.
  • the skipped macroblocks can be generated in the video encoding standard H.264 / AVC by setting the macroblock mode to "skip."
  • the reference of these skipped macroblocks is the first video frame of the reference picture list or another already coded video frame. Transformation coefficients are not transferred from the first frame of the reference picture list to the skipped macroblocks, thereby reducing the required memory size of the artificially coded frame compared to the required memory size of the first video frame of the reference picture list.
  • the decoder loop is preferably disabled during the creation of the artificially coded first data packet, in order to ensure a data packet with a data content of high quality compared to the quality of the data content of the second data packet.
  • the artificially coded first data packet (P2') advantageously comprises a part of a video frame which is inserted as a non-reference frame or as a reference frame.
  • This embodiment includes the case in which the artificially coded first data packet comprises a video frame which is inserted as a non-reference frame. Even if the inserted artificially coded frame is not a high-quality copy of the original frame, the video frame following the artificially coded frame does not deteriorate, because the prediction of this frame does not change with the insertion of the artificially coded frame. In addition, the prediction structure does not need to be changed when inserting a non-reference frame.
  • the coded first data packet comprises a video frame which is inserted as a reference frame.
  • the prediction structure of that stream can be adapted to an existing further prediction structure of a second video stream to be mixed with the first video stream.
  • the fourth data packet removed from the video data stream comprises a part of a non-referenced single picture frame.
  • This embodiment comprises the case where the fourth data packet removed from the video data stream comprises a non-referenced frame.
  • the prediction structure of the data packets remaining in the data stream is not changed.
  • if a data packet comprises only a part of the last video frame of a group of pictures, it is advantageous that all data packets which comprise only a part of the last video frame of that group of pictures are removed from the video data stream.
  • it is advantageous to remove a data packet with a part of a video frame from the lowest temporal level from the video data stream; this includes the case in which a data packet comprising a complete video frame from the lowest temporal level is removed from the video stream.
  • if a data packet comprises only a part of a video frame from the lowest temporal level, it is advantageous that all data packets which comprise only a part of a video frame from the lowest temporal level are removed from the video data stream.
  • an adaptive jitter buffer is realized in a further advantageous embodiment of the invention in which the insertion and removal of data packets takes place dynamically.
  • the desired time intervals of a second video data stream are made to correspond to the time intervals between consecutive coded frames of a first video data stream, in that a first video frame of the artificially coded first data packet of the second video data stream is mixed with a first video frame of a coded data packet of the first video data stream into one video frame.
  • the second video data stream has a lower sampling frequency than the first video data stream.
  • the artificially coded first video frame inserted into the second video data stream compensates for its lower sampling frequency with respect to the first video data stream, so that the second video data stream with the inserted artificially coded first video frame has the sampling frequency of the first video data stream.
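As a hedged sketch of this resampling step (the function name, the callback `make_skip` and the integer-ratio restriction are illustrative assumptions, not from the patent), artificial skip frames repeating the preceding frame can be interleaved until the slower stream matches the faster stream's sampling frequency:

```python
def upsample_with_skip_frames(frames, src_fps, dst_fps, make_skip):
    # Sketch: raise a video stream's frame rate from src_fps to dst_fps by
    # inserting artificially coded skip frames that repeat the preceding
    # frame, so the stream can be mixed frame-by-frame with a faster one.
    assert dst_fps % src_fps == 0, "sketch assumes an integer rate ratio"
    ratio = dst_fps // src_fps
    out = []
    for f in frames:
        out.append(f)                                        # original frame
        out.extend(make_skip(f) for _ in range(ratio - 1))   # repeats
    return out
```

For example, a 15 fps stream mixed with a 30 fps stream would receive one skip frame after each original frame.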
  • the invention further relates to a device, in particular for carrying out the method of one of claims 1 to 13, for modifying a coded data stream of data packets, each of which comprises information, wherein the information of successive data packets has time intervals from one another that deviate from desired time intervals, the device comprising means for inserting an artificially coded first data packet after a second data packet into the coded data stream in the coded domain and/or means for removing a fourth data packet (P4) present in the coded data stream from the coded data stream in the coded domain.
  • the device is preceded by a jitter buffer in which the coded data packets can be buffered, wherein the means for inserting the artificially coded first data packet into the coded data stream and/or for removing the fourth data packet from the coded data stream are designed such that the number of data packets (P1, P2, P3, P4, P5) buffered in the jitter buffer (JB) can be set.
  • FIG. 3 shows time profiles of coded data packets on a transmission path from a transmitter to a receiver before entering a network, after exiting the network and after exiting a transcoder which comprises a decoder, an image memory and an encoder,
  • Fig. 4 shows a transmission path from a transmitter to a receiver in which, after the coded data packets exit a network and before they enter the transcoder shown in Fig. 3, a jitter buffer is arranged in the transmission path, and
  • FIG. 5 shows time profiles of coded data packets on a transmission path from a transmitter to a receiver before entering a network, after leaving the network, and after emerging from a device for compensation of jitter in the coded domain in a first embodiment of the invention
  • FIG. 6 shows a transmission path of coded data packets from a transmitter to a receiver, the coded data packets, after exiting the network and before entering the device for compensating for jitter in the coded domain, passing through a jitter buffer which lies in the transmission path between the network and the device for compensation of jitter is arranged in the coded domain
  • FIG. 7 shows a comparison of the time profiles of the coded data packets before entering the network and after emerging from the jitter buffer shown in FIG. 2, after emerging from the transcoder shown in FIG. 3 and FIG. 4, and after emerging from the device for compensation of jitter in the coded domain shown in FIG. 5 and FIG. 6,
  • FIG. 2 shows a transmission path of coded data packets from a transmitter to a receiver in which a jitter buffer, which the data packets pass through after exiting the network and before entering the receiver, is arranged in the transmission path,
  • FIG. 8 shows schematic sequences of precalculated images before insertion, after insertion as reference image and after insertion as non-reference image in a linear prediction structure
  • FIG. 9 shows schematic sequences of precalculated images before insertion and after insertion in a hierarchical prediction structure
  • FIG. 10 shows schematic sequences of precalculated images before removal and after removal from a linear prediction structure
  • FIG. 11 shows schematic sequences of precalculated images before removal and after removal from a hierarchical prediction structure
  • FIG. 12 shows a schematic arrangement of a mixture of coded macroblocks, wherein macroblocks from a first video data stream are mixed with skipped macroblocks from a second video data stream;
  • FIG. 13 shows schematic sequences of precomputed images of a first video data stream and a second video data stream before resampling and mixing and after resampling and mixing.
  • Previously known examples of methods for compensating jitter of a data stream are shown in FIGS. 2, 3 and 4. Embodiments of the invention are described with reference to FIGS. 5 to 13.
  • the occurrence of jitter, which is caused by different transit times of individual data packets when passing through a network, is explained using the example of a transmission path of coded data packets from a transmitter to a receiver.
  • Coded data packets P1, P2, P3, P4, P5 of a coded data stream are transmitted by a transmitting device, hereinafter referred to as transmitter S, after passing through a network N, to a receiving device, hereinafter referred to as receiver R.
  • the data packets P1, P2, P3, P4, P5 may comprise information of different content. Possible information comprised by the data packets is audio information or image or video information; in principle any kind of temporally successive information can be included in the data packets. Video information can be present in particular in the form of individual frames. For the sake of simplicity, it is assumed in the following that the information of successive data packets P1, P2, P3, P4, P5 has reference time intervals from one another which correspond to the desired time intervals d1 of successive data packets P1, P2, P3, P4, P5. This assumption holds, for example, if each data packet comprises one video frame.
  • Due to an increased delay when passing through the network N, the data packet P3 has a position relative to the data packets P1 and P2 which is shifted by the time interval d3 from its intended position t3 at the intended time interval, also referred to as the nominal distance.
  • the data packet P4 is shifted by the time interval d4 from its intended position t4, which is spaced apart from the position of the data packet P2 by two provided time intervals d1.
  • thus the data packet P3 has a time delay d3 relative to its intended time position, and the data packet P4 a delay d4 relative to its intended time position.
  • the data packet P5 is at the intended time interval d1 from the intended time position t4 of the data packet P4.
  • the intended time interval d1 between the data packet P4 and the data packet P5 is reduced by the delay d4 of the data packet P4.
  • the data packets P3 and P4 have lost their relative temporal coupling with respect to the preceding data packets P1 and P2 and with respect to the subsequent data packet P5 during the transmission through the network N. If the data packets P1, P2, P3, P4, P5 represent video data packets, decoding and playback without corresponding measures for compensating the different transit times of the data packets P1, P2, P3, P4, P5 leads to flickering and jerky playback of the video whose information the video packets contain.
  • for ease of explanation of the jitter, the example illustrated in FIG. 1 assumes that the time interval between successive data packets P1, P2, P3, P4, P5 is constant and that the packet length of each data packet is constant. However, it is not necessary that the data packets have an equal time interval or a constant packet length. Instead, the data packets may have arbitrary time intervals from one another and arbitrary packet lengths, provided that the differing time intervals of the data packets relative to one another and the differing packet lengths of the data packets can be determined.
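The per-packet delays d3 and d4 of FIG. 1 can be computed from arrival times and the nominal schedule. The following is a minimal sketch (function and parameter names are assumptions for illustration):

```python
def delay_jitter(arrival_times, t0, nominal_interval):
    # Delay of each packet relative to its nominal position t0 + k * d1.
    # A positive value corresponds to the delays d3, d4 shown in FIG. 1.
    return [t - (t0 + k * nominal_interval)
            for k, t in enumerate(arrival_times)]
```

With a nominal interval d1 = 10, arrival times [0, 10, 23, 35, 40] yield delays [0, 0, 3, 5, 0]: packets P3 and P4 are late, P5 arrives on schedule.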
  • in the method shown in FIG. 2, a jitter buffer JB is arranged between the transmitter S and the receiver R; the data packets P1, P2, P3, P4, P5 pass through the jitter buffer JB after exiting the network N and before entering the receiver R. After exiting the network N and before the data packets P1, P2, P3, P4, P5 enter the jitter buffer JB, the data packets P3 and P4 each have a time delay d3, d4 relative to their intended time positions t3, t4, as shown in Fig. 1.
  • the data packets P1, P2, P3, P4, P5 entering the jitter buffer JB are stored for a specific duration, also referred to as buffer delay, in order subsequently to send the data packets P1, P2, P3, P4, P5 to the receiver R in the intended time sequence with the time interval d1 between successive data packets. In this way, the intended time sequence with the intended time interval d1 between successive data packets is produced.
  • a disadvantage of this method shown in FIG. 2 lies in the delay of the data packets associated with the jitter buffer JB. If different data streams, each comprising data packets P1, P2, P3, P4, P5 and each assigned a jitter buffer JB, are to be mixed, the data packets which have passed through the jitter buffers JB are delayed by a value corresponding to the delay of the data stream with the highest delay jitter. Therefore, the delay of all the data packets to be mixed is increased to the value of the delay of the data stream with the highest delay. The consequence is a delay of the data packets of all other incoming data streams to the delay associated with the data stream with the highest delay.
  • the high degree of delay of the data packets due to the video data stream with the highest delay time inhibits communication between the participants in the audio or video conference.
  • A further known prior-art solution for the correction of different delays of different data packets when passing through a network N is shown in FIG. 3.
  • the data packets P3 and P4 each have a time delay d3, d4 at their intended time positions t3, t4 after passing through the network N.
  • the data packets P1, P2, P3, P4, P5 pass through a transcoder TR comprising a decoder DE, an image memory PB and an encoder EN.
  • each of the data packets P1, P2, P3, P4, P5 comprises one video frame; the information required for playing the video data stream is comprised by these video frames.
  • the video data entering the transcoder TR are decoded in the decoder DE, subsequently buffered in the picture memory PB and subsequently encoded in the encoder EN.
  • these video frames are present in the form of decoded and subsequently re-encoded frames in the data packets P1*, P2*, P3*, P4*, P5*.
  • the decoding in the decoder DE, the intermediate storage in the image memory PB and the encoding in the encoder EN are indicated by an asterisk * in the data packets shown in FIG. 3.
  • the content of the data packet P2 is coded a second time, as data packet P2**, because, in contrast to the data packet P3, the data packet P2 is stored in the image memory PB at the time of the coding required to comply with the intended time interval d1 for the data packet following the data packet P2.
  • the decoded version of the data packet P4 is likewise not available in time for coding. Therefore, the data content of the data packet P3, which is present in the image memory PB after the data packet P2** leaves the encoder EN, is coded in the form of the data packet P3*.
  • the data packet P5 * is encoded after the decoded data packet P5 is available in time for encoding.
  • the data content of the data packet P4 is therefore not re-encoded and not forwarded to the receiver R.
  • after leaving the transcoder TR, the data packets P1*, P2*, P2**, P3*, P5* have the intended time interval d1 from one another, the data content of the data packet P2 being present twice and the data content of the data packet P4 not being present over time t(e).
  • the method by means of the transcoder TR shown in Figure 3 is used in multipoint control units (MCU), in which a plurality of input video data streams are to be mixed.
  • the mixing of the input video data streams is accomplished by decoding all input video data streams, mixing these video input data streams in the uncompressed pixel domain, and encoding the rearranged video frames.
  • in addition, the decoded video frames of the input video data streams can be scaled.
  • if, due to delay jitter, a first video stream does not contain a current video frame in time for merging with the video frame of a second video stream, the video frame of the first video stream preceding that missing current video frame, which is already stored in the frame memory PB after decoding, is used for merging and coding. If, due to multiple encoding of a video frame, two or more frames of an input video data stream are present in decoded form in the transcoder TR before another encoded video frame exits the encoder EN, the last frame decoded in the decoder DE and stored in the frame memory PB is used for the encoding, while the other decoded frame (or the other decoded frames) is (are) discarded.
  • Transcoding, with its complete decoding and encoding of all the data packets of the video data stream or streams, is costly with respect to the required computational power.
  • In order to carry out the method by means of the transcoder TR, expensive hardware dedicated to the decoding and coding is therefore required in most cases.
  • the quality of the transmitted data of the video data stream is reduced by the transcoding, because only qualitatively reduced data content and not the original data content of the data packets with the individual images of the video data stream can be used for the encoding.
  • Fig. 4 shows a transmission path from the transmitter S to the receiver R of another prior art example for the compensation of jitter, in which after the exit of the coded data packets from the network N at the location b before entering the transcoder TR shown in Fig. 3, a jitter buffer JB is arranged in the transmission path of the data packets.
  • a queuing delay in the jitter buffer JB can be controlled by the use of the transcoder TR.
  • the queue delay is controlled by skipping the encoding of individual images in the image memory PB or by multiple encoding of identical individual images in the encoder EN. By skipping the encoding of frames in the frame buffer PB, the queuing delay is reduced. By coding the same images several times, the queuing delay in the jitter buffer JB is increased.
  • the queue delay is the average duration that a data packet remains in the jitter buffer JB before it exits the jitter buffer JB.
  • the queue delay is approximately the duration required to play the data packets stored in the jitter buffer JB.
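Under the approximation stated above, the queue delay can be estimated from the buffer fill level. A minimal sketch, assuming equal play-out durations per packet (the function name is an assumption for illustration):

```python
def queue_delay(num_buffered_packets, packet_duration):
    # Approximation from the text: the queue delay is roughly the time
    # needed to play out the packets currently stored in the jitter buffer.
    return num_buffered_packets * packet_duration
```

Skipping the encoding of one stored frame thus reduces the estimate by one packet duration, while encoding the same frame twice increases it by one.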
  • FIG. 5 shows time profiles t(a), t(b), t(f) of coded data packets P1, P2, P3, P4, P5 on a transmission path from the transmitter S to the receiver R according to a first embodiment of the invention.
  • the data packets After exiting the network N and before entering the receiver R, the data packets pass through a device for compensating jitter in the coded domain, the coded domain of the data packets also being called a compressed domain.
  • the data packets are freed from delay jitter by the device for compensating jitter in the compressed domain (CDD), i.e. the coded data stream and its coded data packets are processed in the compressed domain.
  • CDD compressed domain
  • to this end, an artificial coded video frame is formed, or one or more artificially coded data packets are inserted into the encoded video data stream.
  • the artificial coded video frame references the frame preceding that frame and repeats the data content of that previous frame.
  • the artificial coded video frame can be generated by composing so-called skipped macroblocks.
  • the skipped macroblocks are generated in the video encoding standard H.264 / AVC by setting the macroblock mode to "skip."
  • the reference of these skipped macroblocks is the first video frame of the reference picture list or another already encoded video frame.
  • in H.264/AVC, the motion vectors for skipped macroblocks are derived from adjacent macroblocks that have been coded before the skipped macroblocks. Since all the macroblocks in the artificial video frame have the "skip" mode, the motion vectors in this derivation become zero, so the memory size of the artificial coded frame is small compared to the memory size of the first video frame of the reference picture list.
  • Skipped macroblocks in addition to the video encoding standard H.264 / AVC, are also defined in video encoding standards H.263 and MPEG-4 Visual. However, the skipped macroblocks of video encoding standards H.263 and MPEG-4 Visual are not motion compensated.
  • a deblocking filter in the decoder loop should be turned off to produce a perfect copy of the previous frame. This can be implicitly determined by the algorithm or explicitly signaled.
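The effect of an artificial frame built entirely from skipped macroblocks can be illustrated with a short Python sketch. It models the decoded result (zero motion vectors, no residual, deblocking off), not the actual H.264/AVC bitstream syntax; all function and field names are illustrative assumptions:

```python
# Sketch (assumption): a skip-only frame carries no residual data and zero
# motion vectors, so decoding it reproduces the reference frame exactly.

def make_skip_frame(num_macroblocks):
    # Each macroblock only carries the mode "skip" and an implicit zero MV.
    return [{"mode": "skip", "mv": (0, 0)} for _ in range(num_macroblocks)]

def decode(frame, reference):
    # With zero motion vectors and no residual (and the deblocking filter
    # turned off), each macroblock copies its co-located reference block.
    assert all(mb["mode"] == "skip" for mb in frame)
    return list(reference)

reference = ["mb%d" % i for i in range(6)]   # decoded previous frame
artificial = make_skip_frame(len(reference))
print(decode(artificial, reference) == reference)  # True
```

This is why the text insists on disabling the deblocking filter: with it enabled, the "copy" would be filtered and no longer a bit-exact repetition of the previous frame.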
  • the device for compensating for jitter in the coded plane CDD can also process audio data streams, picture data streams or another type of temporally successive information.
  • In the exemplary embodiment illustrated in FIG. 5, the data packets P1 and P2 are available in good time and are forwarded to the receiver R without any further modification when passing through the device for compensating for jitter in the coded domain CDD.
  • the data packet P3 arrives in the device for compensation of jitter in the coded plane CDD delayed by the time delay d3 relative to its intended time t3. Therefore, an artificial data packet P2' is encoded as a copy of the data packet P2 using skipped macroblocks and inserted into the data stream.
  • the artificial coded data packet P2 ' should not be inserted in the reference picture list in order to avoid existing reference pictures being removed from the reference picture list.
  • the data packet P3 is forwarded to the receiver R without changing the data packet P3 by the device for compensating for jitter in the coded plane CDD.
  • modifications of the video coding elements of an artificially coded data packet of the video data stream such as coded transform coefficients, mode information and motion vectors, are not required.
  • Only high-level syntax is modified, such as the image sequence number of the data packet.
  • the Real-Time Transport Protocol (RTP) uses sequence numbers to detect loss of data packets. These sequence numbers should therefore be rewritten when data packets are inserted or removed.
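The sequence-number rewriting after an insertion or removal can be sketched as follows. This is an illustrative assumption about the bookkeeping, not the patent's implementation; only the 16-bit wraparound is taken from RTP itself:

```python
# Sketch (assumption): reassign consecutive 16-bit RTP sequence numbers in
# stream order after a packet was inserted or removed, so the receiver
# does not misdetect packet loss.

def rewrite_sequence_numbers(packets):
    if not packets:
        return packets
    base = packets[0]["seq"]
    for i, pkt in enumerate(packets):
        pkt["seq"] = (base + i) & 0xFFFF  # RTP sequence numbers wrap at 2^16
    return packets

# P2' was inserted as a copy of P2; its duplicated sequence number and all
# following ones are fixed up.
stream = [{"name": "P1", "seq": 100}, {"name": "P2", "seq": 101},
          {"name": "P2'", "seq": 101}, {"name": "P3", "seq": 102}]
rewrite_sequence_numbers(stream)
print([p["seq"] for p in stream])  # [100, 101, 102, 103]
```

Because the timestamps and sequence numbers of all later packets change, the device effectively acts as an RTP mixer, as the next point notes.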
  • the device for compensating for jitter in the coded domain CDD is used as an RTP mixer and in this case terminates the RTP session of the transmitter S.
  • if the data packet P5 depends on the data content of the data packet P4, the data packet P4 is not dropped from the data stream in the time course t(f) at the location f after the exit of the data packets from the device for compensating for jitter in the coded domain CDD, as shown in Fig. 5, but is forwarded, since otherwise the data packet P5 could not be decoded in the receiver R.
  • the data packet P5 is forwarded by the device for compensation of jitter in the coded domain CDD in order to reduce the transmission delay of the data stream. Because the data packet P4 is not forwarded, it is skipped, as a comparison of the time profiles before entry into the device (at location b) and after exit from the device (at location f) shows. By skipping the first non-referenced frame, which corresponds to the data packet P4 in the method shown in FIG. 5, the overall delay of the data stream can be reduced.
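The two complementary actions described above (insert an artificial copy when a packet is late; skip an early, non-referenced packet) can be sketched as a small decision function. Field names, thresholds and the return convention are illustrative assumptions, not the patent's specification:

```python
# Sketch (assumption): per-deadline decision logic of a jitter-compensation
# device in the coded domain (CDD).

def cdd_step(next_packet, last_packet, lateness_ms, d1_ms):
    if next_packet is None or lateness_ms >= d1_ms:
        # Packet missing at its playout deadline: insert an artificial
        # coded copy of the previous packet (skipped macroblocks).
        return ("insert", last_packet["name"] + "'")
    if lateness_ms <= -d1_ms and not next_packet["referenced"]:
        # Packet arrived a full interval early and nothing references it:
        # remove (skip) it to reduce the overall delay.
        return ("skip", next_packet["name"])
    return ("forward", next_packet["name"])

p4 = {"name": "P4", "referenced": False}
print(cdd_step(None, {"name": "P2"}, 40, 40))  # ('insert', "P2'")
print(cdd_step(p4, {"name": "P3"}, -40, 40))   # ('skip', 'P4')
```

Note the asymmetry the text requires: a packet may only be skipped if no later packet references it, whereas an artificial copy may always be inserted (as a non-reference frame).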
  • the jitter compensation device in the coded domain CDD can be combined with the jitter buffer JB.
  • FIG. 6 shows, in a further embodiment of the invention, a transmission path of the coded data packets from the transmitter S to the receiver R, wherein the coded data packets, after exiting the network N and before entering the device for compensating for jitter in the coded domain CDD, traverse a jitter buffer JB arranged in the transmission path between the network N and the device for compensating for jitter in the coded domain CDD.
  • In this way, the number of insertions of artificial coded data packets and the number of removals, also called skipping, of data packets can be reduced. For example, a network delay, in which all data packets of a data stream are delayed, and/or network jitter may occur.
  • A jitter buffer JB can smooth the playback of the data packets.
  • the level of the jitter buffer JB can be controlled by inserting or removing data packets / frames into the data stream by means of the jitter compensation device in the coded domain CDD.
  • By using a jitter buffer JB, which the data packets pass through before entering the device for compensating for jitter in the coded domain CDD, it is possible that, when a network delay and/or network jitter occurs, only individual data packets/frames are inserted or removed, so that the modification of the data stream is not or hardly noticeable during playback in the receiver R.
  • the jitter buffer JB sends, via a line 2 outside the transmission path of the data packets, information about the network delay currently detected in the jitter buffer JB and/or the network jitter currently identified in the jitter buffer JB to the device for compensation of jitter in the coded domain CDD.
  • By inserting an artificially coded data packet into the data stream and/or removing a data packet from the data stream in the coded domain in the given time interval, the fill level of the jitter buffer JB can be controlled. In this way, an overflow of the jitter buffer JB, which would lead to an increased overall delay of the data stream, and an underflow of the jitter buffer JB, which would result in no data packet/frame being available to the device for compensation of jitter in the coded domain CDD, can be avoided.
  • With the arrangement shown in Fig. 6, the following case can be handled: if the network delay decreases, the fill level in the jitter buffer JB increases, and hence the queuing delay, since more images enter the jitter buffer JB, compared to the time before the network delay decreased, than are requested from the jitter buffer by the device for compensation of jitter in the coded domain CDD to maintain the intended time interval d1 between successive data packets.
  • If the delay in the network N increases:
  • the fill level of the jitter buffer JB should be increased by inserting artificial coded data packets/frames.
  • The network delay and the network jitter can be calculated from statistical information about the data packets entering the jitter buffer, for example their arrival times. The calculation can be done within the jitter buffer JB.
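The fill-level control and the delay statistic just described can be sketched together. The target level, tolerance and function names are illustrative assumptions; the patent only states that insertion/removal decisions are derived from arrival-time statistics:

```python
# Sketch (assumption): deriving the CDD's insert/remove decision from the
# jitter-buffer fill level, plus a simple arrival-time statistic.

def fill_level_action(fill, target, tolerance=1):
    """Decide whether the CDD should insert or remove a packet/frame."""
    if fill < target - tolerance:
        return "insert_artificial_packet"    # steer away from underflow
    if fill > target + tolerance:
        return "remove_nonreference_packet"  # steer away from overflow
    return "no_action"

def estimate_network_delay(arrival_times, send_times):
    # Simple statistic over observed packets: mean one-way delay (ms).
    deltas = [a - s for a, s in zip(arrival_times, send_times)]
    return sum(deltas) / len(deltas)

print(fill_level_action(fill=2, target=5))  # insert_artificial_packet
print(fill_level_action(fill=8, target=5))  # remove_nonreference_packet
print(round(estimate_network_delay([110, 152, 195], [100, 140, 180]), 1))  # 12.3
```

A real controller would smooth the statistic (e.g. an exponentially weighted mean, as RTP receivers do for interarrival jitter) rather than react to single packets.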
  • the arrangement shown in FIG. 5 may be considered a special case of the arrangement shown in FIG. 6, in which the jitter buffer JB of FIG. 6 has negligible queuing delay of the data packets contained in the jitter buffer JB.
  • FIG. 7 shows a comparison of the time profiles of the coded data packets before entry into the network N and after exit from the jitter buffer shown in FIG. 2, after exit from the transcoder TR shown in FIG. 3, and after exit from the device for compensation of jitter in the coded domain CDD shown in Fig. 5 and Fig. 6.
  • the time profile t(a) at location a, after the data packets have left the transmitter S and before they enter the network N, is shifted by an (average) transmission delay dN relative to the time profiles t(b), t(c), t(e) and t(f), where the locations b, c, e and f correspond to the locations b, c, e, f in Figures 1, 2, 3, 4, 5, 6.
  • the data packets P1, P2, P3, P4, P5 can be processed by the receiver R or by a component arranged between the transmitter S and the receiver R.
  • a jitter buffer JB As a component arranged between the transmitter S and the receiver R, a jitter buffer JB, a transcoder TR or the device for compensating for jitter in the coded domain CDD can be used.
  • the jitter buffer JB has to compensate for the entire delay jitter, which is a special case of jitter, occurring during the transmission of the data stream with the coded data packets P1, P2, P3, P4, P5. Therefore, the delay dJ caused by using the jitter buffer JB is relatively high.
  • the transcoder TR and the device for compensating for jitter in the coded domain CDD additionally have means for compensating delays. Due to the possibility of inserting data packets into and/or removing data packets from the data stream, the delays dT caused by the transcoder TR and dC caused by the device for compensation of jitter in the coded domain CDD are lower than the delay dJ caused by the jitter buffer JB.
  • FIG. 8 shows schematic sequences 10, 11, 12 of images before insertion 10, after insertion as reference image 11 and after insertion as non-reference image 12 in the case of a linear prediction structure with respect to time t (see arrow).
  • the word "image” is used for the word "video frame.”
  • an artificially coded image, whose data content corresponds to the data content of the image preceding it, is inserted into the data stream in order to maintain the intended time interval d1 between successive images.
  • FIG. 8 shows for clarity the insertion of a precalculated image in the form of an artificially coded video frame in a linear prediction structure, also called IPPP.
  • the first image of a sequence of images, the intraframe B1, does not refer to the image preceding it.
  • the images B2, B3, B4, B5 following the intraframe B1 refer to the respectively preceding image B1, B2, B3, B4. Therefore, in the linear prediction structure 10, except for the intraframe B1, the preceding image of each predicted image is a reference image.
  • FIG. 8 shows not coded data packets P1, P2, P3, P4, P5, but predicted images B1, B2, B3, B4, B5.
  • Each of the data packets P1, P2, P3, P4, P5 may comprise as information one of the images B1, B2, B3, B4, B5 or a part of one of the images B1, B2, B3, B4, B5.
  • Before inserting the predicted image B2', the image sequence 10 has the predicted images in ascending order starting with the intraframe B1.
  • the inserted precomputed image B2 ' has the data content of its predecessor B2. If the video encoding standard H.264 / AVC is used, the inserted artificial encoded pre-calculated image B2 'can be composed solely of skipped macroblocks as explained above.
  • a predicted image B2' is inserted as a non-reference image into the linear IPPP prediction structure of the images B1, B2, B3, B4, B5 after the predicted image B2. Since the artificial coded video frame B2' is a non-reference frame, the frame B3 following the artificial coded predicted frame B2' does not refer to the artificially coded predicted frame B2' but to the predicted frame B2.
  • the referencing of the precalculated image B3 to the pre-calculated image B2 is represented by an arrow R1.
  • both the artificially encoded newly inserted precomputed image B2 'and the existing precalculated image B3 each reference the precalculated image B2.
  • the insertion of a new image as a non-reference image has advantages: even if the newly inserted artificially coded predicted image B2' is not a high-quality copy of the predicted image B2, the image quality of the image B3 following B2' does not deteriorate, since the temporal predictor of the predicted image B3, i.e. its reference to the image B2, has not been changed by the insertion of the artificially coded predicted image B2'.
  • the prediction structure need not be changed in any way when inserting a non-reference image. This holds even if the prediction structure is more extensive than shown in FIG. 8, for example when using a hierarchical prediction structure.
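The key property — existing reference arrows stay untouched when B2' is inserted as a non-reference frame — can be made concrete with a small Python sketch. The frame/field representation is an illustrative assumption:

```python
# Sketch (assumption): inserting an artificial frame B2' as a NON-reference
# frame into a linear IPPP structure leaves all existing references intact,
# so B3 still predicts from B2.

def insert_non_reference(frames, after, copy_name):
    out = []
    for f in frames:
        out.append(f)
        if f["name"] == after:
            # The artificial frame references its predecessor but is
            # itself never used as a reference by later frames.
            out.append({"name": copy_name, "ref": after, "is_ref": False})
    return out

gop = [{"name": "B1", "ref": None, "is_ref": True},
       {"name": "B2", "ref": "B1", "is_ref": True},
       {"name": "B3", "ref": "B2", "is_ref": True}]
new = insert_non_reference(gop, "B2", "B2'")
print([(f["name"], f["ref"]) for f in new])
# B3 still references B2, not B2' — the prediction structure is unchanged.
```

Inserting B2' as a *reference* frame (sequence 11 in FIG. 8) would additionally require rewriting B3's reference to point to B2', which is exactly the extra step the text describes.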
  • the inclusion of an image composed entirely of skipped macroblocks as a reference frame has advantages in video merging in the compressed domain. For example, by inserting an additional reference frame into a first video data stream, the prediction structure of that stream may be adapted to an existing prediction structure of a second video data stream to be merged with the first video data stream.
  • the image preceding a predicted image is to be declared as the reference image if the image preceding the predicted image is a non-reference image, as is the case for B2' in the schematic sequence 12. Further references may be adapted in order to maintain the original prediction structures.
  • the artificially encoded newly inserted predicted calculated image B2 ' is inserted as a reference image, wherein the pre-computed image B3 referenced prior to insertion onto the precalculated image B2 now references the newly inserted artificially encoded precalculated image B2'.
  • If the predicted image B2 preceding the artificially coded predicted image B2' cannot be declared a reference image, for example because declaring the predicted image B2 a reference image would cause an overflow of a reference picture buffer, the insertion of the artificially coded predicted image is instead performed as a non-reference image. Since the reference picture buffer is then not changed by the insertion of the newly coded predicted image, the prediction is performed as for the preceding predicted image.
  • FIG. 9 shows schematic sequences 21, 22 of predicted images before insertion 21 and after insertion 22 in a hierarchical prediction structure in which the image B2 preceding the image B2' to be inserted is a non-reference image.
  • the predicted image B3 does not refer to the predicted image B2, but to the predicted image B1, as shown by an arrow r1.
  • both the newly inserted artificially coded predicted image B2' and the predicted image B3 following the predicted image B2' refer to the predicted image B1, as represented in the temporal sequence 22 by the references r1 of B2' and r2 of B3.
  • Only high-level data such as the RTP header, the frame sequence number and a time stamp of the predicted image B2' are changed.
  • The signal-processing data content, such as coefficients, motion vectors and mode information, remains unchanged.
  • As the insertion of an artificially coded predicted image shows, in the temporal sequence 12 a hierarchical prediction structure emerges from the linear prediction structure shown in the temporal sequence 10. In a hierarchical prediction structure, in contrast to the temporal sequences 10, 11 shown in FIG. 8, there is a predicted image B3 whose preceding predicted image B2, as shown in the temporal sequence 21, is a non-reference image.
  • the older of the images present in the device for compensating for jitter in the coded plane CDD is skipped in order to reduce the delay of the data stream.
  • This occurs, for example, when the intended time interval d1 between successive data packets P1, P2, P3, P4, P5 is halved when successive data packets enter the device for compensating for jitter in the coded plane CDD.
  • the data packet arriving prematurely in the device for compensating for jitter in the coded plane CDD can be removed from the data stream without having previously inserted an artificially coded data packet into the data stream.
  • the frame, comprised of a coded data packet, is removed from the data stream in the coded domain.
  • FIGS. 10 and 11 show schematic sequences 31, 32 and 41, 42 of predicted images from a linear prediction structure (Fig. 10) and a hierarchical prediction structure (Fig. 11).
  • FIG. 10 shows schematic sequences of precalculated images before removal 31 and removal 32 from a linear prediction structure.
  • As FIG. 10 shows, in the linear IPPP prediction structure only the last frame B4 of a group of pictures (Group Of Pictures: GOP) can be removed from the data stream of images B1, B2, B3, B4, B5, B6, B7.
  • GOP Group Of Pictures
  • a group of pictures GOP is composed of predicted pictures which, with the exception of the first picture of the picture group, each refer to the preceding picture of the picture group.
  • the images B1, B2, B3, B4 and the images B5, B6, B7 each form individual groups of images.
  • the images B2 and B3 remaining in the data stream of the schematic sequence 32 each refer to the images B1, B2 preceding these images B2, B3.
  • In the hierarchical prediction structure, the individual images B4, B6, B8 form the lowest temporal level.
  • the individual images B4, B6, B8 are non-reference images and can be removed from the data stream in a simple manner, since no other images refer to these images B4, B6, B8. Since the number of frames that can be skipped is greater in a hierarchical prediction structure than in a linear prediction structure, as a comparison of the schematic sequences 31 in Figure 10 and 41 in Figure 11 shows for the easily removable non-reference pictures, it is advantageous to generate, with an existing encoder, temporally scalable data streams which have hierarchical prediction structures.
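The rule "a frame is removable if no other frame references it" can be sketched directly. The hierarchical structure below is an illustrative assumption (a single-reference model in which B4, B6, B8 end up on the lowest temporal level), not a reproduction of Fig. 11:

```python
# Sketch (assumption): frames that are never referenced by another frame
# are non-reference frames and can be skipped without breaking prediction.

def removable_frames(frames):
    referenced = {f["ref"] for f in frames if f["ref"] is not None}
    return [f["name"] for f in frames if f["name"] not in referenced]

# Illustrative hierarchical GOP: B4, B6, B8 reference earlier frames but
# are themselves never referenced (lowest temporal level).
hier = [{"name": "B1", "ref": None}, {"name": "B2", "ref": "B1"},
        {"name": "B3", "ref": "B2"}, {"name": "B4", "ref": "B3"},
        {"name": "B5", "ref": "B3"}, {"name": "B6", "ref": "B5"},
        {"name": "B7", "ref": "B5"}, {"name": "B8", "ref": "B7"}]
print(removable_frames(hier))  # ['B4', 'B6', 'B8']
```

In a purely linear IPPP structure the same function would return only the last frame of the GOP, matching the comparison drawn in the text between Figures 10 and 11.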
  • Both the insertion of an artificially coded data packet comprising a single image and the removal of a data packet comprising a single image take place dynamically.
  • the device for compensating for jitter in the coded plane performs the function of an adaptive jitter buffer. Compensation of jitter in the encoded domain, also called compensation of jitter in the compressed domain, can be combined with video merging in the compressed domain, also called video merging in the encoded domain.
  • FIG. 12 shows a schematic arrangement 50 of mixed coded macroblocks M1, M2, wherein macroblocks M1 from a first video data stream 51 are mixed with skipped macroblocks M2 from a second video data stream 52.
  • Original macroblocks M1 of the first video data stream 51 are shown in Fig. 12 in a matrix of 8x10 macroblocks M1, i.e. a matrix with eight rows and ten columns.
  • the matrix of macroblocks M1 can have any number of macroblocks M1 in row and column form.
  • the macroblocks M1 of the first video data stream consist of original macroblocks of the incoming first video data stream.
  • the macroblocks M2 of the incoming second video data stream consist of skipped macroblocks (skip mode), since the original macroblocks of the incoming second video data stream are not available in time for merging with the original macroblocks M1 of the incoming first video data stream. Fig. 12 thus shows original macroblocks M1 next to artificially coded inserted macroblocks M2 for mixing the macroblocks M1 of the first incoming video data stream and the macroblocks M2 of the second incoming video data stream with each other. For a more detailed description of video mixing at macroblock level (for example entropy decoding and re-encoding), reference is made to the methods described in document WO
  • motion vectors for skipped macroblocks are calculated from coded macroblocks adjacent to the skipped macroblocks which are coded in time before the skipped macroblocks. It would therefore be disadvantageous in the schematic arrangement shown in FIG. 12 to use adjacent macroblocks M1 of the first video data stream to calculate the skipped macroblocks M2 of the second video data stream. Therefore, in a mixture of macroblocks of a first video stream and a second video stream, the calculation of the motion vectors for the skipped macroblocks is different from the computation of the skipped macroblock motion vectors in the case of a single unmixed data stream.
  • the motion information for the skipped macroblocks M2 of the second incoming video data stream 52 is therefore explicitly encoded, for example by using the regular P-macroblock mode with a 16x16 partitioning for the macroblocks used to compute the motion vectors, i.e. the largest partitioning in the video coding standard H.264/AVC.
  • the motion vectors are in this case set to zero.
  • Although the data rate for skipped macroblocks encoded in this way is larger than the data rate for skipped macroblocks of a video data stream not to be mixed, it is small in relation to the data rate of non-skipped macroblocks.
  • The insertion of "skip"-mode macroblocks can also be used advantageously to resample video image sequences, for example in order to mix two or more video image sequences having different sampling frequencies. For clarity, FIG. 13 shows schematic sequences 60, 61, 66, 67 of predicted images of a first video data stream and a second video data stream before resampling and mixing (60, 61) and after resampling and mixing (66, 67).
  • The schematic sequence 60 of predicted images of a first video data stream has, for example, a sampling frequency of 30 fps (frames per second), while the schematic sequence 61 of predicted images of a second video data stream has a sampling frequency of 15 fps, the sampling frequency of the second video data stream thus being halved compared to the first video data stream. To equalize the sampling frequencies, artificially coded predicted images B21, B41, B61 are arranged between the predicted images B12, B22, B32, B42 of the second video data stream.
  • artificially encoded video frames B52, B62, B72 are inserted into the second video data stream as shown in schematic sequence 67 in FIG.
  • the mixed video data streams 1 and 2 have a linear prediction structure.
  • the ratio of the sampling frequencies of the first incoming video data stream and the second incoming video data stream is a natural number.
  • The arrangement of FIG. 13 is suitable for mixing a first incoming video data stream and a second incoming video data stream into an outgoing video data stream whose sampling frequency matches the higher sampling frequency of the first incoming video data stream or of the second incoming video data stream, any ratio of the sampling frequencies of the first incoming video data stream and the second incoming video data stream being possible.
  • a frame of the video data stream with the lower sampling frequency that is not available at a given time can be inserted into that video data stream as a frame composed of skipped macroblocks, as already shown in FIG. 13 above.
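The resampling step above — filling the missing time slots of the lower-rate stream with skipped-macroblock copies — can be sketched as follows. The function name and the frame naming with an appended apostrophe are illustrative assumptions:

```python
# Sketch (assumption): upsampling a low-rate stream to the higher sampling
# frequency by inserting, after each frame, skipped-macroblock copies of it.

def upsample_with_skip_frames(frames, factor):
    out = []
    for f in frames:
        out.append(f)
        # Insert (factor - 1) artificial frames, each a skipped-macroblock
        # copy of the preceding frame, to fill the missing time slots.
        out.extend(f + "'" for _ in range(factor - 1))
    return out

low_rate = ["B12", "B22", "B32"]  # 15 fps stream (illustrative frame names)
print(upsample_with_skip_frames(low_rate, 2))
# ['B12', "B12'", 'B22', "B22'", 'B32', "B32'"]
```

For a 15 fps stream mixed with a 30 fps stream, `factor=2` doubles the frame count; any natural-number ratio of the sampling frequencies maps to an integer `factor` in the same way.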
  • a reduced delay of the data stream with the compensated jitter can be achieved compared to a solution with a jitter buffer.
  • A prior-art jitter buffer introduces into the data stream, in order to compensate for jitter, a delay corresponding to the maximum delay occurring between two consecutive coded data packets.
  • the device for compensating for jitter in the coded domain additionally has means for compensating different types of jitter, for example delays, by inserting coded data packets into the data stream and/or removing existing data packets from the data stream.
  • The method using a device for modifying a coded data stream of data packets in the coded domain requires a less extensive intervention in the data stream, since, in contrast to the method using a transcoder (decoding and re-encoding), only skipped macroblocks need to be encoded alongside the other data packets, which are not re-encoded.
  • a lower delay of the data stream occurs.
  • The delay associated with this extensive signal processing is avoided in the method using a device for compensation of jitter in the coded plane, in that not all data packets are decoded and re-encoded; instead, only some artificially coded data packets, for example in the form of skipped macroblocks, are inserted as reference frames or non-reference frames into the otherwise unmodified coded data stream.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a method for modifying a coded data stream composed of data packets (P1, P2, P3, P4, P5), each data packet (P1, P2, P3, P4, P5) comprising an item of information (B1, B2, B3, B4, B5). In the data stream, data packets (P1, P2, P3, P4, P5) whose items of information (B1, B2) succeed one another at time intervals (d1, d3) that deviate from the intended time intervals (d1) are adjusted to the intended time intervals (d1) either by inserting a first artificially coded data packet (P2') into the data stream in the coded domain, temporally after a second data packet (P2), or by removing from the coded data stream, in the coded domain, a fourth data packet (P4) present in the coded data stream.
EP10726419A 2010-05-07 2010-05-07 Procédé et dispositif de modification d'un flux de données codé Withdrawn EP2422517A1 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2010/002833 WO2011137919A1 (fr) 2010-05-07 2010-05-07 Procédé et dispositif de modification d'un flux de données codé

Publications (1)

Publication Number Publication Date
EP2422517A1 true EP2422517A1 (fr) 2012-02-29

Family

ID=43085730

Family Applications (1)

Application Number Title Priority Date Filing Date
EP10726419A Withdrawn EP2422517A1 (fr) 2010-05-07 2010-05-07 Procédé et dispositif de modification d'un flux de données codé

Country Status (6)

Country Link
US (1) US8873634B2 (fr)
EP (1) EP2422517A1 (fr)
CN (1) CN102318356B (fr)
BR (1) BRPI1006913A2 (fr)
TW (1) TW201208378A (fr)
WO (1) WO2011137919A1 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060242277A1 (en) 2005-03-31 2006-10-26 Tripwire, Inc. Automated change approval
JP2012095053A (ja) * 2010-10-26 2012-05-17 Toshiba Corp ストリーム伝送システム、送信装置、受信装置、ストリーム伝送方法及びプログラム
US20140187331A1 (en) * 2012-12-27 2014-07-03 Nvidia Corporation Latency reduction by sub-frame encoding and transmission
US10033658B2 (en) * 2013-06-20 2018-07-24 Samsung Electronics Co., Ltd. Method and apparatus for rate adaptation in motion picture experts group media transport
CN103780908B (zh) * 2014-02-25 2017-02-15 成都佳发安泰科技股份有限公司 一种高效的h264解码方法
US10602388B1 (en) * 2014-09-03 2020-03-24 Plume Design, Inc. Application quality of experience metric
US9548876B2 (en) * 2015-05-06 2017-01-17 Mediatek Inc. Multiple transmitter system and method for controlling impedances of multiple transmitter system
CN105049906A (zh) * 2015-08-07 2015-11-11 虎扑(上海)文化传播股份有限公司 一种数据处理方法及电子设备
US10809928B2 (en) 2017-06-02 2020-10-20 Western Digital Technologies, Inc. Efficient data deduplication leveraging sequential chunks or auxiliary databases
US10503608B2 (en) 2017-07-24 2019-12-10 Western Digital Technologies, Inc. Efficient management of reference blocks used in data deduplication
US11115604B2 (en) * 2018-01-02 2021-09-07 Insitu, Inc. Camera apparatus for generating machine vision data and related methods
CN108769786B (zh) * 2018-05-25 2020-12-29 网宿科技股份有限公司 一种合成音视频数据流的方法和装置
US11159965B2 (en) 2019-11-08 2021-10-26 Plume Design, Inc. Quality of experience measurements for control of Wi-Fi networks
CN113784209B (zh) * 2021-09-03 2023-11-21 上海哔哩哔哩科技有限公司 多媒体数据流处理方法及装置

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5881245A (en) * 1996-09-10 1999-03-09 Digital Video Systems, Inc. Method and apparatus for transmitting MPEG data at an adaptive data rate
US7801132B2 (en) * 1999-11-09 2010-09-21 Synchrodyne Networks, Inc. Interface system and methodology having scheduled connection responsive to common time reference
US7310678B2 (en) * 2000-07-28 2007-12-18 Kasenna, Inc. System, server, and method for variable bit rate multimedia streaming
US7409094B2 (en) * 2001-05-04 2008-08-05 Hewlett-Packard Development Company, L.P. Methods and systems for packetizing encoded data
US7305704B2 (en) * 2002-03-16 2007-12-04 Trustedflow Systems, Inc. Management of trusted flow system
GB2396502B (en) * 2002-12-20 2006-03-15 Tandberg Television Asa Frame synchronisation of compressed video signals
US20050008240A1 (en) 2003-05-02 2005-01-13 Ashish Banerji Stitching of video for continuous presence multipoint video conferencing
US8737219B2 (en) * 2004-01-30 2014-05-27 Hewlett-Packard Development Company, L.P. Methods and systems that use information about data packets to determine an order for sending the data packets
US7594154B2 (en) * 2004-11-17 2009-09-22 Ramakrishna Vedantham Encoding and decoding modules with forward error correction
US9154395B2 (en) 2006-10-05 2015-10-06 Cisco Technology, Inc. Method and system for optimizing a jitter buffer
JP4518111B2 (ja) * 2007-07-13 2010-08-04 ソニー株式会社 映像処理装置、映像処理方法、及びプログラム
US20090154347A1 (en) * 2007-12-12 2009-06-18 Broadcom Corporation Pacing of transport stream to compensate for timestamp jitter
US8094234B2 (en) 2008-10-14 2012-01-10 Texas Instruments Incorporated System and method for multistage frame rate conversion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2011137919A1 *

Also Published As

Publication number Publication date
TW201208378A (en) 2012-02-16
US8873634B2 (en) 2014-10-28
BRPI1006913A2 (pt) 2016-02-16
CN102318356B (zh) 2015-01-28
WO2011137919A9 (fr) 2011-12-22
CN102318356A (zh) 2012-01-11
US20120027093A1 (en) 2012-02-02
WO2011137919A1 (fr) 2011-11-10

Similar Documents

Publication Publication Date Title
EP2422517A1 (fr) Method and device for modifying a coded data stream
DE69535553T2 (de) Video compression
DE60207381T2 (de) Method and system for buffering stream data
DE60106286T2 (de) Time-base reference regeneration for MPEG transport streams
DE60023576T2 (de) Method and device for transcoding moving picture data
DE19635116C2 (de) Method for video communication
EP2198610B1 (fr) Method and device for creating a coded output video stream from at least two coded input video streams, and use of the device
DE69011422T2 (de) Packet structure and transmission of the information generated by a video signal encoder.
DE69435000T2 (de) Picture coding apparatus
EP0682454B1 (fr) Method and device for transcoding video data bit streams
DE69814642T2 (de) Processing of coded video data
DE69835211T2 (de) Switching between compressed video bitstreams
DE10048735A1 (de) Method for coding and decoding image sequences, and devices therefor
DE10392268T5 (de) Stream-based bit-rate transcoder for MPEG-coded video
DE60211790T2 (de) Constant-quality video coding
DE60310249T2 (de) System and method for providing error recovery for streaming FGS-coded video signals over an IP network
EP2425627B1 (fr) Method for the temporal synchronization of the intra-type coding of several sub-images during the generation of a mixed-image video sequence
DE102011107161A1 (de) Method and devices for low-delay switching to, or switching over to, a digital video signal
EP0703711B1 (fr) Segmentation-based video signal encoder
DE60131602T2 (de) Multiplex-dependent video compression
EP2230784A1 (fr) Device and method for transmitting a multitude of information signals in a flexible time-division multiplex
EP2206311B1 (fr) Method and system for bandwidth-optimized transmission of HDTV data streams over an IP-based distribution network
DE102006012449A1 (de) Method for decoding a data stream, and receiver
DE19948601B4 (de) Video data compression system and method for buffering video data
DE102005046382A1 (de) Method, communication arrangement and decentralized communication device for transmitting multimedia data streams

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20111123

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

17Q First examination report despatched

Effective date: 20120911

RIN1 Information on inventor provided before grant (corrected)

Inventor name: AMON, PETER

Inventor name: OERTEL, NORBERT

Inventor name: AGTHE, BERNHARD

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

DAX Request for extension of the european patent (deleted)
18W Application withdrawn

Effective date: 20130723