WO2008082762A1 - Method and system for processing encoded video data - Google Patents

Method and system for processing encoded video data

Info

Publication number
WO2008082762A1
Authority
WO
WIPO (PCT)
Prior art keywords
macroblocks
macroblock
encoded
missing
vector
Prior art date
Application number
PCT/US2007/082822
Other languages
English (en)
Other versions
WO2008082762B1 (fr)
Inventor
Nachiappan Sundaram
Raghavan Subramaniyan
Original Assignee
Motorola, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Motorola, Inc. filed Critical Motorola, Inc.
Publication of WO2008082762A1 publication Critical patent/WO2008082762A1/fr
Publication of WO2008082762B1 publication Critical patent/WO2008082762B1/fr

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H04N19/895Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention generally relates to video coding, and more particularly, to a method and system for processing encoded video data.
  • WAN Wide Area Network
  • WLAN Wireless Local Area Network
  • Video data can be transferred from a video conferencing device located in Boston to another video conferencing device located in Washington through a wireless network. Because of the large quantity of video data, it can be transferred in a compressed format. This is done to reduce the number of bits required to store the video data and, in addition, to reduce the rate at which it is transferred. Such video compression makes the video data compact, with little perceptible loss in quality.
  • DVDs use a video coding standard called Moving Picture Experts Group-2 (MPEG-2) that makes the video data 15 to 30 times smaller in size while still producing a high quality picture.
  • MPEG-2 Moving Picture Experts Group-2
  • This compressed video data is transferred in the form of data packets, which are network-friendly and allow more flexibility in the transfer of the video data.
  • These data packets include several reference frames and several motion compensations. These reference frames and motion compensations can be used to predict several video frames when the data is uncompressed at the destination where the data packets are received. However, when these data packets are transmitted over an error-prone network, some of them are lost or dropped. These lost packets result in visually displeasing video frames when decoded and reconstructed using the remaining data packets.
  • FIG. 1 illustrates a system where various embodiments of the present invention can be practiced
  • FIG. 2 is a flow diagram illustrating a method for processing encoded video data, in accordance with an embodiment of the present invention
  • FIG. 3 illustrates a position of a missing macroblock in an encoded frame, in accordance with an embodiment of the present invention
  • FIG. 4 illustrates a position of a missing macroblock in an encoded frame, in accordance with another embodiment of the present invention
  • FIG. 5 illustrates a position of a missing macroblock in an encoded frame, in accordance with yet another embodiment of the present invention.
  • FIG. 6 illustrates a block diagram of an electronic device, in accordance with an embodiment of the present invention.
  • the terms 'comprises,' 'comprising,' or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, system or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent in such a process, article or apparatus.
  • An element preceded by 'comprises ... a' does not, without more constraints, preclude the existence of additional identical elements in the process, method, article, system or apparatus that comprises the element.
  • the term 'another,' as used in this document, is defined as at least a second or more.
  • the terms 'includes' and/or 'having,' as used herein, are defined as comprising.
  • In one embodiment, a method for processing encoded video data is provided. The encoded video data includes a plurality of encoded frames. Each of the plurality of encoded frames is predicted from a plurality of reference frames. Further, each of the plurality of encoded frames includes a plurality of macroblocks. The method includes determining a position in an encoded frame of the plurality of encoded frames where a macroblock is missing. The position is surrounded by a plurality of neighboring macroblocks. The method also includes selecting one or more macroblocks based on a set of conditions. The set of conditions is arranged in a predefined order. Further, the method includes ranking the selected one or more macroblocks based on a set of predefined criteria.
  • Each macroblock of the selected one or more macroblocks includes a representative motion-vector. Furthermore, the method includes determining a predicted motion-vector based on the representative motion-vector of one or more of the ranked macroblocks. Moreover, the method includes processing the encoded video data based on the predicted motion-vector.
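  • The overall flow summarized above can be pictured as the following minimal sketch; Python is used purely for illustration, and the helper names and frame accessors are assumptions rather than part of the disclosure (each helper is sketched further below).

```python
# Hypothetical orchestration of the concealment steps summarized above; helper
# names and frame/macroblock accessors are placeholders, not part of the disclosure.
def conceal_missing_macroblocks(frame, prev_frame):
    for pos in frame.missing_macroblock_positions():            # determine where a macroblock is missing
        candidates = select_candidates(frame, prev_frame, pos)   # apply the ordered set of conditions
        ranked = rank_candidates(candidates)                     # rank by the predefined criteria
        mv = predict_motion_vector(ranked)                       # e.g. coordinate-wise median of representative MVs
        reconstruct(frame, prev_frame, pos, mv)                  # copy the predicted block from the previous frame
```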
  • In another embodiment, an electronic device includes a receiver configured to receive encoded video data.
  • the encoded video data includes a plurality of encoded frames. Each of the plurality of encoded frames is predicted from a plurality of reference frames.
  • Each of the plurality of encoded frames includes a plurality of macroblocks.
  • The electronic device also includes a processor configured to determine a position in an encoded frame of the plurality of encoded frames where a macroblock is missing, select one or more macroblocks based on a set of conditions arranged in a predefined order, rank the selected one or more macroblocks based on a set of predefined criteria, and determine a predicted motion-vector based on the representative motion-vectors of the ranked one or more macroblocks.
  • FIG. 1 illustrates a system 100, where various embodiments of the present invention can be practiced.
  • the system 100 includes a video source 102 and a video destination 104.
  • Examples of the video source 102 and the video destination 104 include, but are not limited to, a computer, a videophone, a video conferencing device and a Personal Digital Assistant (PDA).
  • PDA Personal Digital Assistant
  • the video source 102 includes an encoder 106.
  • the video destination 104 includes a decoder 108 and a video display 110.
  • the video source 102 can transfer video data to the video destination 104 through a network.
  • the network can be, for example, a Wide Area Network (WAN), a wireless network, a Bluetooth network, a WiMax network, a wired network, a Local Area Network (LAN) and the like.
  • This video data can be large in size. Due to the large size of the video data, more bits are required to store it, and a higher bit-rate is required to transfer it from the video source 102 to the video destination 104.
  • The video source 102 compresses the video data into a compressed format using the encoder 106 prior to transferring it to the video destination 104. Examples of the compressed format of the video data include, but are not limited to, a Moving Picture Experts Group-4 (MPEG-4) format and an H.264 format.
  • MPEG-4 Moving Picture Experts Group-4
  • The video data in the compressed format is in the form of data packets, which can be transferred over the network at a lower bit-rate. Further, these data packets include several reference frames and several motion compensations.
  • The decoder 108 of the video destination 104 derives encoded frames by using the reference frames and the motion compensations. On extracting these encoded frames, the video destination 104 displays them using the video display 110.
  • FIG. 2 is a flow diagram illustrating a method for processing encoded video data, in accordance with an embodiment of the present invention.
  • The encoded video data can include multiple encoded frames. Further, each of the multiple encoded frames can include multiple macroblocks. Each encoded frame of the multiple encoded frames can be predicted from multiple reference frames and corresponding motion compensations received from a network in the form of data packets.
  • data packets for the encoded video data can be received by the video destination 104 from the video source 102 through a wireless network. Examples of the network include, but are not limited to, a WAN, a wireless network, a Bluetooth network, a WiMax network, a wired network, a LAN and the like.
  • At step 202, the method for processing encoded video data is initiated.
  • At step 204, a position in an encoded frame of the multiple encoded frames where a macroblock is missing is determined.
  • The position where a macroblock is missing is surrounded by multiple neighboring macroblocks.
  • Some of the neighboring macroblocks can be used to predict the position of the missing macroblock in a previous encoded frame.
  • At step 206, one or more macroblocks are selected based on a set of conditions.
  • The set of conditions is arranged in a predefined order. Further, the set of conditions is based on the position of the missing macroblock. For example, when the determined position of the missing macroblock does not lie on the encoded frame boundary, the first condition of the set of conditions can be selecting one or more edgewise-adjacent macroblocks of the neighboring macroblocks which have been correctly decoded. The second condition can be selecting a macroblock corresponding to the position of the missing macroblock in a previous encoded frame.
  • The third condition can be selecting one or more edgewise-adjacent concealed macroblocks from the encoded frame.
  • The concealed macroblocks are those macroblocks which have not been correctly decoded and have been reconstructed using, for example, the previous encoded frame.
  • The fourth condition can be selecting one or more macroblocks from the previous encoded frame corresponding to the positions of the one or more edgewise-adjacent missing macroblocks.
  • The fifth condition can be selecting one or more diagonally-adjacent macroblocks of the neighboring macroblocks which have been correctly decoded.
  • The sixth condition can be selecting one or more diagonally-adjacent concealed macroblocks from the encoded frame.
  • The seventh condition can be selecting one or more macroblocks from the previous encoded frame corresponding to the positions of the diagonally-adjacent missing macroblocks. Selection of the one or more macroblocks based on the set of conditions has been explained in detail in conjunction with FIG. 3.
  • When the determined position of the missing macroblock lies on the encoded frame boundary, the first condition of the set of conditions can be selecting one or more edgewise-adjacent macroblocks of the multiple neighboring macroblocks which are also located on the encoded frame boundary.
  • The second condition of the set of conditions can be selecting one or more edgewise-adjacent macroblocks of the multiple neighboring macroblocks which are not located on the encoded frame boundary.
  • The one or more neighboring macroblocks located outside the encoded frame boundary are not considered for the selection. Selection of the one or more macroblocks based on the set of conditions when the missing macroblock lies on the encoded frame boundary has been explained in detail in conjunction with FIG. 4.
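  • A minimal sketch of this condition-ordered selection is given below, assuming simple frame and macroblock accessors (contains, status, mb) and status labels that are hypothetical; it models the seven conditions for the non-boundary case and simply skips neighboring positions that fall outside the frame, as described above.

```python
# Hypothetical sketch of condition-ordered candidate selection for a missing
# macroblock at grid position `pos`. The accessors and status labels are
# assumptions; neighbors outside the frame boundary are never considered.

EDGEWISE = [(0, -1), (1, 0), (0, 1), (-1, 0)]    # above, right, below, left (macroblock units)
DIAGONAL = [(1, -1), (-1, -1), (1, 1), (-1, 1)]  # above-right, above-left, below-right, below-left

def select_candidates(frame, prev_frame, pos):
    x, y = pos
    def neighbors(offsets):
        return [(x + dx, y + dy) for dx, dy in offsets if frame.contains((x + dx, y + dy))]
    cands = []
    cands += [frame.mb(p) for p in neighbors(EDGEWISE) if frame.status(p) == "DECODED"]       # condition 1
    cands += [prev_frame.mb(pos)]                                                             # condition 2
    cands += [frame.mb(p) for p in neighbors(EDGEWISE) if frame.status(p) == "CONCEALED"]     # condition 3
    cands += [prev_frame.mb(p) for p in neighbors(EDGEWISE) if frame.status(p) == "MISSING"]  # condition 4
    cands += [frame.mb(p) for p in neighbors(DIAGONAL) if frame.status(p) == "DECODED"]       # condition 5
    cands += [frame.mb(p) for p in neighbors(DIAGONAL) if frame.status(p) == "CONCEALED"]     # condition 6
    cands += [prev_frame.mb(p) for p in neighbors(DIAGONAL) if frame.status(p) == "MISSING"]  # condition 7
    return cands
```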
  • At step 208, the selected one or more macroblocks are ranked based on a set of pre-defined criteria.
  • the set of pre-defined criteria can include determining an orientation of a selected macroblock of the one or more macroblocks. This includes determining whether the selected macroblock is edgewise adjacent or diagonally adjacent to the position of the missing macroblock. This criterion further includes determining whether the selected macroblock lies on the right side, the left side, below or above the position of the missing macroblock.
  • the set of pre-defined criteria further includes determining whether a selected macroblock of the one or more macroblocks is selected from a previous encoded frame.
  • the set of pre-defined criteria further includes determining whether a selected macroblock of the one or more macroblocks is a concealed macroblock.
  • The concealed macroblock is a macroblock which has not been correctly decoded and has been reconstructed using the previous encoded frame.
  • the concealed macroblock can be an INTER macroblock.
  • First preference is given to, for example, one or more correctly decoded edgewise-adjacent macroblocks.
  • The macroblock lying above the position of the missing macroblock is ranked first, followed by the macroblocks lying to the right of, below, and to the left of that position, in that order.
  • Second preference can be given to, for example, one or more correctly decoded diagonally adjacent macroblocks.
  • The macroblock lying above and to the right of the position of the missing macroblock is ranked first, followed by the macroblocks lying above-left, below-right, and below-left of that position, in that order.
  • Third preference can be given to, for example, a macroblock in a previous encoded frame corresponding to the position of the missing macroblock.
  • Fourth preference can be given to, for example, one or more edgewise-adjacent concealed macroblocks.
  • The concealed macroblock is a macroblock which has not been correctly decoded and has been reconstructed using the previous encoded frame.
  • The macroblock lying above the position of the missing macroblock is ranked first, followed by the macroblocks lying to the right of, below, and to the left of that position, in that order.
  • Fifth preference can be given to, for example, one or more edgewise-adjacent missing macroblocks.
  • The macroblock lying above the position of the missing macroblock is ranked first, followed by the macroblocks lying to the right of, below, and to the left of that position, in that order.
  • Sixth preference can be given to, for example, one or more diagonally adjacent concealed macroblocks.
  • The macroblock lying above and to the right of the position of the missing macroblock is ranked first, followed by the macroblocks lying above-left, below-right, and below-left of that position, in that order.
  • Seventh preference can be given to, for example, one or more diagonally adjacent missing macroblocks.
  • macroblocks without any motion-vectors can be given the least preference while ranking.
  • Macroblocks without any motion-vectors can be, for example, INTRA macroblocks. In case there is more than one INTRA macroblock, the relative ranking of these INTRA macroblocks can be the same as the default ranking described above.
  • INTRA macroblocks may not be considered for ranking.
  • The INTRA macroblocks can be ranked ahead of concealed or INTER macroblocks. Further, for some cases, INTRA macroblocks can be considered to have a zero motion-vector. Ranking of the one or more selected macroblocks based on the set of pre-defined criteria has been explained in detail in conjunction with FIG. 3. It will be apparent to a person ordinarily skilled in the art that the above-stated ranking order of the macroblocks has been described as an example. The ranking order can be different for various embodiments of the present invention.
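  • A sketch of this default ranking is given below, under the assumption that each selected candidate records its source frame, its decoding status, its offset from the missing position, and whether it is an INTRA macroblock; all field names are hypothetical. For the FIG. 3 example described later, this ordering reproduces the ranks listed there.

```python
EDGE_ORDER = [(0, -1), (1, 0), (0, 1), (-1, 0)]    # above, right, below, left
DIAG_ORDER = [(1, -1), (-1, -1), (1, 1), (-1, 1)]  # above-right, above-left, below-right, below-left

def preference(mb):
    """Map a candidate to the 1..7 preference classes described above."""
    if mb.source == "current" and mb.status == "DECODED":
        return 1 if mb.offset in EDGE_ORDER else 2
    if mb.source == "previous" and mb.offset == (0, 0):
        return 3                                           # co-located block in the previous frame
    if mb.source == "current" and mb.status == "CONCEALED":
        return 4 if mb.offset in EDGE_ORDER else 6
    return 5 if mb.offset in EDGE_ORDER else 7             # previous-frame blocks at missing neighbor positions

def within_class_order(mb):
    order = EDGE_ORDER + DIAG_ORDER
    return order.index(mb.offset) if mb.offset in order else 0

def rank_candidates(candidates):
    # INTRA macroblocks (no motion-vectors) drop to the bottom but keep their relative order.
    return sorted(candidates, key=lambda mb: (mb.is_intra, preference(mb), within_class_order(mb)))
```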
  • each macroblock of the one or more ranked macroblocks includes a representative motion-vector.
  • The representative motion-vector of a selected macroblock is determined from one or more motion-vectors associated with that macroblock.
  • the selected macroblock may have one or more motion-vectors representing different regions of the selected macroblock.
  • The motion-vector corresponding to the region closest to the center of the position of the missing macroblock can be selected as the representative motion-vector for the selected macroblock.
  • The motion-vector of the region that appears first in a clockwise scan around the position of the missing macroblock can be selected as the representative motion-vector. Determination of the representative motion-vector from the multiple motion-vectors has been explained in detail in conjunction with FIG. 5.
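  • As a sketch, this selection can be written as below; the region descriptors (a region center, a precomputed clockwise rank, and one motion-vector per region) are assumptions used only for illustration.

```python
def representative_mv(regions, missing_center):
    """regions: iterable of (region_center, clockwise_rank, motion_vector) tuples."""
    cx, cy = missing_center
    def key(region):
        (rx, ry), cw_rank, _ = region
        return ((rx - cx) ** 2 + (ry - cy) ** 2,  # distance to the center of the missing macroblock
                cw_rank)                          # tie-break: first region met in a clockwise scan
    return min(regions, key=key)[2]
```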
  • Each of the representative motion-vectors can include a first-coordinate value and a second-coordinate value.
  • The first and the second coordinate values can be selected from an x-coordinate value and a y-coordinate value.
  • At step 210, a predicted motion-vector is determined based on the representative motion-vectors of the ranked one or more macroblocks.
  • The predicted motion-vector can be determined from the representative motion-vectors of the top four ranked macroblocks.
  • The predicted motion-vector can be determined by calculating a first-coordinate median value and a second-coordinate median value.
  • The first-coordinate median value can be determined by calculating the median of the first-coordinate values of the representative motion-vectors of the one or more ranked macroblocks.
  • The second-coordinate median value can be determined by calculating the median of the second-coordinate values of the representative motion-vectors of the one or more ranked macroblocks.
  • Alternatively, the predicted motion-vector can be determined by calculating a first-coordinate mean value and a second-coordinate mean value.
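  • A sketch of the median-based prediction follows; the choice of the top four ranked candidates matches the example above, and the representative_mv attribute is an assumed field on each ranked candidate.

```python
from statistics import median

def predict_motion_vector(ranked, count=4):
    mvs = [mb.representative_mv for mb in ranked[:count]]
    xs = [mv[0] for mv in mvs]
    ys = [mv[1] for mv in mvs]
    return (median(xs), median(ys))   # using mean(xs), mean(ys) gives the alternative mentioned above
```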
  • At step 212, the encoded video data can be processed based on the predicted motion-vector.
  • the encoded video data can be processed by reconstructing the encoded video data based on the predicted motion-vector.
  • the encoded video data can be reconstructed by placing a macroblock from a previous encoded frame at the position of the missing macroblock.
  • The position of the macroblock from the previous encoded frame can be determined by calculating a first predicted coordinate value and a second predicted coordinate value.
  • The first predicted coordinate value can be calculated by adding the first-coordinate median value to the first-coordinate value of the position of the missing macroblock. Similarly, the second predicted coordinate value can be calculated by adding the second-coordinate median value to the second-coordinate value of the position of the missing macroblock.
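  • A sketch of this reconstruction step is shown below: the predicted motion-vector is added to the coordinates of the missing macroblock to locate the block copied from the previous frame. The copy_block helper and the 16x16 macroblock size are assumptions.

```python
MB_SIZE = 16  # assumed macroblock size in pixels

def reconstruct(frame, prev_frame, pos, predicted_mv):
    mx, my = pos[0] * MB_SIZE, pos[1] * MB_SIZE   # pixel coordinates of the missing macroblock
    dx, dy = predicted_mv
    # first/second predicted coordinate values locate the source block in the previous frame
    copy_block(prev_frame, (mx + dx, my + dy), frame, (mx, my), MB_SIZE)
```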
  • FIG. 3 illustrates a position 302 of a missing macroblock in an encoded frame, in accordance with an embodiment of the present invention.
  • the position 302 can be surrounded by a correctly decoded macroblock 304, a concealed macroblock 306, a correctly decoded macroblock 308, a concealed macroblock 310, a position 312 where a macroblock is missing, a position 314 where a macroblock is missing, a position 316 where a macroblock is missing and a position 318 where a macroblock is missing.
  • one or more macroblocks can be selected based on the set of conditions in the following order.
  • First, the macroblock 304 can be selected, as the macroblock 304 is edgewise adjacent to the position 302 and has been correctly decoded.
  • Second, a macroblock 320 from a previous encoded frame corresponding to the position 302 can be selected.
  • Third, the macroblock 306 can be selected.
  • Fourth, two macroblocks can be selected: a macroblock 322 from the previous encoded frame corresponding to the position 312 and a macroblock 324 from the previous encoded frame corresponding to the position 314.
  • Fifth, the macroblock 308 can be selected, as the macroblock 308 is diagonally adjacent to the position 302 and has been correctly decoded.
  • Sixth, the macroblock 310 can be selected.
  • Seventh, two macroblocks can be selected: a macroblock 326 from the previous encoded frame corresponding to the position 316 and a macroblock 328 from the previous encoded frame corresponding to the position 318.
  • First rank can be assigned to the macroblock 304 as it is correctly decoded and is edgewise adjacent.
  • Second rank can be assigned to the macroblock 308.
  • Third rank can be assigned to the macroblock 320 in the previous encoded frame corresponding to the position 302.
  • Fourth rank can be assigned to the macroblock 306.
  • Fifth rank can be assigned to the macroblock 322 in the previous encoded frame corresponding to the position 312.
  • Sixth rank can be assigned to the macroblock 324 in the previous encoded frame corresponding to the position 314.
  • Seventh rank can be assigned to the macroblock 310.
  • Eighth rank can be assigned to the macroblock 326 in the previous encoded frame corresponding to the position 316.
  • Ninth rank can be assigned to the macroblock 328 in the previous encoded frame corresponding to the position 318.
  • The representative motion-vectors of one or more top-ranked macroblocks are selected to determine the predicted motion-vector for the lost macroblock.
  • The representative motion-vectors of, for example, the top three ranked macroblocks can be considered to determine the predicted motion-vector.
  • The representative motion-vectors of the macroblock 304, the macroblock 308 and the macroblock 320 in the previous encoded frame corresponding to the position 302 can be considered for determining the predicted motion-vector.
  • FIG. 4 illustrates a position 402 of a missing macroblock in an encoded frame, in accordance with another embodiment of the present invention.
  • the position 402 can lie on, for example, the top frame boundary of the encoded frame.
  • the position 402 can be surrounded by a macroblock 404, a macroblock 406, a macroblock 408, a macroblock 410, a macroblock 412, a macroblock 414, a macroblock 416 and a macroblock 418.
  • one or more macroblocks can be selected based on the set of conditions in the following order. First, the macroblock 404 and the macroblock 406 can be selected as they lie along the boundary of the encoded frame and are edgewise adjacent to the position 402. Second, the macroblock 416 can be selected as the macroblock 416 is edge-wise adjacent to the position 402. In this case, the macroblocks 408, 410 and 412 cannot be considered for selection as they lie outside the encoded frame boundary.
  • FIG. 5 illustrates a position 502 of a missing macroblock in an encoded frame, in accordance with yet another embodiment of the present invention.
  • the position 502 can have a neighboring macroblock 504.
  • The macroblock 504 can include eight regions a, b, c, d, e, f, g and h. Each region of the eight regions can have a corresponding motion-vector. Of these regions, for example, the regions a and b are equally close to the center of the missing macroblock position 502. Now, as the region b appears first in a clockwise scan, the motion-vector of the region b can be selected as the representative motion-vector.
  • In another case, the region a and the region b are not selected, but the motion-vector of a region which is closest to both the top frame boundary and the position 502 of the missing macroblock can be selected.
  • the motion vector of the region h can be selected as the representative motion-vector for the macroblock 504.
  • FIG. 6 is a block diagram of an electronic device 600, in accordance with an embodiment of the present invention.
  • the electronic device can include a receiver 602 and a processor 604.
  • The receiver 602 can be configured to receive video data from a network. Examples of the network include, but are not limited to, a WAN, a wireless network, a Bluetooth network, a WiMax network, a wired network and a LAN.
  • The video data received can be in the form of data packets, which include multiple reference frames and several motion compensations. Further, these data packets can be encoded to form the encoded video data.
  • The encoded video data can include multiple encoded frames. Each encoded frame of the multiple encoded frames can be predicted from the multiple reference frames and corresponding motion compensations. Also, each of the multiple encoded frames can include multiple macroblocks.
  • The processor 604 can be configured to determine a position in an encoded frame of the multiple encoded frames where a macroblock is missing. Further, the position can be surrounded by multiple neighboring macroblocks. The processor 604 can also be configured to select one or more macroblocks based on a set of conditions. For one embodiment, the set of conditions can be arranged in a predefined order.
  • the set of conditions can be based on the position of the missing macroblock.
  • the set of conditions have been explained in detail in conjunction with FIG. 3 and FIG. 4.
  • the processor 604 can also be configured to rank the selected one or more macroblocks based on a set of pre-defined criteria.
  • the set of pre-defined criteria have been explained in detail in conjunction with FIG. 3.
  • each macroblock of the selected one or more macroblocks can include a representative motion-vector.
  • The representative motion-vector of a selected macroblock can be determined from multiple motion-vectors associated with that macroblock.
  • the selected macroblock may have multiple motion-vectors representing different regions of the selected macroblock.
  • The motion-vector corresponding to the region closest to the center of the position of the missing macroblock can be selected as the representative motion-vector for the selected macroblock.
  • The motion-vector of the region that appears first in a clockwise scan around the position of the missing macroblock can be selected as the representative motion-vector.
  • The processor 604 can also determine a predicted motion-vector based on one or more of the representative motion-vectors of the ranked one or more macroblocks. For one embodiment, the processor 604 is further configured to reconstruct the video image data by placing a macroblock from a previous encoded frame at the position of the missing macroblock. The macroblock from the previous encoded frame can be selected based on the predicted motion-vector. As described above, the present invention provides a method and system for processing encoded video data. The present invention can be implemented in a wide variety of electronic appliances, such as a video telephone, a video conferencing device, a personal computer and the like.
  • The present invention reconstructs the lost data using a selected set of motion-vectors of neighboring macroblocks.
  • The invention helps improve video quality by concealing errors caused by the loss of video data.
  • The present invention also takes into account the degree of coherence between the lost data and the neighboring data while concealing errors caused by the loss of video data.
  • The method provides better video quality and is less computationally intensive than other available methods.
  • embodiments of the invention described herein may be comprised of one or more conventional processors and unique stored program instructions that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the embodiments of the invention described herein.
  • the non-processor circuits may include, but are not limited to, a radio receiver, a radio transmitter, signal drivers, clock circuits, power source circuits, and user input devices. As such, these functions may be interpreted as steps of a method for processing encoded video data.

Abstract

A method for processing encoded video data. The encoded video data comprises a plurality of encoded frames. Each of the plurality of encoded frames contains a plurality of macroblocks. The method includes determining (204), in an encoded frame of the plurality of encoded frames, a position where a macroblock is missing. The position is surrounded by a plurality of neighboring macroblocks. The method also includes selecting (206) one or more macroblocks based on a set of conditions arranged in a predefined order. The method further includes ranking (208) the selected macroblock(s) based on a set of predefined criteria. In addition, the method includes determining (210) a predicted motion-vector based on the representative motion-vectors of one or more of the ranked macroblocks. The method also includes processing (212) the encoded video data based on the predicted motion-vector.
PCT/US2007/082822 2006-12-29 2007-10-29 Method and system for processing encoded video data WO2008082762A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN2839DE2006 2006-12-29
IN2839/DEL/2006 2006-12-29

Publications (2)

Publication Number Publication Date
WO2008082762A1 true WO2008082762A1 (fr) 2008-07-10
WO2008082762B1 WO2008082762B1 (fr) 2008-08-21

Family

ID=39588957

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/082822 WO2008082762A1 (fr) 2007-10-29 Method and system for processing encoded video data

Country Status (1)

Country Link
WO (1) WO2008082762A1 (fr)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060013303A1 (en) * 1997-11-14 2006-01-19 Ac Capital Management, Inc. Apparatus and method for compressing video information
US6970938B2 (en) * 2000-12-28 2005-11-29 Sony Corporation Signal processor
US20040146113A1 (en) * 2001-05-29 2004-07-29 Valente Stephane Edouard Error concealment method and device
US20060215919A1 (en) * 2001-12-17 2006-09-28 Microsoft Corporation Spatial extrapolation of pixel values in intraframe video coding and decoding

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10721490B2 (en) 2010-05-20 2020-07-21 Interdigital Vc Holdings, Inc. Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
US10171813B2 (en) 2011-02-24 2019-01-01 Qualcomm Incorporated Hierarchy of motion prediction video blocks
US10659791B2 (en) 2011-02-24 2020-05-19 Qualcomm Incorporated Hierarchy of motion prediction video blocks

Also Published As

Publication number Publication date
WO2008082762B1 (fr) 2008-08-21

Similar Documents

Publication Publication Date Title
EP0947103B1 (fr) Parallel decoding of interleaved data streams in an MPEG decoder
JP4346114B2 (ja) MPEG decoder providing a plurality of standard output signals
US9445097B2 (en) Image decoding method using intra prediction mode
JP6163674B2 (ja) Content adaptive bi-directional or functionally predictive multi-pass pictures for high-efficiency next-generation video coding
US6496537B1 (en) Video decoder with interleaved data processing
US6377628B1 (en) System for maintaining datastream continuity in the presence of disrupted source data
CN1209020A (zh) 视频目标平面的时间和空间可变尺度编码
US9729869B2 (en) Adaptive partition subset selection module and method for use therewith
US6970504B1 (en) Parallel decoding of interleaved data streams within an MPEG decoder
US9420308B2 (en) Scaled motion search section with parallel processing and method for use therewith
WO2008082762A1 (fr) Method and system for processing encoded video data
US8520738B2 (en) Video decoder with hybrid reference texture
EP1897375A2 (fr) Decoding method and decoder with rounding means
MXPA99005605A (en) Formatting of recompressed data in an mpeg decoder

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07863606

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07863606

Country of ref document: EP

Kind code of ref document: A1