WO2009057956A1 - Apparatus and method of decompressing distributed video coded video using error correction - Google Patents



Publication number
WO2009057956A1
Authority
WO
WIPO (PCT)
Prior art keywords
picture
reconstructed
unit
similarity
decoding
Prior art date
Application number
PCT/KR2008/006403
Other languages
French (fr)
Inventor
Byeung Woo Jeon
Bong Hyuk Ko
Original Assignee
Sungkyunkwan University Foundation For Corporate Collaboration
Priority date
Filing date
Publication date
Application filed by Sungkyunkwan University Foundation For Corporate Collaboration filed Critical Sungkyunkwan University Foundation For Corporate Collaboration
Publication of WO2009057956A1 publication Critical patent/WO2009057956A1/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H04N19/895Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates, in general, to an apparatus and method for decoding distributed video-coded video using error correction, and, more particularly, to a decoding apparatus and method for decoding distributed video-coded video using error correction, which detects and corrects decoding errors that occur in a reconstructed picture, thus improving the quality of the reconstructed picture.
  • VOD Video-On-Demand
  • CATV Cable Television
  • MPEG Moving Picture Experts Group
  • H.26x ITU-T video coding standards (H.261, H.263, H.264, etc.)
  • DMB Digital Multimedia Broadcasting
  • For the compression of digital video data, three methods are mainly used: a method of reducing temporal redundancy, a method of reducing spatial redundancy, and a method of reducing the statistical redundancy of data.
  • a representative method of reducing temporal redundancy is the motion estimation and compensation (ME/MC) technique.
  • DSC Distributed Source Coding
  • DVC Distributed Video Coding
  • Wyner-Ziv coding is based on "Wyner-Ziv coding for video: Applications to compression and error resilience", a paper published by A. Aaron et al.
  • This DVC technology reconstructs a current picture as follows: the decoder generates side information for the current picture using the similarity between the current picture and its neighboring pictures, regards this side information as a noisy version of the current picture in which the noise has been added by a virtual channel, receives parity bits generated using a channel code from the encoder, and uses them to eliminate the noise in the side information.
  • FIG. 1 is a diagram showing the construction of a conventional encoder 110 based on Wyner-Ziv coding and a decoder 130 corresponding to the encoder 110.
  • the conventional encoder 110 based on Wyner-Ziv coding includes a key picture encoding unit 114, a block segmentation unit 111, a quantization unit 112, and a channel code encoding unit 113.
  • the decoder 130 corresponding to the encoder includes a key picture decoding unit 133, a channel code decoding unit 131, a side information generation unit 134, and a video reconstruction unit 132.
  • the encoder 110 based on Wyner-Ziv coding classifies pictures to be coded into two types.
  • Respective key pictures are typically encoded by the key picture encoding unit 114 using a predetermined method selected by a user, such as intra-picture coding of H.264/Advanced Video Coding (AVC), and are transmitted to the decoder 130.
  • the key picture decoding unit 133 of the decoder 130 corresponding to the conventional encoder 110 based on Wyner-Ziv coding reconstructs key pictures, which have been encoded using the predetermined method and have been transmitted.
  • the side information generation unit 134 generates side information corresponding to a WZ picture using the key pictures reconstructed by the key picture decoding unit 133.
  • the side information generation unit 134 generates side information corresponding to a WZ picture to be reconstructed using interpolation, in which linear motion between the key pictures previous to and subsequent to the WZ picture is assumed.
  • extrapolation may also be used, but interpolation is used in most cases because it is better than extrapolation from the standpoint of performance.
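The interpolation described above can be illustrated with a minimal sketch. The following Python example is not part of the patent; the pixel arrays and the zero-motion assumption are illustrative. It averages co-located pixels of the previous and next key pictures, which is the special case of linear-motion interpolation in which all motion vectors are zero:

```python
# Illustrative sketch (not the patent's method): build side information
# for a WZ picture by averaging co-located pixels of the previous and
# next reconstructed key pictures -- the zero-motion special case of the
# linear-motion interpolation described above.

def generate_side_information(prev_key, next_key):
    """prev_key/next_key: pictures as lists of rows of pixel values."""
    return [
        [(p + n) // 2 for p, n in zip(prev_row, next_row)]
        for prev_row, next_row in zip(prev_key, next_key)
    ]

prev_key = [[10, 20], [30, 40]]
next_key = [[14, 28], [34, 44]]
side_info = generate_side_information(prev_key, next_key)
# side_info == [[12, 24], [32, 42]]
```

A full implementation would first estimate a motion field between the two key pictures and motion-compensate each block before averaging.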
  • the block segmentation unit 111 of the encoder 110 segments the input WZ picture into predetermined coding units, and the quantization unit 112 performs quantization on each of the coding units. Further, the channel code encoding unit 113 generates parity bits for the quantized values using a channel code.
  • the generated parity bits are stored in a parity buffer (not shown) and are then sequentially transmitted according to requests made by the decoder 130 through a feedback channel.
  • the channel code decoding unit 131 of FIG. 1 receives the parity bits from the encoder 110 and thus decodes quantized values.
  • the video reconstruction unit 132 of FIG. 1 receives the quantized values decoded by the channel code decoding unit 131, inverse-quantizes the quantized values, and reconstructs the WZ picture.
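The quantization and inverse-quantization pair described above can be sketched as follows; the uniform step size and reconstruction at the bin centre are illustrative assumptions, not details taken from the patent:

```python
# Hedged sketch of the quantization/inverse-quantization pair: the
# encoder-side quantization unit maps a pixel to a bin index, and the
# video reconstruction unit maps the decoded index back to the centre
# of its bin. The uniform step size of 16 is an illustrative assumption.

def quantize(pixel, step):
    """Uniform scalar quantization: pixel value -> bin index."""
    return pixel // step

def inverse_quantize(index, step):
    """Reconstruct to the centre of the quantization bin."""
    return index * step + step // 2

step = 16
index = quantize(75, step)                      # bin index 4
reconstructed = inverse_quantize(index, step)   # 4 * 16 + 8 == 72
```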
  • FIG. 2 is a diagram showing a turbo code-based construction among examples of the constructions of the channel code decoding unit 131 in Wyner-Ziv coding technology of FIG. 1.
  • the channel code decoding unit 131 includes two soft-input/soft-output (SISO) decoding units 210a and 210b, interleavers 213a and 213b, deinterleavers 214a and 214b, channel probability calculation units 211a and 211b, a decision unit 216, and a Demultiplexer (DEMUX) 215.
  • SISO soft-input/soft-output
  • DEMUX Demultiplexer
  • Parity bits transmitted from the encoder 110, which are composed of parity bits for quantized values and for interleaved quantized values, are separated by the DEMUX 215 of FIG. 2 and are input to the respective channel probability calculation units 211a and 211b.
  • the channel probability calculation units 211a and 211b receive the side information, the probability characteristics and statistics of the noise, and the parity bits transmitted from the encoder 110, and accordingly calculate channel probability values.
  • each of the SISO decoding units 210a and 210b performs decoding based on its own channel probability value and an A Priori Probability (APrP) value, provided by the other SISO decoding unit 210a or 210b.
  • APrP A Priori Probability
  • each of the SISO decoding units 210a and 210b obtains a forward state metric from a transition metric while moving from an initial state to a final state in a trellis diagram, and, after the final state has been reached, obtains a backward state metric while moving in a backward direction.
  • An A Posteriori Probability (APoP) value and an extrinsic probability value are obtained using the state metric values and the transition metric value obtained in this way.
  • the decision unit 216 calculates an error rate from the APoP and terminates decoding when the calculated error rate decreases below a threshold value; otherwise the other SISO decoding unit 210a or 210b repeats the above process.
  • the decoder 130 may request additional parity bits from the encoder 110 through a feedback channel.
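The feedback-channel exchange described above amounts to a simple loop: decode, estimate the error rate, and request more parity bits while the estimate is still above the threshold. A hedged sketch follows; the chunked interface and the numeric error-rate values are hypothetical, not taken from the patent:

```python
# Sketch of the feedback loop: after each additional chunk of parity
# bits, the channel code decoding unit estimates an error rate; decoding
# stops once the estimate falls below the threshold, otherwise more
# parity bits are requested. The chunk interface is hypothetical.

def decode_with_feedback(error_rate_per_chunk, threshold):
    """error_rate_per_chunk: estimated error rate after consuming each
    successive chunk of parity bits. Returns how many chunks were
    needed, or None if the supply ran out without converging."""
    chunks_used = 0
    for error_rate in error_rate_per_chunk:
        chunks_used += 1                 # one more chunk was requested
        if error_rate < threshold:
            return chunks_used           # reliable decoding reached
    return None

# Illustrative error-rate estimates after 1, 2, 3 chunks:
needed = decode_with_feedback([0.30, 0.12, 0.004], threshold=0.01)
# needed == 3
```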
  • Such a decoding method fundamentally corrects noise in the side information using a channel code.
  • an object of the present invention is to provide a method and apparatus for decoding distributed video-coded video using error correction, which detects whether a channel code decoding error has occurred in a reconstructed picture using temporal and spatial similarities, and selectively corrects the error depending on the results of the detection, thus improving the quality of the reconstructed picture.
  • the present invention provides an apparatus for decoding distributed video-coded video using error correction, comprising a key picture decoding unit for reconstructing a key picture transmitted by an encoding apparatus; a side information generation unit for generating side information using the key picture reconstructed by the key picture decoding unit; a channel code decoding unit for estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; a video reconstruction unit for reconstructing a Wyner-Ziv (WZ) picture to be decoded using both the quantized value, estimated by the channel code decoding unit, and the side information; and an error correction unit for detecting whether a channel code decoding error has occurred in the WZ picture using the side information and the key picture reconstructed by the key picture decoding unit, and correcting the error in the reconstructed WZ picture on the basis of picture similarity.
  • WZ Wyner-Ziv
  • the error correction unit may comprise a decoding error detection unit for detecting a correction target pixel in which the channel code decoding error has occurred, among pixels of the reconstructed WZ picture, using the side information and the reconstructed key picture; and a decoding error correction unit for correcting the correction target pixel based on similarity between the correction target pixel, detected by the decoding error detection unit, and pixels temporally and/or spatially corresponding to and/or neighboring the correction target pixel.
  • the decoding error detection unit may comprise at least one of a spatial similarity measurement unit for measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a temporal similarity measurement unit for measuring temporal similarity between the reconstructed WZ picture and the side information; and a final detection unit for comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value, and detecting a correction target pixel if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
  • the spatial similarity measurement unit may measure the spatial similarity using differences between values of the specific pixel and the neighboring pixels.
  • the temporal similarity measurement unit may measure the temporal similarity using differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
  • the decoding error correction unit may comprise at least one of a spatial candidate estimation unit for estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a temporal candidate estimation unit for estimating a temporal candidate value based on temporal similarity between the correction target pixel and a corresponding pixel in the reconstructed key picture; and a final correction unit for correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
  • the spatial candidate estimation unit may estimate the spatial candidate value to be a median value among the values of the correction target pixel and its neighboring pixels.
  • the temporal candidate estimation unit may estimate the temporal candidate value through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
  • a method of decoding distributed video-coded video using error correction comprising the steps of: (a) reconstructing at least one key picture transmitted from an encoding apparatus; (b) generating side information using the reconstructed key picture; (c) estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; (d) reconstructing a Wyner-Ziv (WZ) picture using both the estimated quantized value and the side information; (e) detecting whether a channel code decoding error has occurred in the WZ picture using both the side information and the key picture reconstructed by a key picture decoding unit; and (f) correcting the error in the reconstructed WZ picture on the basis of picture similarity.
  • step (e) may comprise at least one of a step of measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a step of measuring temporal similarity between the reconstructed WZ picture and the side information; a step of comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value; and a step of detecting a correction target pixel, in which a channel code decoding error has occurred, if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
  • the spatial similarity may be measured based on differences between values of the specific pixel and the neighboring pixels.
  • the temporal similarity may be measured based on differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
  • step (f) may comprise at least one of a step of estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a step of estimating a temporal candidate value based on temporal similarity between correction target pixel and a corresponding pixel in the reconstructed key picture; and a step of correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
  • the spatial candidate value may be estimated to be a median value among the values of the correction target pixel and its neighboring pixels in the reconstructed WZ picture. Further, the temporal candidate value may be estimated through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
  • whether a decoding error has occurred in a relevant pixel can be detected based on the measurement of the temporal and spatial similarities between each pixel in a reconstructed picture and its neighboring pixels, according to predetermined criteria. Further, for a pixel detected to contain an error, a suitable candidate value is estimated using the temporal and/or spatial similarities to neighboring pixels, and the error can be corrected using the candidate value. Therefore, there is an advantage in that a decoding error occurring in a reconstructed picture can be corrected according to the present invention, thus greatly improving the quality of the reconstructed picture.
  • FIG. 1 is a diagram showing the construction of a conventional encoder based on Wyner-Ziv coding technology and a decoder corresponding to the encoder;
  • FIG. 2 is a diagram showing a turbo code-based construction among examples of the constructions of a channel code decoding unit in the Wyner-Ziv coding technology of FIG. 1;
  • FIG. 3 is a diagram showing the construction of a Wyner-Ziv encoding apparatus and a decoding apparatus, in a pixel domain, for decoding DVC-coded video using error correction including a decoding error detection and correction function according to the present invention;
  • FIG. 4 is a diagram showing an example of the construction of the error correction unit of the decoding apparatus of FIG. 3;
  • FIG. 5 is a diagram showing an example of the construction of the decoding error detection unit of the error correction unit of FIG. 4;
  • FIG. 6 is a diagram showing an example of the construction of the decoding error correction unit of the error correction unit of FIG. 4;
  • FIG. 7 is a diagram showing an example of a specific pixel and its neighboring pixels;
  • FIG. 8 is a diagram showing an example of reference pictures used for error correction by the error correction unit according to the present invention;
  • FIG. 9 is a diagram showing another example of the construction of the decoding error correction unit of the error correction unit of FIG. 4;
  • FIGS. 10 to 13 are diagrams showing a method of decoding DVC-coded video using error correction according to the present invention; and
  • FIG. 14 is a diagram showing the construction of a Wyner-Ziv encoding apparatus and a decoding apparatus, in a transform domain, for decoding DVC-coded video using error correction including a decoding error detection and correction function according to the present invention.
  • an apparatus for decoding distributed video-coded video using error correction comprises a key picture decoding unit for reconstructing a key picture transmitted from an encoding apparatus; a side information generation unit for generating side information using the key picture reconstructed by the key picture decoding unit; a channel code decoding unit for estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; a video reconstruction unit for reconstructing a WZ picture to be decoded using both the quantized value, estimated by the channel code decoding unit, and the side information; and an error correction unit for detecting whether a channel code decoding error has occurred in the WZ picture using the side information and the key picture reconstructed by the key picture decoding unit, and correcting the error in the reconstructed WZ picture on the basis of picture similarity.
  • FIG. 3 is a diagram showing the construction of a Wyner-Ziv encoding apparatus 10 and a decoding apparatus 30 for decoding distributed video-coded video using error correction including a decoding error detection and correction function according to the present invention.
  • the Wyner-Ziv encoding apparatus 10 includes a key picture encoding unit 12 and a WZ picture encoding unit 11. Further, the decoding apparatus 30 according to the present invention includes a key picture decoding unit 33, a channel code decoding unit 32, a side information generation unit 35, a video reconstruction unit 34 and an error correction unit 36.
  • the key picture decoding unit 33 reconstructs key pictures using the data received from the key picture encoding unit 12, and the side information generation unit 35 generates side information for a current WZ picture to be reconstructed using the reconstructed key pictures.
  • the channel code decoding unit 32 estimates quantized values using both the side information from the side information generation unit 35 and parity bits received from the Wyner-Ziv encoding apparatus 10. Further, the video reconstruction unit 34 reconstructs the WZ picture using both the quantized values, estimated by the channel code decoding unit 32, and the side information.
  • the WZ picture (a) reconstructed by the video reconstruction unit 34 is input to the error correction unit 36.
  • the error correction unit 36 detects a location at which a channel code decoding error has occurred in the reconstructed WZ picture (a), using both the side information (b) output from the side information generation unit 35 and the key pictures (c) reconstructed by the key picture decoding unit 33, and corrects the channel code decoding error, thus producing the reconstructed WZ picture (d), the video quality of which is remarkably improved.
  • the channel code decoding unit 32 of FIG. 3 is configured to continuously request and receive parity bits from the Wyner-Ziv encoding apparatus 10, whenever the decoded quantized values are estimated to be insufficiently reliable during channel code decoding, until reliable decoding is possible.
  • the channel code decoding unit 32 may alternatively be configured, depending on the implementation, to receive a predetermined number of parity bits in advance at one time, rather than requesting parity bits several times, and not to use the feedback channel while the parity bits received in advance are being consumed.
  • In this case, the Wyner-Ziv encoding apparatus 10 transmits a predetermined number of parity bits, which have been previously calculated or preset, to the decoding apparatus 30, and the decoding apparatus 30 does not request parity bits from the encoder.
  • As the channel code, a turbo code or a Low Density Parity Check (LDPC) code, which are known to almost reach the Shannon limit, may preferably be used.
  • LDPC Low Density Parity Check
  • FIG. 4 is a diagram showing the construction of the error correction unit 36 according to the present invention.
  • the error correction unit 36 of the present invention includes a decoding error detection unit 361 and a decoding error correction unit 362.
  • the decoding error detection unit 361 detects whether a channel code decoding error has occurred at each location of the WZ picture (a) reconstructed by the video reconstruction unit 34, and an example of the construction thereof is shown in FIG. 5.
  • the decoding error correction unit 362 corrects a pixel in which an error has occurred using either or both of spatial and temporal similarities to neighboring information, and an example of its construction is shown in FIG. 6.
  • Referring to FIG. 5, the decoding error detection unit 361 includes a spatial similarity measurement unit 361a for measuring similarity between a reconstructed pixel and its neighboring pixels in the reconstructed WZ picture (a) from the video reconstruction unit 34; a temporal similarity measurement unit 361c for measuring temporal similarity between corresponding pixels of the side information (b), from the side information generation unit 35, and the reconstructed WZ picture (a) from the video reconstruction unit 34; and a final detection unit 361b for detecting, according to predetermined criteria, whether a channel code decoding error has occurred at a relevant location in the reconstructed WZ picture (a), based on the measurement results of the spatial similarity measurement unit 361a and the temporal similarity measurement unit 361c.
  • the temporal similarity measurement unit 361c may be configured to additionally use the key picture (c) generated by the key picture decoding unit 33.
  • the spatial similarity measurement unit 361a of the decoding error detection unit 361 can calculate the differences between the value of the reconstructed pixel and the maximum and minimum values of its neighboring pixels of FIG. 7 in the reconstructed WZ picture (a), as shown in the following Equation 1, in order to measure the similarity between the reconstructed pixel and its neighboring pixels:

    Δmax = max(N[X(i,j)]) − X(i,j)
    Δmin = X(i,j) − min(N[X(i,j)])
    where N[X(i,j)] = {X(i−1,j), X(i+1,j), X(i,j−1), X(i,j+1)}   ..... (Equation 1)

  • X(i,j) is the pixel value at the location (i,j) in the reconstructed WZ picture (a), and N[X(i,j)] denotes the spatially neighboring pixel values thereof in the reconstructed WZ picture (a), as shown in FIG. 7.
  • Meanwhile, the temporal similarity measurement unit 361c of the decoding error detection unit 361 calculates |X(i,j) − Y(i,j)|, which is the difference between the values of the corresponding pixels of the reconstructed WZ picture (a) and the side information (b), where Y(i,j) is the side-information pixel value at the location (i,j).
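Under the assumption of the four-neighbour set of FIG. 7, the similarity measures of Equation 1 and the temporal difference can be sketched as follows; this is an illustrative example with made-up pixel values, not the patent's implementation:

```python
# Illustrative sketch of Equation 1 with the four-neighbour set of
# FIG. 7:
#   d_max = max(N[X(i,j)]) - X(i,j)
#   d_min = X(i,j) - min(N[X(i,j)])
# plus the temporal measure |X(i,j) - Y(i,j)| against side information Y.

def spatial_similarity(wz, i, j):
    neighbours = [wz[i - 1][j], wz[i + 1][j], wz[i][j - 1], wz[i][j + 1]]
    d_max = max(neighbours) - wz[i][j]
    d_min = wz[i][j] - min(neighbours)
    return d_max, d_min

def temporal_similarity(wz, side_info, i, j):
    return abs(wz[i][j] - side_info[i][j])

wz = [[10, 12, 11],
      [13, 90, 12],    # the centre pixel 90 is a likely decoding error
      [11, 12, 10]]
side = [[10, 12, 11],
        [13, 12, 12],
        [11, 12, 10]]
d_max, d_min = spatial_similarity(wz, 1, 1)    # (-77, 78)
t_diff = temporal_similarity(wz, side, 1, 1)   # 78
```

A pixel that agrees with its neighbourhood and with the side information yields small values for all three measures; the outlier pixel above yields large ones.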
  • the above equations used to measure the spatial and temporal similarities represent only one example of the measurement of spatial or temporal similarity, and various other approaches may also be used.
  • the final detection unit 361b finally detects whether a channel code decoding error has occurred on the basis of either or both of the temporal and spatial similarities. For example, the final detection unit 361b compares the calculated |X(i,j) − Y(i,j)| with a predetermined threshold value 'A', which is a criterion value for error detection, and determines that there is a high probability that a decoding error has occurred in the relevant pixel when the difference value is greater than the threshold value 'A'.
  • the decoding error correction unit 362 performs channel decoding error correction at a location (e) which is detected to have a decoding error, using all or part of the information shown in FIG. 8.
  • the decoding error correction unit includes a spatial candidate estimation unit 362a for estimating a suitable candidate value based on the spatial similarity to neighboring pixels, a temporal candidate estimation unit 362c for estimating a suitable candidate value based on the temporal similarity between the reconstructed key picture (c) and the reconstructed WZ picture (a), and a final correction unit 362b for correcting the pixel in which the error has occurred using the estimated candidate values.
  • the decoding error correction unit 362 may be constructed in various forms depending on which reference picture is to be used to estimate the correction value. That is, when Case 1 of FIG. 8 in which only spatial candidate values are considered is used, the temporal candidate estimation unit 362c constituting the decoding error correction unit may be omitted from the construction.
  • An example of the spatial candidate estimation unit 362a of the decoding error correction unit 362 estimates the most suitable candidate value to be the median value among the current pixel and its neighboring pixels, using the following Equation 2. This corresponds to Case 1 of FIG. 8. In this case, the use of the median value of Equation 2 is not the only possible method, and various other functions may be used according to the user.

    X̂(i,j) = median(X(i,j), N[X(i,j)])   ..... (Equation 2)

  • X̂(i,j) in Equation 2 is the pixel value of the WZ picture (d), which is the corrected reconstructed picture.
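The median-based spatial correction of Equation 2 can be sketched as follows; the four-neighbour window of FIG. 7 is assumed, and `statistics.median` stands in for whatever median implementation is actually used:

```python
# Illustrative sketch of the median correction of Equation 2, using the
# four-neighbour window of FIG. 7. The pixel values are made up.

import statistics

def spatial_candidate(wz, i, j):
    values = [wz[i][j],
              wz[i - 1][j], wz[i + 1][j],
              wz[i][j - 1], wz[i][j + 1]]
    return statistics.median(values)

wz = [[10, 12, 11],
      [13, 90, 12],    # 90 was flagged as a decoding error
      [11, 12, 10]]
corrected = spatial_candidate(wz, 1, 1)
# median of [90, 12, 12, 13, 12] == 12
```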
  • the temporal candidate estimation unit 362c of FIG. 6 may be constructed to additionally use the side information (b) generated by the side information generation unit 35 according to the implementation, as indicated by a dotted line in FIG. 6.
  • the spatial candidate estimation unit 362a and the temporal candidate estimation unit 362c may be arranged in a parallel scheme, so that the final correction unit 362b can use the estimation results of both units together, or in a sequential scheme, so that one candidate estimation unit can use the results of estimation performed by the other. Further, for the purpose of simplifying the apparatus and the like, a scheme using only one type of candidate estimation unit may also be adopted, and thus various schemes may be taken into consideration.
  • As shown in FIG. 8, the temporal candidate estimation unit 362c of the decoding error correction unit estimates the candidate value which is most similar to the value at the relevant location of the reconstructed WZ picture (a), through motion estimation that uses one or more reconstructed key pictures c1, c2 and c3 as reference pictures.
  • the reference pictures used in motion estimation may be key pictures temporally previous to the WZ picture that is currently being corrected and reconstructed (this case is called forward estimation, and corresponds to Case 3 of FIG. 8), may be key pictures temporally subsequent to the WZ picture (this case is called backward estimation, and corresponds to Case 4 of FIG. 8), or may be both the previous and subsequent key pictures (this case is called bidirectional estimation, and corresponds to Case 5 and Case 7 of FIG. 8).
  • In this way, the location of a reference image in a selected key picture, which is most similar to the current image of the reconstructed WZ picture (a), is estimated.
  • forward, backward and bidirectional motion estimation is performed, so that three candidate locations having the highest similarities in respective directions are found, and, among the three candidate locations, the one having the lowest block matching error, such as the Sum of Absolute Differences (SAD), is estimated as the best candidate location.
  • SAD Sum of Absolute Differences
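The SAD-based block matching used to select the best temporal candidate can be sketched as follows; the exhaustive full-picture search, the 2x2 block size, and the picture values are simplifying assumptions, not details from the patent:

```python
# Illustrative sketch of SAD block matching for the temporal candidate:
# search a reference key picture for the block most similar to the block
# around the error location, using the Sum of Absolute Differences.

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def block(picture, top, left, size):
    return [row[left:left + size] for row in picture[top:top + size]]

def best_match(reference, current_block, size):
    """Return the (top, left) position in `reference` whose block has
    the lowest SAD against `current_block`."""
    best_cost, best_pos = None, None
    for top in range(len(reference) - size + 1):
        for left in range(len(reference[0]) - size + 1):
            cost = sad(block(reference, top, left, size), current_block)
            if best_cost is None or cost < best_cost:
                best_cost, best_pos = cost, (top, left)
    return best_pos

reference = [[0, 0, 0, 0],
             [0, 5, 6, 0],
             [0, 7, 8, 0],
             [0, 0, 0, 0]]
current = [[5, 6],
           [7, 8]]
pos = best_match(reference, current, size=2)
# pos == (1, 1): the co-located block matches exactly (SAD 0)
```

In forward, backward, or bidirectional estimation this search would be run against the previous key picture, the next key picture, or both, and the candidate with the lowest SAD kept as the best candidate location.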
  • the final correction unit 362b finally corrects the value of each location, at which an error is detected to have occurred in the reconstructed WZ picture (a), to the estimated candidate value.
  • the decoding error correction unit 362 of FIG. 4 may be constructed as shown in FIG. 9.
  • a decoding error correction unit 362' estimates a correction candidate value using spatial correlation and corrects a value at an error occurrence location by a spatial candidate estimation and correction unit 362'a, similar to the process performed by the spatial candidate estimation unit 362a of FIG. 6.
  • a temporal candidate value is estimated by a motion estimation unit 362'b through the aforementioned motion estimation.
  • a final correction unit 362'c corrects again the value at the error occurrence location using the estimated temporal candidate value.
  • a more accurate candidate value can be estimated when estimating a temporal candidate value, thus further improving video quality.
  • a channel decoding error detection step S10 may be implemented using various schemes, and an embodiment thereof is shown in FIG. 11.
  • the similarity between each pixel and its neighboring pixels in the reconstructed WZ picture (a) is calculated at a spatial similarity measurement step S11.
  • the measurement of the spatial similarity is performed by calculating the differences between the value of each pixel and the maximum and minimum values of its neighboring pixels in the reconstructed WZ picture (a), as shown in the above Equation 1.
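As a rough sketch of this measurement (assuming a 3x3 neighborhood, which the patent's Equation 1 may define differently), the two spatial similarity values can be computed as:

```python
import numpy as np

def spatial_similarity(frame, y, x):
    """Distances from pixel (y, x) to the maximum and minimum of its
    neighboring pixels, in the spirit of Equation 1. A large positive
    delta_max or delta_min marks the pixel as a spatial outlier."""
    h, w = frame.shape
    neighbours = [int(frame[j, i])
                  for j in range(max(0, y - 1), min(h, y + 2))
                  for i in range(max(0, x - 1), min(w, x + 2))
                  if (j, i) != (y, x)]
    p = int(frame[y, x])
    delta_max = p - max(neighbours)  # positive if the pixel exceeds every neighbour
    delta_min = min(neighbours) - p  # positive if the pixel is below every neighbour
    return delta_max, delta_min
```

A pixel whose value lies between the neighborhood minimum and maximum yields non-positive values for both measures, so only isolated outliers produce a large positive measure.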
  • at a temporal similarity measurement step S12, the difference between the values of the corresponding pixels of the side information (b) and the reconstructed WZ picture (a) is calculated as the temporal similarity.
  • the spatial and temporal similarities calculated at steps S11 and S12 are compared with each other and processed as described above: when the measured temporal similarity value is greater than a predetermined threshold value A and, at the same time, the measured spatial similarity value, that is, one of δmax and δmin of Equation 1, is greater than a predetermined threshold value B, it is detected that a decoding error has occurred in the relevant pixel.
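Taken together, the detection rule of steps S11 to S13 might look like the following vectorized sketch; the threshold values for A and B and the 3x3 neighborhood are illustrative assumptions:

```python
import numpy as np

def detect_errors(wz, side_info, thr_a=30, thr_b=40):
    """Return a boolean mask of pixels in the reconstructed WZ picture (a)
    suspected of a channel decoding error. A pixel is flagged when its
    temporal difference to the side information (b) exceeds threshold A
    and its spatial outlier measure exceeds threshold B."""
    wz_i = wz.astype(np.int32)
    temporal = np.abs(wz_i - side_info.astype(np.int32))
    # Spatial measure: distance of each pixel from the max/min of its
    # 3x3 neighbourhood (a stand-in for delta_max / delta_min of Equation 1).
    padded = np.pad(wz_i, 1, mode='edge')
    h, w = wz.shape
    shifts = [padded[j:j + h, i:i + w]
              for j in range(3) for i in range(3) if (j, i) != (1, 1)]
    stack = np.stack(shifts)
    spatial = np.maximum(wz_i - stack.max(axis=0), stack.min(axis=0) - wz_i)
    return (temporal > thr_a) & (spatial > thr_b)
```

Requiring both conditions keeps genuine image detail (which usually agrees with either the side information or the neighborhood) from being flagged as a decoding error.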
  • in the channel decoding error detection step S10 shown in FIG. 11, both the spatial and temporal similarities are used at the error occurrence detection step S13.
  • the error occurrence detection step S13 may also be performed considering only one of the spatial and temporal similarities, as described above.
  • channel decoding error correction is performed at a location, at which a channel decoding error is detected to have occurred at the channel decoding error detection step S10, using all or part of the information shown in FIG. 8.
  • this information may be organized into any one of Cases 1 to 7 of FIG. 8 or a combination thereof.
  • data required at the decoding error correction step S30 is determined.
  • a spatial candidate estimation step S31 of estimating a suitable candidate value based on the spatial similarity to the neighboring pixels, a temporal candidate estimation step S32 of estimating a suitable candidate value based on the temporal similarity between the reconstructed key picture (c) and the reconstructed WZ picture (a), and a final correction step S33 of correcting a pixel in which an error has occurred using the estimated candidate values, are performed.
  • the temporal candidate estimation step S32 may be omitted from the channel decoding error correction step S30 of FIG. 12. Meanwhile, the spatial candidate estimation step S31 and the temporal candidate estimation step S32 may be performed either in parallel or sequentially depending on the implementation, and only one of the steps S31 and S32 may be performed for simplicity.
  • An embodiment of the spatial candidate estimation step S31 is implemented to estimate a suitable candidate value to be the median value among the values of the current pixel and its neighboring pixels of FIG. 7 through the use of Equation 2. This corresponds to Case 1 of FIG. 8.
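Under the assumption that Equation 2 is a plain median over the pixel and its neighbors (the equation itself may weight or restrict the set differently), the estimate reduces to:

```python
import statistics

def spatial_candidate(pixel, neighbours):
    """Median of the erroneous pixel value and its neighboring pixel
    values, used as the spatial correction candidate (Case 1 of FIG. 8)."""
    return statistics.median([pixel] + list(neighbours))
```

A burst-corrupted pixel surrounded by consistent neighbors is thereby pulled back to a value representative of its neighborhood.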
  • the use of the median value of Equation 2 is not the only possible method; various other functions may be used depending on the implementation.
  • the spatial candidate estimation step S31 and the temporal candidate estimation step S32 may be performed in parallel before the final correction step S33 is performed.
  • the candidate estimation steps may be sequentially performed so that the results of one of the candidate estimation steps are used by the other candidate estimation step.
  • a candidate value which is most similar to the value of the relevant location of the reconstructed WZ picture (a) is estimated through motion estimation performed by using one or more reconstructed key pictures (c) as a reference picture, as shown in FIG. 8.
  • the reference picture used at motion estimation may be a key picture temporally previous to the WZ picture desired to be currently corrected and reconstructed, or a key picture temporally subsequent to the WZ picture, or both the previous and subsequent key pictures, as described above.
  • the location of the reference picture in the selected key picture which is most similar to the current picture of the reconstructed WZ picture (a) is estimated.
  • forward, backward, and bidirectional motion estimation are performed, so that three candidate locations having the highest similarities in respective directions are found, and, among the three candidate locations, one having the lowest block matching error, such as the Sum of Absolute Differences (SAD), is estimated as the best candidate location.
  • the value at each location at which the error is detected to have occurred in the reconstructed WZ picture (a) is finally corrected to the estimated candidate value.
  • the channel decoding error correction step S30 of FIG. 10 may be performed, as shown in FIG. 13.
  • a correction candidate value is estimated using spatial correlation and a value at an error occurrence location is corrected at a spatial candidate estimation and correction step S31a, similar to the process performed at the spatial candidate estimation step S31 of FIG. 12.
  • a temporal candidate value is estimated through the aforementioned motion estimation at a motion estimation step S31b.
  • the value at the error occurrence location is corrected again using the estimated temporal candidate value.
  • a transform unit 37 and an inverse transform unit 38 may be additionally provided, as shown in FIG. 14.
  • the representation of a pixel or a relevant location described in the present invention may be regarded as the representation of a transform coefficient in transform domain, such as an integer transform, a Discrete Cosine Transform (DCT) or a wavelet transform, according to the implementation of the present invention.
  • the transform unit 37 and the inverse transform unit 38 are added to the construction of FIG. 3.
  • the representation of a pixel used in the description of the present invention may also be implemented by the representation of a transform coefficient.

Abstract

The present invention relates to an apparatus and method for decoding distributed video-coded video using error correction. The decoding apparatus of the present invention includes a key picture decoding unit (33) for reconstructing a key picture transmitted from an encoding apparatus. A side information generation unit (35) generates side information using the key picture. A channel code decoding unit (32) estimates a quantized value using both parity bits transmitted from the encoding apparatus and the side information. A video reconstruction unit (34) reconstructs a WZ picture to be decoded using both the quantized value and the side information. An error correction unit (36) detects whether a channel code decoding error has occurred in the WZ picture using the side information and the key picture, and corrects the error in the reconstructed WZ picture on the basis of picture similarity.

Description

APPARATUS AND METHOD OF DECOMPRESSING DISTRIBUTED VIDEO CODED VIDEO USING ERROR CORRECTION
[Technical Field] The present invention relates, in general, to an apparatus and method for decoding distributed video-coded video using error correction, and, more particularly, to a decoding apparatus and method for decoding distributed video-coded video using error correction, which detects and corrects decoding errors occurring in a reconstructed picture, thus improving the quality of the reconstructed picture.
[Background Art]
Generally, digital video data used in video conferencing, Video-On-Demand (VOD) receivers, digital broadcast receivers, Cable Television (CATV), etc. have a considerable data size, and thus such data are compressed using an efficient compression method rather than being used in uncompressed form.
Technology for compressing such video is based on various compression standards such as Moving Picture Experts Group (MPEG) and H.26x. These technologies have been used in various applications, such as video players, VOD, video phones, and Digital Multimedia Broadcasting (DMB).
Recently, with the development of wireless communication in 2.5G/3G technology, video compression has been used in video transmission even in wireless mobile environments.
For the compression of digital video data, three methods, that is, a method of reducing temporal redundancy, a method of reducing spatial redundancy, and a method of reducing the statistical redundancy of data, are mainly used. Of these methods, a representative method of reducing temporal redundancy is the motion estimation and compensation (ME/MC) technique.
Current coding technologies can achieve high coding efficiency through the elimination of such temporal redundancy, but motion estimation and compensation also occupies the largest computational load in a video encoder, and thus reducing encoder complexity becomes an important technical issue in resource-constrained environments such as sensor networks.
Distributed Source Coding (DSC) technology based on the Slepian-Wolf theorem has attracted attention as one method for solving the complexity problem of an encoder. The Slepian-Wolf theorem mathematically proves that even if sources having correlation are encoded independently, when they are decoded in conjunction with each other, a coding gain can be obtained to the same degree as that obtained when the respective sources are jointly encoded.
Distributed Video Coding (DVC) is a technology that extends DSC, which corresponds to lossless compression, to the case of lossy compression, and is based on the Wyner-Ziv theorem, which extends the Slepian-Wolf theorem, the theoretical basis of DSC technology, to the case of lossy compression. From the standpoint of video coding, both technologies make it possible to shift the motion estimation and compensation processes, which used to be conducted to reduce redundancy between pictures in the prior art, to the decoder side without considerable loss of coding gain. Of DVC technologies, a well-known technology is Wyner-Ziv coding based on "Wyner-Ziv coding for video: Applications to compression and error resilience", a paper published by A. Aaron, S. Rane, R. Zhang, B. Girod, et al. in Proc. IEEE Data Compression Conference, 2003. This DVC technology reconstructs a current picture in such a way that a decoder generates side information for the current picture using the similarity between the current picture and its neighboring pictures, regards this side information as a noisy version of the current picture in which the noise has been added by a virtual channel, receives parity bits generated using a channel code from an encoder, and eliminates the noise in the side information.
FIG. 1 is a diagram showing the construction of a conventional encoder 110 based on Wyner-Ziv coding and a decoder 130 corresponding to the encoder 110. As shown in FIG. 1, the conventional encoder 110 based on Wyner-Ziv coding includes a key picture encoding unit 114, a block segmentation unit 111, a quantization unit 112, and a channel code encoding unit 113. The decoder 130 corresponding to the encoder includes a key picture decoding unit 133, a channel code decoding unit 131, a side information generation unit 134, and a video reconstruction unit 132. The encoder 110 based on Wyner-Ziv coding classifies pictures to be coded into two types. One is a picture to be coded through DVC (hereinafter referred to as a 'WZ picture'), and the other is a picture to be coded through a conventional coding scheme other than DVC (hereinafter referred to as a 'key picture'). Respective key pictures are typically encoded by the key picture encoding unit 114 using a predetermined method selected by a user, such as intra-picture coding of H.264/Advanced Video Coding (AVC), and are transmitted to the decoder 130. The key picture decoding unit 133 of the decoder 130 corresponding to the conventional encoder 110 based on Wyner-Ziv coding reconstructs key pictures, which have been encoded using the predetermined method and have been transmitted. The side information generation unit 134 generates side information corresponding to a WZ picture using the key pictures reconstructed by the key picture decoding unit 133.
Typically, the side information generation unit 134 generates side information corresponding to a
WZ picture to be reconstructed using interpolation in which a linear motion between key pictures, which are previous to and subsequent to the WZ picture, is assumed. Although extrapolation may be used, in most cases interpolation is used because interpolation is better than extrapolation from the standpoint of performance.
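The simplest form of such interpolation, assuming zero motion between the two key pictures (a practical side information generator would first estimate and compensate the motion between them), is a per-pixel average:

```python
import numpy as np

def side_information(prev_key, next_key):
    """Side information for a WZ picture located midway between two
    reconstructed key pictures, under a linear (here: zero) motion
    assumption: the co-located pixel average of the two key pictures."""
    avg = (prev_key.astype(np.int32) + next_key.astype(np.int32)) // 2
    return avg.astype(np.uint8)
```

Motion-compensated interpolation replaces the co-located pixels with pixels fetched along the estimated linear motion trajectory, but the averaging step is the same.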
Meanwhile, in order to encode the WZ picture, the block segmentation unit 111 of the encoder 110 segments the input WZ picture into predetermined coding units, and the quantization unit 112 performs quantization on each of the coding units. Further, the channel code encoding unit 113 generates parity bits for the quantized values using a channel code.
The generated parity bits are stored in a parity buffer (not shown) and are then sequentially transmitted in response to requests made by the decoder 130 through a feedback channel. The channel code decoding unit 131 of FIG. 1 receives the parity bits from the encoder 110 and thus decodes quantized values. The video reconstruction unit 132 of FIG. 1 receives the quantized values decoded by the channel code decoding unit 131, inverse-quantizes the quantized values, and reconstructs the WZ picture.
In the above process, ambiguity occurring during the inverse quantization is solved by referring to the side information from the side information generation unit 134. For a detailed description thereof,
"Wyner-Ziv coding for video: Applications to compression and error resilience", which is a paper published by A. Aaron, S. Rane, R. Zhang, B. Girod, et al. in Proc. IEEE Data Compression Conference, 2003, is referred to.
FIG. 2 is a diagram showing a turbo code-based construction among examples of the constructions of the channel code decoding unit 131 in the Wyner-Ziv coding technology of FIG. 1. As shown in FIG. 2, the channel code decoding unit 131 includes two soft-input/soft-output (SISO) decoding units 210a and 210b, interleavers 213a and 213b, deinterleavers 214a and 214b, channel probability calculation units 211a and 211b, a decision unit 216, and a Demultiplexer (DEMUX) 215.
Parity bits transmitted from the encoder 110, which are composed of parity bits for quantized values and for interleaved quantized values, are separated by the DEMUX 215 of FIG. 2, and are input to respective channel probability calculation units 211a and 211b. The channel probability calculation units
211a and 211b receive the side information, the probability characteristics and statistics of the noise, and the parity bits transmitted from the encoder 110, and accordingly calculate channel probability values.
Further, each of the SISO decoding units 210a and 210b performs decoding based on its own channel probability value and an A Priori Probability (APrP) value provided by the other SISO decoding unit 210a or 210b. In this case, each of the SISO decoding units 210a and 210b obtains a forward state metric from a transition metric while moving from an initial state to a final state in a trellis diagram, and, after the final state has been reached, obtains a backward state metric while moving in a backward direction. An A Posteriori Probability (APoP) value and an extrinsic probability value are obtained using the state metric values and the transition metric value obtained in this way. The decision unit 216 calculates an error rate from the APoP and terminates decoding when the calculated error rate decreases below a threshold value; otherwise the other SISO decoding unit 210a or 210b repeats the above process. However, when the error rate does not decrease below the threshold value even after a predetermined number of repetitions have been performed, the decoder 130 may request additional parity bits from the encoder 110 through a feedback channel. Such a decoding method fundamentally corrects noise in the side information using a channel code. However, in video having complicated and large amounts of motion, it is difficult to generate precise side information, so that the noise in the side information may increase enormously, with the result that an incorrect transition is wrongly estimated as a correct one if the number of parity bits transmitted is insufficient. Meanwhile, since the two SISO decoding units 210a and 210b exchange extrinsic probabilities with each other, the results of a wrongly estimated transition of one SISO decoding unit 210a or 210b are transferred to the other SISO decoding unit 210a or 210b, thus causing another error in the other SISO decoding unit.
When the conventional method is used, this problem cannot be avoided unless a sufficient number of parity bits are received. As a result, reconstructed values may occasionally differ entirely from the original values due to the influence of accumulated errors, so that salt-and-pepper-like noise is produced, greatly deteriorating the quality of the video.
Therefore, technology for reducing a produced channel code decoding error to a predetermined level or less, or correcting a produced error using correlated information has been earnestly desired.
[Disclosure] [Technical Problem]
Accordingly, the present invention has been made to comply with the aforementioned technical requirement, and an object of the present invention is to provide a method and apparatus for decoding distributed video-coded video using error correction, which detects whether a channel code decoding error has occurred in a reconstructed picture using temporal and spatial similarities, and selectively corrects the error depending on the results of the detection, thus improving the quality of the reconstructed picture.
[Technical Solution]
In order to accomplish the above object, the present invention provides an apparatus for decoding distributed video-coded video using error correction, comprising a key picture decoding unit for reconstructing a key picture transmitted from an encoding apparatus; a side information generation unit for generating side information using the key picture reconstructed by the key picture decoding unit; a channel code decoding unit for estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; a video reconstruction unit for reconstructing a Wyner-Ziv (WZ) picture to be decoded using both the quantized value, estimated by the channel code decoding unit, and the side information; and an error correction unit for detecting whether a channel code decoding error has occurred in the WZ picture using the side information and the key picture reconstructed by the key picture decoding unit, and correcting the error in the reconstructed WZ picture on the basis of picture similarity.
In this case, the error correction unit may comprise a decoding error detection unit for detecting a correction target pixel in which the channel code decoding error has occurred, among pixels of the reconstructed WZ picture, using the side information and the reconstructed key picture; and a decoding error correction unit for correcting the correction target pixel based on similarity between the correction target pixel, detected by the decoding error detection unit, and pixels temporally and/or spatially corresponding to and/or neighboring the correction target pixel.
Further, the decoding error detection unit may comprise at least one of a spatial similarity measurement unit for measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a temporal similarity measurement unit for measuring temporal similarity between the reconstructed WZ picture and the side information; and a final detection unit for comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value, and detecting a correction target pixel if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
Further, the spatial similarity measurement unit may measure the spatial similarity using differences between values of the specific pixel and the neighboring pixels.
Further, the temporal similarity measurement unit may measure the temporal similarity using differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
Further, the decoding error correction unit may comprise at least one of a spatial candidate estimation unit for estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a temporal candidate estimation unit for estimating a temporal candidate value based on temporal similarity between the correction target pixel and a corresponding pixel in the reconstructed key picture; and a final correction unit for correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
In this case, the spatial candidate estimation unit may estimate the spatial candidate value to be a median value among the values of the correction target pixel and its neighboring pixels.
Further, the temporal candidate estimation unit may estimate the temporal candidate value through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
In addition, the above object may be accomplished according to another embodiment by a method of decoding distributed video-coded video using error correction, comprising the steps of (a) reconstructing at least one key picture transmitted from an encoding apparatus; (b) generating side information using the reconstructed key picture; (c) estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; (d) reconstructing a Wyner-Ziv (WZ) picture using both the estimated quantized value and the side information; (e) detecting whether a channel code decoding error has occurred in the WZ picture using both the side information and the key picture reconstructed by a key picture decoding unit; and (f) correcting the error in the reconstructed WZ picture on the basis of picture similarity.
In this case, step (e) may comprise at least one of a step of measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a step of measuring temporal similarity between the reconstructed WZ picture and the side information; a step of comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value; and a step of detecting a correction target pixel, in which a channel code decoding error has occurred, if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
Further, the spatial similarity may be measured based on differences between values of the specific pixel and the neighboring pixels.
Further, the temporal similarity may be measured based on differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
Further, step (f) may comprise at least one of a step of estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a step of estimating a temporal candidate value based on temporal similarity between the correction target pixel and a corresponding pixel in the reconstructed key picture; and a step of correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
In this case, the spatial candidate value may be estimated to be a median value among the values of the correction target pixel and its neighboring pixels in the reconstructed WZ picture. Further, the temporal candidate value may be estimated through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
[Advantageous Effects]
According to the present invention, whether a decoding error has occurred in a relevant pixel can be detected based on the measurement of the temporal and spatial similarities between each pixel in a reconstructed picture and its neighboring pixels, and according to predetermined criteria. Further, for a relevant pixel in which an error is detected to have occurred, a suitable candidate value is estimated using the temporal and/or spatial similarities to neighboring pixels, and the error can be corrected using the candidate value. Therefore, there is an advantage in that a decoding error occurring in a reconstructed picture can be corrected according to the present invention, thus greatly improving the quality of the reconstructed picture.
[Description of Drawings] FIG. 1 is a diagram showing the construction of a conventional encoder based on Wyner-Ziv coding technology and a decoder corresponding to the encoder;
FIG. 2 is a diagram showing a turbo code-based construction among examples of the constructions of a channel code decoding unit in the Wyner-Ziv coding technology of FIG. 1;
FIG. 3 is a diagram showing the construction of a Wyner-Ziv encoding apparatus and a decoding apparatus, in a pixel domain, for decoding DVC-coded video using error correction including a decoding error detection and correction function according to the present invention;
FIG. 4 is a diagram showing an example of the construction of the error correction unit of the decoding apparatus of FIG. 3;
FIG. 5 is a diagram showing an example of the construction of the decoding error detection unit of the error correction unit of FIG. 4;
FIG. 6 is a diagram showing an example of the construction of the decoding error correction unit of the error correction unit of FIG. 4;
FIG. 7 is a diagram showing an example of a specific pixel and its neighboring pixels;
FIG. 8 is a diagram showing an example of reference pictures used for error correction by the error correction unit according to the present invention;
FIG. 9 is a diagram showing another example of the construction of the decoding error correction unit of the error correction unit of FIG. 4;
FIGS. 10 to 13 are diagrams showing a method of decoding DVC-coded video using error correction according to the present invention; and
FIG. 14 is a diagram showing the construction of a Wyner-Ziv encoding apparatus and a decoding apparatus, in a transform domain, for decoding DVC-coded video using error correction including a decoding error detection and correction function according to the present invention.
<Description of reference characters of important parts>
10: Wyner-Ziv encoding apparatus 30: decoding apparatus
32: channel code decoding unit 33 : key picture decoding unit
34: video reconstruction unit 35: side information generation unit
36: error correction unit
[Best Mode]
In accordance with the present invention, the above object is accomplished by an apparatus for decoding distributed video-coded video using error correction, comprising a key picture decoding unit for reconstructing a key picture transmitted from an encoding apparatus; a side information generation unit for generating side information using the key picture reconstructed by the key picture decoding unit; a channel code decoding unit for estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; a video reconstruction unit for reconstructing a WZ picture to be decoded using both the quantized value, estimated by the channel code decoding unit, and the side information; and an error correction unit for detecting whether a channel code decoding error has occurred in the WZ picture using the side information and the key picture reconstructed by the key picture decoding unit, and correcting the error in the reconstructed WZ picture on the basis of picture similarity.
[Mode for Invention]
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. FIG. 3 is a diagram showing the construction of a Wyner-Ziv encoding apparatus 10 and a decoding apparatus 30 for decoding distributed video-coded video using error correction including a decoding error detection and correction function according to the present invention.
Referring to FIG. 3, the Wyner-Ziv encoding apparatus 10 according to the present invention includes a key picture encoding unit 12 and a WZ picture encoding unit 11. Further, the decoding apparatus 30 according to the present invention includes a key picture decoding unit 33, a channel code decoding unit 32, a side information generation unit 35, a video reconstruction unit 34 and an error correction unit 36.
The key picture decoding unit 33 reconstructs key pictures using the data received from the key picture encoding unit 12, and the side information generation unit 35 generates side information for a current WZ picture to be reconstructed using the reconstructed key pictures. The channel code decoding unit 32 estimates quantized values using both the side information from the side information generation unit 35 and parity bits received from the Wyner-Ziv encoding apparatus 10. Further, the video reconstruction unit 34 reconstructs the WZ picture using both the quantized values, estimated by the channel code decoding unit 32, and the side information.
Here, the WZ picture (a) reconstructed by the video reconstruction unit 34 is input to the error correction unit 36. The error correction unit 36 detects a location at which a channel code decoding error has occurred in the reconstructed WZ picture (a) using both the side information (b) output from the side information generation unit 35 and the key pictures (c) reconstructed by the key picture decoding unit 33, and corrects the channel code decoding error, thus reproducing the reconstructed WZ picture (d), the video quality of which has been remarkably improved. The channel code decoding unit 32 of FIG. 3 is configured to continuously request and receive parity bits from the Wyner-Ziv encoding apparatus 10 until reliable decoding is possible if it is estimated that decoded quantized values are not sufficiently reliable while performing channel code decoding.
In this case, since only the amount of parity bits required for decoding is received from the Wyner-Ziv encoding apparatus 10, this configuration is efficient from the standpoint of reducing the bit rate. However, this is possible only when a reverse channel (that is, a feedback channel) required to request parity bits is present. In order to overcome this problem, the channel code decoding unit 32 may be configured, depending on the implementation, such that it receives a predetermined number of parity bits in advance at one time, without requesting the parity bits several times, and does not request parity bits through the feedback channel while using the parity bits received in advance.
Even in this case, if decoding reliability is still estimated to be low after the transmitted parity bits have been exhausted in decoding, additional parity bits may be transmitted. It is further possible to implement a construction in which no feedback channel is used at all: the Wyner-Ziv encoding apparatus 10 transmits a predetermined number of parity bits, which have been calculated or preset in advance, to the decoding apparatus 30, and the decoding apparatus 30 never requests parity bits from the encoder.
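The feedback-channel request loop described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation: `request_parity` and `try_decode` are hypothetical callables standing in for the feedback channel and the channel code decoder.

```python
def decode_with_feedback(request_parity, try_decode, max_requests=10):
    """Accumulate parity increments over a feedback channel until the
    channel decoder judges its output reliable (hypothetical sketch)."""
    parity = []
    for n in range(1, max_requests + 1):
        parity.extend(request_parity())        # one increment of parity bits
        reliable, result = try_decode(parity)  # (is decoding reliable?, quantized values)
        if reliable:
            return result, n                   # decoded after n requests
    return None, max_requests                  # reliability never reached
```

In the no-feedback variant described above, the encoder would instead push a preset amount of parity at once and this loop would degenerate to a single `try_decode` call.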
Further, as the channel code used by the channel code decoding unit 32 of FIG. 3, a turbo code or a Low Density Parity Check (LDPC) code, both of which are known to approach the Shannon limit, may preferably be used. Of course, other channel codes with excellent error correction capability, and therefore excellent compression efficiency, may also be used.
FIG. 4 is a diagram showing the construction of the error correction unit 36 according to the present invention. As shown in FIG. 4, the error correction unit 36 of the present invention includes a decoding error detection unit 361 and a decoding error correction unit 362. The decoding error detection unit 361 detects whether a channel code decoding error has occurred at each location of the WZ picture (a) reconstructed by the video reconstruction unit 34; an example of its construction is shown in FIG. 5. The decoding error correction unit 362 corrects a pixel in which an error has occurred using either or both of the spatial and temporal similarities to neighboring information; an example of its construction is shown in FIG. 6.

Referring to FIG. 5, the decoding error detection unit 361 includes a spatial similarity measurement unit 361a for measuring similarity between a reconstructed pixel and its neighboring pixels in the reconstructed WZ picture (a) from the video reconstruction unit 34, a temporal similarity measurement unit 361c for measuring temporal similarity between corresponding pixels of the side information (b) from the side information generation unit 35 and the reconstructed WZ picture (a), and a final detection unit 361b for detecting, according to predetermined criteria, whether a channel code decoding error has occurred at a relevant location in the reconstructed WZ picture (a) based on the measurement results of the spatial similarity measurement unit 361a and the temporal similarity measurement unit 361c. Here, as shown in FIG. 5, the temporal similarity measurement unit 361c may additionally be configured to use the key picture (c) generated by the key picture decoding unit 33.
To measure the similarity between a reconstructed pixel and its neighboring pixels, the spatial similarity measurement unit 361a of the decoding error detection unit 361 can calculate the differences between the value of the reconstructed pixel and the maximum and minimum values of its neighboring pixels of FIG. 7 in the reconstructed WZ picture (a), as shown in the following Equation 1.
[Equation 1]

N[X'(i,j)] = {X'(i-1,j), X'(i+1,j), X'(i,j-1), X'(i,j+1)}

Δmax = |X'(i,j) - max(N[X'(i,j)])|

Δmin = |X'(i,j) - min(N[X'(i,j)])|

where X'(i,j) is the pixel value at location (i,j) in the reconstructed WZ picture (a), and N[X'(i,j)] denotes its spatially neighboring pixel values in the reconstructed WZ picture (a), as shown in FIG. 7.
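As a concrete illustration of Equation 1, the Δmax/Δmin measure can be computed as below. This is a sketch under the assumption of a 4-neighborhood and an interior pixel; border handling is not addressed here.

```python
def spatial_similarity(X, i, j):
    """Equation 1: absolute differences between the reconstructed pixel
    X[i][j] and the maximum / minimum of its four spatial neighbours."""
    n = [X[i - 1][j], X[i + 1][j], X[i][j - 1], X[i][j + 1]]
    d_max = abs(X[i][j] - max(n))   # Delta_max
    d_min = abs(X[i][j] - min(n))   # Delta_min
    return d_max, d_min
```

A pixel that sticks out far above or below all of its neighbours yields large Δmax and Δmin, which the final detection unit of FIG. 5 treats as evidence of a decoding error.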
Meanwhile, the temporal similarity measurement unit 361c of the decoding error detection unit 361 calculates |X'(i,j) - Y(i,j)|, which is the difference between the values of corresponding pixels of the side information (b) and the reconstructed WZ picture (a). Here, Y(i,j) is the pixel value at location (i,j) in the side information (b).
The above equations used to measure the spatial and temporal similarities are only one example; the measurement of spatial or temporal similarity may also be approached in various other ways. The final detection unit 361b finally detects whether a channel code decoding error has occurred on the basis of either or both of the temporal and spatial similarities. For example, the final detection unit 361b compares the calculated |X'(i,j) - Y(i,j)| with a predetermined threshold value 'A', which is a criterion value for error detection, and determines that there is a high probability that a decoding error has occurred in the relevant pixel when the difference is greater than the threshold value 'A'. This is based on the general knowledge that, since the similarity between the side information (b) and the current WZ picture is sufficiently high, there is a high probability of a decoding error in a pixel whose value in the reconstructed WZ picture (a) differs from the corresponding pixel value of the side information (b) by more than a predetermined threshold. Further, for a pixel having such a high decoding error occurrence probability, if Δmax or Δmin of Equation 1 is greater than a predetermined threshold value 'B', which is a criterion value for error detection, it is determined that a decoding error has occurred at the location of that pixel. Various methods may be used to implement the decoding error detection unit, and the present invention is not limited to any of the aforementioned methods.

Referring to FIG. 6, the decoding error correction unit 362 performs channel decoding error correction at each location (e) detected to have a decoding error, using all or part of the information shown in FIG. 8.
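The two-threshold detection rule just described can be sketched as a small predicate. This is illustrative only; the thresholds A and B and the similarity values are assumed to be given.

```python
def is_decoding_error(x_rec, y_side, d_max, d_min, A, B):
    """Flag a pixel as a likely channel decoding error when its temporal
    difference to the side information exceeds threshold A and, in
    addition, Delta_max or Delta_min of Equation 1 exceeds threshold B."""
    if abs(x_rec - y_side) <= A:
        return False                # temporally consistent: keep the pixel
    return d_max > B or d_min > B   # spatially inconsistent as well: flag it
```

Both conditions must hold: a large temporal difference alone may simply mean the side information is locally poor, so the spatial check of Equation 1 is used as confirmation.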
When a value used to correct the channel decoding error is calculated, the information may be organized in any of the forms of Cases 1 to 7 of FIG. 8 or a combination thereof; this form determines the input data required by the decoding error correction unit. The decoding error correction unit includes a spatial candidate estimation unit 362a for estimating a suitable candidate value based on the spatial similarity to neighboring pixels, a temporal candidate estimation unit 362c for estimating a suitable candidate value based on the temporal similarity between the reconstructed key picture (c) and the reconstructed WZ picture (a), and a final correction unit 362b for correcting the pixel in which the error has occurred using the estimated candidate values.
As shown in FIG. 8, the decoding error correction unit 362 may be constructed in various forms depending on which reference picture is used to estimate the correction value. For example, when Case 1 of FIG. 8, in which only spatial candidate values are considered, is used, the temporal candidate estimation unit 362c may be omitted from the construction. An example of the spatial candidate estimation unit 362a estimates the most suitable candidate value to be the median of the current pixel and its neighboring pixels using the following Equation 2; this corresponds to Case 1 of FIG. 8. The use of the median of Equation 2 is not the only possible method, and various other functions may be used according to the user. X''(i,j) in Equation 2 is the pixel value of the WZ picture (d), which is the corrected reconstructed picture.

[Equation 2]

X''(i,j) = median{X'(i,j), X'(i-1,j), X'(i+1,j), X'(i,j-1), X'(i,j+1)}
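A minimal sketch of Equation 2's median over the flagged pixel and its four neighbours (interior pixels only; border handling is omitted):

```python
from statistics import median

def spatial_candidate(X, i, j):
    """Equation 2: candidate value = median of the current pixel and its
    four spatial neighbours in the reconstructed WZ picture."""
    window = [X[i][j], X[i - 1][j], X[i + 1][j], X[i][j - 1], X[i][j + 1]]
    return median(window)
```

A single corrupted pixel is thereby pulled back toward its neighbourhood, since four of the five window values come from its (presumed correct) neighbours.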
The temporal candidate estimation unit 362c of FIG. 6 may be constructed to additionally use the side information (b) generated by the side information generation unit 35 according to the implementation, as indicated by a dotted line in FIG. 6.
Further, the spatial candidate estimation unit 362a and the temporal candidate estimation unit 362c may operate in a parallel scheme, so that the final correction unit 362b can use the results of both estimation units together, or in a sequential scheme, so that one candidate estimation unit uses the results of the other. For the purpose of simplifying the apparatus and the like, a scheme using only one type of candidate estimation unit is also possible, and thus various schemes may be taken into consideration. As shown in FIG. 8, the temporal candidate estimation unit 362c estimates the candidate value most similar to the value at the relevant location of the reconstructed WZ picture (a) through motion estimation that uses one or more reconstructed key pictures c1, c2 and c3 as reference pictures. The reference pictures used in motion estimation may be key pictures temporally previous to the WZ picture currently being corrected and reconstructed (called forward estimation, corresponding to Case 3 of FIG. 8), key pictures temporally subsequent to the WZ picture (called backward estimation, corresponding to Case 4 of FIG. 8), or both the previous and subsequent key pictures (called bidirectional estimation, corresponding to Cases 5 and 7 of FIG. 8).
Here, the location in the selected key picture of the reference image that is most similar to the current image of the reconstructed WZ picture (a) is estimated. Forward, backward and bidirectional motion estimation are performed in this way, so that three candidate locations having the highest similarity in the respective directions are found, and among them the one having the lowest block matching error, such as the Sum of Absolute Differences (SAD), is selected as the best candidate location. It is also possible to adopt a scheme of combining all or part of the three candidate values and generating a suitable candidate value through a predetermined procedure, without selecting one of the three candidate locations found in the above procedure. Further, to reduce the computational load of motion estimation, a scheme of performing only one of forward and backward motion estimation to estimate a candidate value may be implemented. In addition, various other schemes for motion estimation can be considered, and the scope of the present invention therefore includes all such schemes.
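The SAD-based selection among the directional candidates can be sketched as follows. This is a simplification: the block search itself is omitted, and the forward, backward and bidirectional best matches are assumed to be already extracted as candidate blocks.

```python
def sad(a, b):
    """Sum of Absolute Differences between two equal-size 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def best_temporal_candidate(current, candidates):
    """Keep, among the per-direction best matches, the candidate block
    with the lowest block-matching error against the current block."""
    return min(candidates, key=lambda c: sad(current, c))
```

In a full implementation, `current` would be a block around the flagged location in the reconstructed WZ picture (a), and each candidate would be the best match found in one search direction over the key pictures.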
The final correction unit 362b finally corrects the value of each location, at which an error is detected to have occurred in the reconstructed WZ picture (a), to the estimated candidate value.
To further improve the accuracy of error correction through the aforementioned temporal candidate estimation, the decoding error correction unit 362 of FIG. 4 may be constructed as shown in FIG. 9. Referring to FIG. 9, a decoding error correction unit 362' first estimates a correction candidate value using spatial correlation and corrects the value at the error occurrence location in a spatial candidate estimation and correction unit 362'a, similar to the process performed by the spatial candidate estimation unit 362a of FIG. 6. Thereafter, a temporal candidate value is estimated by a motion estimation unit 362'b through the aforementioned motion estimation, and a final correction unit 362'c corrects the value at the error occurrence location again using the estimated temporal candidate value. Since improved error correction is already achieved by the spatial candidate estimation and correction unit 362'a, and motion estimation is performed on this error-corrected picture, a more accurate temporal candidate value can be estimated, thus further improving video quality.
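The sequential FIG. 9 pipeline (spatial correction first, then temporal re-correction on the spatially corrected picture) can be sketched as below; `spatial_fix` and `temporal_fix` are hypothetical callables standing in for units 362'a and 362'b/362'c, not names from the patent.

```python
def correct_sequentially(picture, error_locs, spatial_fix, temporal_fix):
    """First pass: spatial candidates at every flagged location.
    Second pass: temporal candidates estimated from the already
    spatially corrected picture, as in FIG. 9."""
    for i, j in error_locs:
        picture[i][j] = spatial_fix(picture, i, j)
    for i, j in error_locs:                          # motion estimation now sees
        picture[i][j] = temporal_fix(picture, i, j)  # spatially corrected values
    return picture
```

The design point is the ordering: because the second pass runs on a picture whose flagged pixels were already spatially corrected, the motion search matches against cleaner data and yields a better temporal candidate.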
Hereinafter, with reference to FIGS. 10 to 13, a method of detecting and correcting the channel decoding error of video coded through DVC, using error correction according to the present invention, will be described in detail. A channel decoding error detection step S10 may be implemented using various schemes, and an embodiment thereof is shown in FIG. 11. Referring to FIG. 11, the similarity between each pixel and its neighboring pixels in the reconstructed WZ picture (a) is calculated at a spatial similarity measurement step S11. The spatial similarity is measured by calculating the differences between the value of each pixel and the maximum and minimum values of its neighboring pixels in the reconstructed WZ picture (a), as shown in the above Equation 1.
At a temporal similarity measurement step S12, the difference between the values of the corresponding pixels of the side information (b) and the reconstructed WZ picture (a) is calculated as the temporal similarity.
The aforementioned calculation of differences between values to measure the similarity is only an example, and other calculation schemes may also be used.
At an error occurrence detection step S13, the spatial and temporal similarities calculated at steps S11 and S12 are processed as described above: if the measured temporal similarity value is greater than a predetermined threshold value A and, at the same time, the measured spatial similarity value, that is, one of Δmax and Δmin of Equation 1, is greater than a predetermined threshold value B, it is determined that a decoding error has occurred in the relevant pixel. In the embodiment of the channel decoding error detection step S10 shown in FIG. 11, both the spatial and temporal similarities are used at the error occurrence detection step S13. However, depending on the application, the error occurrence detection step S13 may consider only one of the spatial and temporal similarities, as described above.
At the channel decoding error correction step S30 of FIG. 10, channel decoding error correction is performed at each location at which a channel decoding error was detected at the channel decoding error detection step S10, using all or part of the information shown in FIG. 8.
When a value used to correct the channel decoding error is estimated, this information may be organized in any of the forms of Cases 1 to 7 of FIG. 8 or a combination thereof; this form determines the data required at the decoding error correction step S30. For channel decoding error correction, as shown in FIG. 12, a spatial candidate estimation step S31 of estimating a suitable candidate value based on the spatial similarity to the neighboring pixels, a temporal candidate estimation step S32 of estimating a suitable candidate value based on the temporal similarity between the reconstructed key picture (c) and the reconstructed WZ picture (a), and a final correction step S33 of correcting the pixel in which the error has occurred using the estimated candidate values are performed. At the channel decoding error correction step S30, various types of data can be used depending on which reference picture is used to estimate the correction value, as shown in FIG. 8. When Case 1 of FIG. 8, in which only a spatial candidate value is considered, is used, the temporal candidate estimation step S32 may be omitted from the channel decoding error correction step S30 of FIG. 12. Meanwhile, the spatial candidate estimation step S31 and the temporal candidate estimation step S32 may be performed either in parallel or sequentially depending on the user's implementation, and only one of steps S31 and S32 may be performed for simplicity.
An embodiment of the spatial candidate estimation step S31 estimates a suitable candidate value to be the median of the values of the current pixel and its neighboring pixels of FIG. 7 through the use of Equation 2; this corresponds to Case 1 of FIG. 8. Here, the use of the median of Equation 2 is not the only possible method, and various other functions may be used according to the user.
According to the user's implementation, the spatial candidate estimation step S31 and the temporal candidate estimation step S32 may be performed in parallel before the final correction step S33. Alternatively, the candidate estimation steps may be performed sequentially, so that the results of one are used by the other. It is also possible to use only one type of candidate estimation step for the purpose of simplifying the implementation or the like.
At the temporal candidate estimation step S32, a candidate value which is most similar to the value of the relevant location of the reconstructed WZ picture (a) is estimated through motion estimation performed by using one or more reconstructed key pictures (c) as a reference picture, as shown in FIG. 8.
The reference picture used in motion estimation may be a key picture temporally previous to the WZ picture currently being corrected and reconstructed, a key picture temporally subsequent to the WZ picture, or both the previous and subsequent key pictures, as described above. In this case, the location in the selected key picture of the reference picture most similar to the current picture of the reconstructed WZ picture (a) is estimated. Forward, backward and bidirectional motion estimation are performed in this way, so that three candidate locations having the highest similarity in the respective directions are found, and among them the one having the lowest block matching error, such as the Sum of Absolute Differences (SAD), is selected as the best candidate location. It is also possible to adopt a scheme of combining all or part of the three candidate values and generating a suitable candidate value through a predetermined procedure, without selecting one of the three candidate values found in the above procedure. Further, to reduce the computational load of motion estimation, a scheme of performing only one of forward and backward motion estimation to estimate a candidate value may be implemented. In addition, various other schemes for motion estimation can be considered, and the scope of the present invention therefore includes all such schemes.
At the final correction step S33, the value at each location at which the error is detected to have occurred in the reconstructed WZ picture (a) is finally corrected to the estimated candidate value.
To further improve the accuracy of error correction through the aforementioned temporal candidate estimation, the channel decoding error correction step S30 of FIG. 10 may be performed as shown in FIG. 13. Referring to FIG. 13, a correction candidate value is estimated using spatial correlation and the value at the error occurrence location is corrected at a spatial candidate estimation and correction step S31a, similar to the process performed at the spatial candidate estimation step S31 of FIG. 12. Thereafter, a temporal candidate value is estimated through the aforementioned motion estimation at a motion estimation step S31b, and at a final correction step S31c the value at the error occurrence location is corrected again using the estimated temporal candidate value. Since improved error correction is already achieved by the spatial candidate estimation and correction step S31a, and motion estimation is additionally performed on the error-corrected value, a more accurate temporal candidate value can be estimated, and thus video quality can be further improved.
In the embodiments of the present invention, the description has assumed that the entire encoding and decoding process, including quantization, is performed in the pixel domain, but the gist of the present invention may equally be applied to the case where encoding and decoding are performed in the transform domain. When the present invention is applied to the transform domain, a transform unit 37 and an inverse transform unit 38 may be additionally provided, as shown in FIG. 14.
Therefore, references to a pixel or a relevant location in the description of the present invention may be regarded as references to a transform coefficient in a transform domain, such as an integer transform, a Discrete Cosine Transform (DCT) or a wavelet transform, according to the implementation of the present invention. In that case, the transform unit 37 and the inverse transform unit 38 are added to the construction of FIG. 3, and the representation of a pixel used in the description of the present invention may be implemented as the representation of a transform coefficient.
As described above, the method and apparatus for correcting a decoding error in Wyner-Ziv coding according to the embodiment of the present invention may be implemented. Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications are possible, without departing from the gist of the invention. Therefore, the scope of the present invention should be defined by the accompanying claims and equivalents thereof, rather than the aforementioned embodiments.

[CLAIMS]
[Claim 1]
An apparatus for decoding distributed video-coded video using error correction, comprising: a key picture decoding unit for reconstructing at least one key picture transmitted from an encoding apparatus; a side information generation unit for generating side information using the key picture reconstructed by the key picture decoding unit; a channel code decoding unit for estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information; a video reconstruction unit for reconstructing a Wyner-Ziv (WZ) picture using both the quantized value, estimated by the channel code decoding unit, and the side information; and an error correction unit for detecting whether a channel code decoding error has occurred in the reconstructed WZ picture using the side information and the key picture reconstructed by the key picture decoding unit, and correcting the error in the reconstructed WZ picture on a basis of picture similarity.
[Claim 2]
The apparatus according to claim 1, wherein the error correction unit comprises: a decoding error detection unit for detecting a correction target pixel in which the channel code decoding error has occurred, among pixels of the reconstructed WZ picture, based on the side information and the reconstructed key picture; and a decoding error correction unit for correcting the correction target pixel based on similarity between the correction target pixel detected by the decoding error detection unit, and pixels temporally and/or spatially corresponding to and/or neighboring the correction target pixel.
[Claim 3]
The apparatus according to claim 2, wherein the decoding error detection unit comprises: at least one of a spatial similarity measurement unit for measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a temporal similarity measurement unit for measuring temporal similarity between the reconstructed WZ picture and the side information; and a final detection unit for comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value, and detecting a correction target pixel if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
[Claim 4]
The apparatus according to claim 3, wherein the spatial similarity measurement unit measures the spatial similarity using differences between values of the specific pixel and the neighboring pixels.
[Claim 5]
The apparatus according to claim 3, wherein the temporal similarity measurement unit measures the temporal similarity using differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
[Claim 6]
The apparatus according to claim 2, wherein the decoding error correction unit comprises: at least one of a spatial candidate estimation unit for estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a temporal candidate estimation unit for estimating a temporal candidate value based on temporal similarity between the correction target pixel and a corresponding pixel in the reconstructed key picture; and a final correction unit for correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
[Claim 7]
The apparatus according to claim 6, wherein the spatial candidate estimation unit estimates the spatial candidate value to be a median value between the values of the correction target pixel and the neighboring pixels.
[Claim 8]
The apparatus according to claim 6, wherein the temporal candidate estimation unit estimates the temporal candidate value through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
[Claim 9]
A method of decoding distributed video-coded video using error correction, comprising steps of:
(a) reconstructing a key picture transmitted from an encoding apparatus;
(b) generating side information using the reconstructed key picture;
(c) estimating a quantized value using both parity bits transmitted from the encoding apparatus and the side information;
(d) reconstructing a Wyner-Ziv (WZ) picture (to be decoded) using both the estimated quantized value and the side information;
(e) detecting whether a channel code decoding error has occurred in the WZ picture using both the side information and the key picture reconstructed by a key picture decoding unit; and
(f) correcting the error in the reconstructed WZ picture on a basis of picture similarity.
[Claim 10]
The method according to claim 9, wherein step (e) comprises: at least one of a step of measuring spatial similarity between a specific pixel and its neighboring pixels in the reconstructed WZ picture and a step of measuring temporal similarity between the reconstructed WZ picture and the side information; a step of comparing at least one of the spatial similarity and the temporal similarity with a preset threshold value; and a step of detecting a correction target pixel, in which a channel code decoding error has occurred, if at least one of the spatial similarity and the temporal similarity is greater than or less than the threshold value.
[Claim 11]
The method according to claim 10, wherein the spatial similarity is measured based on differences between values of the specific pixel and the neighboring pixels.
[Claim 12]
The method according to claim 10, wherein the temporal similarity is measured based on differences between values of corresponding pixels of the reconstructed WZ picture and the side information.
[Claim 13]
The method according to claim 10, wherein step (f) comprises: at least one of a step of estimating a spatial candidate value based on spatial similarity between the correction target pixel and its neighboring pixels and a step of estimating a temporal candidate value based on temporal similarity between the correction target pixel and a corresponding pixel in the reconstructed key picture; and a step of correcting the correction target pixel using at least one of the temporal candidate value and the spatial candidate value.
[Claim 14]
The method according to claim 13, wherein the spatial candidate value is estimated to be a median value between the values of the correction target pixel and the neighboring pixels in the reconstructed WZ picture.
[Claim 15]
The method according to claim 13, wherein the temporal candidate value is estimated through motion estimation for the reconstructed WZ picture by using at least one key picture reconstructed by the key picture decoding unit as a reference picture.
PCT/KR2008/006403 2007-11-02 2008-10-30 Apparatus and method of decompressing distributed video coded video using error correction WO2009057956A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2007-0111443 2007-11-02
KR1020070111443A KR100915097B1 (en) 2007-11-02 2007-11-02 Apparatus and method of decompressing distributed video coded video using error correction

Publications (1)

Publication Number Publication Date
WO2009057956A1 true WO2009057956A1 (en) 2009-05-07

Family

ID=40591249

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2008/006403 WO2009057956A1 (en) 2007-11-02 2008-10-30 Apparatus and method of decompressing distributed video coded video using error correction

Country Status (2)

Country Link
KR (1) KR100915097B1 (en)
WO (1) WO2009057956A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860748A (en) * 2010-04-02 2010-10-13 西安电子科技大学 Side information generating system and method based on distribution type video encoding
GB2487078A (en) * 2011-01-07 2012-07-11 Canon Kk Improved reconstruction of at least one missing area of a sequence of digital images
CN107027051A (en) * 2016-07-26 2017-08-08 中国科学院自动化研究所 A kind of video key frame extracting method based on linear dynamic system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011010895A2 (en) * 2009-07-23 2011-01-27 성균관대학교산학협력단 Apparatus and method for decoding an image encoded with distributed video coding

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050268200A1 (en) * 2004-06-01 2005-12-01 Harinath Garudadri Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques
US20060045184A1 (en) * 2004-08-27 2006-03-02 Anthony Vetro Coding correlated images using syndrome bits
US20070160144A1 (en) * 2006-01-06 2007-07-12 International Business Machines Corporation Systems and methods for visual signal extrapolation or interpolation
US20070253479A1 (en) * 2006-04-30 2007-11-01 Debargha Mukherjee Robust and efficient compression/decompression providing for adjustable division of computational complexity between encoding/compression and decoding/decompression

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100877127B1 (en) * 2004-06-01 2009-01-09 퀄컴 인코포레이티드 Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050268200A1 (en) * 2004-06-01 2005-12-01 Harinath Garudadri Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques
US20060045184A1 (en) * 2004-08-27 2006-03-02 Anthony Vetro Coding correlated images using syndrome bits
US20070160144A1 (en) * 2006-01-06 2007-07-12 International Business Machines Corporation Systems and methods for visual signal extrapolation or interpolation
US20070253479A1 (en) * 2006-04-30 2007-11-01 Debargha Mukherjee Robust and efficient compression/decompression providing for adjustable division of computational complexity between encoding/compression and decoding/decompression

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101860748A (en) * 2010-04-02 2010-10-13 西安电子科技大学 Side information generating system and method based on distributed video coding
CN101860748B (en) * 2010-04-02 2012-02-08 西安电子科技大学 Side information generating system and method based on distributed video coding
GB2487078A (en) * 2011-01-07 2012-07-11 Canon Kk Improved reconstruction of at least one missing area of a sequence of digital images
GB2487078B (en) * 2011-01-07 2014-08-06 Canon Kk Improved reconstruction of at least one missing area of a sequence of digital images
CN107027051A (en) * 2016-07-26 2017-08-08 中国科学院自动化研究所 Video key frame extraction method based on a linear dynamic system
CN107027051B (en) * 2016-07-26 2019-11-08 中国科学院自动化研究所 Video key frame extraction method based on a linear dynamic system

Also Published As

Publication number Publication date
KR20090045558A (en) 2009-05-08
KR100915097B1 (en) 2009-09-02

Similar Documents

Publication Publication Date Title
Brites et al. Evaluating a feedback channel based transform domain Wyner–Ziv video codec
JP4927875B2 (en) Method and apparatus for error resilience algorithm in wireless video communication
Artigas et al. The DISCOVER codec: architecture, techniques and evaluation
US20100177893A1 (en) Distributed video decoder and distributed video decoding method
US8259798B2 (en) Distributed video encoder and decoder and distributed video decoding method
US8446949B2 (en) Distributed coded video decoding apparatus and method capable of successively improving side information on the basis of reliability of reconstructed data
US20100166057A1 (en) Differential Data Representation for Distributed Video Coding
EP2036360A1 (en) Method, apparatus and system for robust video transmission
WO2009057956A1 (en) Apparatus and method of decompressing distributed video coded video using error correction
Martins et al. Statistical motion learning for improved transform domain Wyner–Ziv video coding
Pereira et al. Studying the GOP size impact on the performance of a feedback channel-based Wyner-Ziv video codec
KR101639434B1 (en) Wyner-Ziv coding and decoding system and method
Ko et al. Wyner-Ziv coding with spatio-temporal refinement based on successive turbo decoding
Brites et al. Distributed video coding: bringing new applications to life
Min et al. Distributed video coding based on adaptive slice size using received motion vectors
KR100969135B1 (en) Apparatus and method of decompressing distributed coded video with successively improving side information according to the reliability of the decoded data
Chien et al. Rate-distortion based selective decoding for pixel-domain distributed video coding
Tonoli et al. Error resilience in current distributed video coding architectures
US9088778B2 (en) Method and system for multiview distributed video coding with adaptive syndrome bit rate control
Chien et al. Transform-domain distributed video coding with rate–distortion-based adaptive quantisation
Lei et al. Study for distributed video coding architectures
Chien et al. Bitplane selective distributed video coding
Huo et al. Iterative two-dimensional error concealment for low-complexity wireless video uplink transmitters
Ascenso et al. Augmented LDPC graph for distributed video coding with multiple side information
Dogan et al. Video transmission over mobile satellite systems

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 08845510

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 08845510

Country of ref document: EP

Kind code of ref document: A1