WO1998041028A1 - Error concealment for video image - Google Patents

Error concealment for video image

Info

Publication number
WO1998041028A1
Authority
WO
WIPO (PCT)
Prior art keywords
macroblock
current
motion vector
vector
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US1998/004497
Other languages
English (en)
French (fr)
Inventor
Taner Ozcelik
Gong-San Yu
Shirish C. Gadre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Electronics Inc
Original Assignee
Sony Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Electronics Inc filed Critical Sony Electronics Inc
Priority to DE19882177T priority Critical patent/DE19882177T1/de
Priority to JP53964698A priority patent/JP2001514830A/ja
Priority to KR1019997007831A priority patent/KR100547095B1/ko
Priority to AU63478/98A priority patent/AU6347898A/en
Publication of WO1998041028A1 publication Critical patent/WO1998041028A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY · H04 ELECTRIC COMMUNICATION TECHNIQUE · H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H04N19/895 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/56 Motion estimation with initialisation of the vector search, e.g. estimating a good candidate to initiate a search
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo

Definitions

  • the present invention relates generally to video encoding and decoding and, in particular, to methods and apparatus for error concealment in video encoding and decoding.
  • MPEG1 was designed for storing and distributing audio and motion video, with emphasis on video quality. Its features include random access, fast forward and reverse playback. MPEG1 serves as the basis for video compact disks and for many video games. The original channel bandwidth and image resolution for MPEG1 were established based upon the recording media then available. The goal of MPEG1 was the reproduction of recorded digital audio and video using a 12 centimeter diameter optical disc with a bit rate of 1.416 Mbps, 1.15 Mbps of which are allocated to video.
  • The compressed bit streams generated under the MPEG1 standard implicitly define the decompression algorithms to be used for such bit streams. The compression algorithms, however, can vary within the specifications of the MPEG1 standard, thereby allowing the possibility of a proprietary advantage in regard to the generation of compressed bit streams.
  • A later-developed standard known as "MPEG2" extends the basic concepts of MPEG1 to cover a wider range of applications.
  • MPEG2 may also be useful for other applications, such as the storage of full length motion pictures on Digital Video Disk ("DVD") optical discs, with resolution at least as good as that presently provided by 12 inch diameter laser discs.
  • the MPEG2 standard relies upon three types of coded pictures. I (“intra”) pictures are fields or frames coded as a standalone still image. Such I pictures allow random access points within a video stream. As such, I pictures should occur about two times per second. I pictures should also be used where scene cuts (such as in a motion picture) occur.
  • P ("predicted") pictures are fields or frames coded relative to the nearest previous I or P picture, resulting in forward prediction processing. P pictures allow more compression than I pictures through the use of motion compensation, and also serve as a reference for B pictures and future P pictures.
  • B ("bidirectional") pictures are fields or frames that use the closest (with respect to display order) past and future I or P pictures as references, resulting in bidirectional prediction. B pictures provide the most compression and increase signal to noise ratio by averaging two pictures.
  • a group of pictures (“GOP") is a series of one or more coded pictures which assist in random accessing and editing.
  • A GOP value is configurable during the encoding process. The smaller the GOP value, the closer together the I pictures are and the better the response to movement; the level of compression, however, is lower.
  • In a coded bitstream, a GOP must start with an I picture and may be followed by any number of I, P or B pictures in any order.
  • In display order, a GOP must start with an I or B picture and end with an I or P picture. Thus, the smallest GOP size is a single I picture, with the largest size being unlimited.
  • Figure 1 illustrates a simplified block diagram of an MPEG2 encoder 100.
  • a video stream consisting of macroblock information and motion compensation information is provided to both a discrete cosine transformer 102 and a motion vector generator 104.
  • Each 8 x 8 block (of pixels or error terms) is processed by the discrete cosine transformer 102 to generate an 8 x 8 block of horizontal and vertical frequency coefficients.
  • the quantizer 106 quantizes the 8 x 8 block of frequency-domain error coefficients, thereby limiting the number of allowed values.
  • The output of quantizer 106 is processed by a zigzag scanner 108 which, starting with the DC component, generates a linear stream of quantized frequency coefficients arranged in order of increasing frequency. This produces long runs of consecutive zero coefficients, which are sent to the variable length encoder 110.
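The zigzag readout described above can be sketched in a few lines of Python; this is an illustrative reconstruction of the standard scan order, not code from the patent, and the function names are mine.

```python
def zigzag_order(n=8):
    """Return the (row, col) visiting order of an n x n zigzag scan."""
    order = []
    # Coefficients on the same anti-diagonal share the same frequency sum;
    # the scan alternates direction on each diagonal, starting at DC (0, 0).
    for s in range(2 * n - 1):
        diagonal = [(r, s - r) for r in range(n) if 0 <= s - r < n]
        order.extend(diagonal if s % 2 else reversed(diagonal))
    return order

def zigzag_scan(block):
    """Flatten an 8 x 8 coefficient block into the linear zigzag stream."""
    return [block[r][c] for r, c in zigzag_order(len(block))]
```

Scanning in this order groups the high-frequency coefficients, which are mostly zero after quantization, at the end of the stream, producing the long zero runs the text mentions.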
  • The linear stream of quantized frequency-domain error coefficients is first run-length encoded by the variable length encoder 110.
  • the linear stream of quantized frequency-domain error coefficients is converted into a series of run-amplitude (or run-level) pairs. Each pair indicates the number of zero coefficients and the amplitude of the non-zero coefficient which ends the run.
  • After the variable length encoder 110 run-length encodes the coefficients into run-level pairs, it then Huffman encodes those pairs.
  • The run-level pairs are coded differently depending upon whether the run-level pair is included in a list of commonly-occurring run-level pairs. If the run-level pair being Huffman encoded is on the list of commonly-occurring pairs, then it is encoded into a predetermined variable length code word which corresponds to the run-level pair. If, on the other hand, the run-level pair is not on the list, then it is encoded as a predetermined symbol (such as an escape symbol) followed by fixed length codes, to avoid long code words and to reduce the cost of implementation.
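The run-amplitude pairing can be sketched as below; this is a hedged sketch, not the encoder's actual implementation, and the Huffman table lookup, escape coding, and end-of-block code are only noted in comments.

```python
def to_run_level_pairs(coefficients):
    """Convert a linear coefficient stream into (run, level) pairs:
    each pair is the count of preceding zeros plus the non-zero value."""
    pairs, run = [], 0
    for value in coefficients:
        if value == 0:
            run += 1
        else:
            pairs.append((run, value))
            run = 0
    # A trailing run of zeros is not emitted; a real encoder signals it
    # with an end-of-block code, and each pair would then be looked up in
    # a Huffman table (or escape-coded if not on the common-pair list).
    return pairs
```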
  • a predetermined symbol such as an escape symbol
  • The run-length encoded and Huffman encoded output of the variable length encoder 110 provides a coded video bitstream.
  • Picture type determination circuit 112 determines whether the frame being encoded is a P picture, an I picture or a B picture. In the case of a P or I picture, picture type determination circuit 112 causes the motion vector generator 104 to generate an appropriate motion vector which is then provided to variable length encoder 110. Such motion vector is then coded and combined with the output of variable length encoder 110.
  • Motion compensation improves compression of P and B pictures by removing temporal redundancies between pictures.
  • In MPEG2, motion compensation operates at the macroblock level.
  • A previous frame 200 contains, among other macroblocks, a macroblock 202 consisting of 16 pixels (also referred to as "pels") by 16 lines.
  • Motion compensation relies on the fact that, except for scene cuts, most images remain in the same location from frame to frame, whereas others move only a short distance.
  • a macroblock 300 of a current frame 302 can be represented by the macroblock 202 (of Figure 2) as modified by a two dimensional motion vector 304. It is to be understood that the macroblock 300 may or may not be within the same boundaries surrounding macroblock 202 in the previous frame 200.
  • After a macroblock has been compressed using motion compensation, it contains both the prediction (commonly referred to as "motion vectors") and the temporal difference (commonly referred to as "error terms") between the reference macroblock and the macroblock being coded.
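The relationship can be illustrated with a short sketch: the decoder forms a prediction by copying the 16 x 16 reference region displaced by the motion vector, then adds the decoded error terms. The array layout and names are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def reconstruct_macroblock(reference_frame, top, left, motion_vector, error_terms):
    """Predict a 16 x 16 macroblock from the reference frame, displaced by
    the motion vector, and add the decoded error terms (residual)."""
    dy, dx = motion_vector  # vertical and horizontal displacement in pixels
    prediction = reference_frame[top + dy : top + dy + 16,
                                 left + dx : left + dx + 16]
    return prediction + error_terms
```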
  • The decoded video bit stream is, generally, sufficiently error free so as to not require additional techniques to compensate for errors in the decoded video bit stream.
  • a coded video bit stream is typically referred to as a "program stream.”
  • When the coded video bitstream output from variable length encoder 110 is transported by, for example, satellite or cable transmission systems, either directly from variable length encoder 110 or from a recording medium onto which the coded video bitstream has been recorded, the probability of errors in the decoded video bitstream increases.
  • Such a coded bitstream is typically referred to as a "transport stream."
  • Error concealment aims to generate data which can be substituted for the lost or corrupt data, where any discrepancies in the image created by the generated data (generally at the macroblock level) are not likely to be perceived by a viewer of a video image which relies upon such error concealment.
  • an apparatus for concealing errors includes a detector for detecting the presence of an error in data representing the current macroblock, a system for estimating the at least one motion vector based upon a difference between a forward reference frame at the current macroblock and a decoded motion vector for the forward reference frame at the current macroblock, and a system for estimating the current macroblock based upon the estimated at least one motion vector.
  • a method for concealing errors includes the steps of detecting the presence of an error in data representing the current macroblock, estimating the at least one motion vector based upon a difference between a forward reference frame at the current macroblock and a decoded motion vector for the forward reference frame at the current macroblock, and estimating the current macroblock based upon the estimated at least one motion vector.
  • Figure 1 is a simplified block diagram of an MPEG2 video encoder.
  • Figure 2 is an illustration of a macroblock within a previous frame.
  • Figure 3 is an illustration of a macroblock within a current frame.
  • Figure 4 is a simplified block diagram of an MPEG2 video decoder of the present invention.
  • Figure 5 is a block diagram of a motion compensation system of the present invention.
  • Figure 6 is a state diagram which illustrates reference block fetch control of the address generation and control unit of Figure 5.
  • Figure 7 is a flow chart of a method for estimating macroblocks in accordance with the present invention.
  • Figure 8 is a flow chart of a method for estimating motion vectors in the temporal domain in accordance with the present invention.
  • Figure 9 is a flow chart of a method for estimating motion vectors in the spatial domain in accordance with the present invention.
  • Figure 10 is a flow chart of a method for macroblock estimation utilizing estimated motion vectors.
  • Figure 11 is a flow chart of a method for macroblock estimation without the use of estimated motion vectors.
  • Decoder 400 utilizes two internal busses, a GBUS 402 and an RBUS 404.
  • GBUS 402 is a 64 bit bus which is utilized for data transfer between DRAM 406 and specific blocks of decoder 400 which are described below.
  • DRAM 406 is a synchronous dynamic random access memory, although other types of memories may be utilized.
  • RBUS 404 is an 8 bit bus used primarily for control of specific blocks through reduced instruction set computing ("RISC") CPU 408.
  • CPU 408, which is coupled to both GBUS 402 and RBUS 404, operates to control the functionality of specific blocks, as more particularly described below, as well as performing a portion of video bitstream decoding.
  • Decoder 400 includes a demultiplexer 410 which is coupled to both GBUS 402 and RBUS 404.
  • A video decoder 412, an audio decoder 414, a host interface 416, a letter box unit 418, and a sub picture/vertical blanking interval decoder 420 are each coupled to both GBUS 402 and RBUS 404.
  • Audio clock generator 428 outputs a clock signal ACLK.
  • a memory controller 430 is coupled to GBUS 402.
  • A clock generator 432, which provides a clock signal SCLK, is coupled to host interface 416.
  • An output of letter box unit 418 is provided to video post filter on screen display system 426.
  • Sub picture/vertical blanking interval decoder 420 is coupled to video post filter on screen display system 426, which system provides its output to NTSC/PAL encoder 424.
  • A host processor 434 interfaces with host interface 416.
  • Sub picture/vertical blanking interval decoder 420 and letter box unit 418 are hardwired units.
  • Letter box unit 418 performs a 4-tap vertical filtering and sub-sampling of a video bit stream provided through GBUS 402 and operates to control the video post filter/on screen display system 426.
  • Sub picture/vertical blanking interval decoder 420 operates to decode sub picture (“SP") and vertical blanking interval ("VBI") information in the video bit stream.
  • a sub picture bitstream consists of subtitles or menu items. For example, this would include karaoke and menu highlighting.
  • the functionality for decoding both types of bitstreams is incorporated into a single sub picture/vertical blanking interval decoder 420.
  • decoding of the VBI bit stream occurs during the vertical blanking period, while SP bitstream decoding occurs during active display periods.
  • In one embodiment, the sub picture/vertical blanking interval decoder 420 also decodes and displays on screen display ("OSD") bitstreams.
  • In another embodiment, OSD bitstreams are instead decoded by video post filter on screen display system 426.
  • RISC CPU 408 operates to parse the video bitstream in order to control the decoder 400. RISC CPU 408 also partially decodes the video bitstream (for example, decoding of top-level data such as headers) and controls various of the other units within decoder 400 through RBUS 404. A portion of the parsing is also performed by sub picture/vertical blanking interval decoder 420.
  • RISC CPU 408 can be utilized to change the position of an SP window through RBUS 404.
  • A user can move the SP window up or down through a command to CPU 408 with a Y coordinate as a parameter.
  • Letter box unit 418 is essentially a vertical decimation filter with downloadable coefficients. Letter box unit 418 operates to decimate an active area of a frame which has a ratio of 4:3.
  • For example, letter box unit 418 converts a 720 x 480 frame to a 720 x 360 frame.
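The 4:3 vertical decimation can be sketched as follows, assuming each group of four input lines yields three output lines (480 to 360). The uniform taps are placeholders for the downloadable coefficients the text mentions, and the function name is mine.

```python
import numpy as np

def letterbox_decimate(frame, taps=(0.25, 0.25, 0.25, 0.25)):
    """Vertically decimate a frame by 4:3 using a 4-tap filter."""
    taps = np.asarray(taps).reshape(4, 1)
    # Edge-pad so the 4-tap window never runs off the bottom of the frame.
    padded = np.pad(frame, ((0, 3), (0, 0)), mode="edge")
    out_rows = []
    for base in range(0, frame.shape[0], 4):   # four input lines per group...
        for phase in range(3):                 # ...produce three output lines
            window = padded[base + phase : base + phase + 4]
            out_rows.append((taps * window).sum(axis=0))
    return np.stack(out_rows)
```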
  • the active picture area is centered with respect to a display area.
  • Host processor 434 and RISC CPU 408 utilize DRAM 406 to exchange messages, commands and status information.
  • processor 434 and CPU 408 have the capability to interrupt each other.
  • CPU 408 provides a host command parser to execute such commands from host processor 434.
  • A typical sequence of events during execution of a command by host processor 434 is:
  • 1. Host processor 434 writes a command to DRAM 406 and interrupts CPU 408.
  • 2. CPU 408 reads the command and parameters from DRAM 406.
  • 3. CPU 408 acknowledges the command by writing a status variable to DRAM 406.
  • 4. The command parser of CPU 408 parses the command and executes it.
  • 5. CPU 408 interrupts host processor 434 upon completion of the command to report status.
  • CPU 408 polls a DRAM command buffer (not shown) for every field sync.
  • This buffer is a ring buffer where a write pointer is maintained by host processor 434 while a read pointer is maintained by CPU 408.
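The ring-buffer exchange can be sketched as below. The buffer size, method names, and in-memory representation are invented for illustration; the point is the ownership split, with the host advancing the write pointer and the CPU advancing the read pointer.

```python
class CommandRing:
    """DRAM command ring: the host owns write_ptr, the RISC CPU owns read_ptr."""

    def __init__(self, size=16):
        self.slots = [None] * size
        self.write_ptr = 0   # advanced only by the host processor
        self.read_ptr = 0    # advanced only by the RISC CPU

    def host_write(self, command):
        next_ptr = (self.write_ptr + 1) % len(self.slots)
        if next_ptr == self.read_ptr:      # one slot kept empty: full test
            raise BufferError("command ring full")
        self.slots[self.write_ptr] = command
        self.write_ptr = next_ptr

    def cpu_poll(self):
        """Called on every field sync; returns the next command or None."""
        if self.read_ptr == self.write_ptr:
            return None                    # ring empty
        command = self.slots[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % len(self.slots)
        return command
```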
  • Video decoder 412 contains a variable length decoder 436, a motion compensation unit 438 and an inverse discrete cosine transformer 440.
  • Video decoder 412 decodes a coded video data stream received through GBUS 402 and provides a decoded stream to NTSC/PAL encoder 424 through RBUS 404.
  • NTSC/PAL encoder converts the decoded stream into an analog signal suitable for display on a television monitor having NTSC and/or PAL signal inputs.
  • Demultiplexer 410 operates on data entering decoder 400.
  • data is in the form of packets, and includes audio, video and other streams of multiplexed packets.
  • Demultiplexer 410 selects desired audio packets, video packets and other desired information packets, but rejects the other packets within the video bitstream. For example, audio packets representing audio in several different languages may be present in the video bitstream.
  • Demultiplexer 410 selects only those audio packets corresponding to that language which is selected for presentation with the corresponding video packets.
  • Host interface 416 provides a glueless interface for host processor 434.
  • RBUS controller 422 sends out messages on RBUS 404 and acts as an arbitrator for RBUS 404.
  • the motion compensation unit 500 includes an address generation and control unit 502.
  • the address generation and control unit 502 corresponds to the memory controller 430 of Figure 4.
  • the address generation and control unit 502 accepts motion vectors from variable length decoder 436 and calculates a starting address of a reference macroblock.
  • the address generation and control unit 502 issues a data transfer request to the memory controller unit 430.
  • data transfer occurs in 64 bit (8 byte) segments at addresses aligned at 8-byte boundaries.
  • When this data returns from the DRAM 406, it is latched within the motion compensation unit 500.
  • Each 8 bit element of these latched data is then run through horizontal and vertical half-pel filters 504, and the resulting data is stored in the prediction RAM (random access memory) 506.
  • The motion compensation unit sits idle until prediction data is required by the reconstruction unit 508 for reconstruction of decoded picture data.
  • For bidirectional prediction, the predicted data is obtained by averaging two such predictions, that is, the current output of the half-pel filters and a value from prediction RAM 506 that was stored after a forward prediction.
  • the reconstruction unit 508 supports this averaging of the half-pel filters 504.
  • An estimation RAM 510 holds coefficient data transformed in the inverse discrete cosine transformer 440. Reconstruction of each picture starts once the estimation RAM 510 is full.
  • the motion compensation unit 500 issues a data transfer request and begins reconstruction.
  • The reconstruction basically consists of adding the signed numbers from the output of the inverse discrete cosine transformer (stored in the estimation RAM 510) to the outputs of the half-pel filters 504 (stored in the prediction RAM 506) for non-intra blocks. For intra blocks, however, the addition is not required. In either case, the adder output is clipped before it is latched at the output of the reconstruction unit 508 when reconstruction of the picture occurs.
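The reconstruction step can be sketched in a few lines, assuming 8-bit pixel data; the function below adds the IDCT residuals to the prediction for non-intra blocks, takes the IDCT output alone for intra blocks, and clips the result to the valid range. Names are illustrative.

```python
import numpy as np

def reconstruct(idct_output, prediction=None):
    """Add the signed IDCT output to the prediction (non-intra blocks),
    or take it alone (intra blocks), then clip to the 8-bit pixel range."""
    result = idct_output if prediction is None else idct_output + prediction
    return np.clip(result, 0, 255).astype(np.uint8)
```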
  • A state machine 600 which represents the functionality of the address generation and control unit 502 in regard to the transfer of reference picture data from DRAM 406 and the construction of a macroblock is now explained. From a start state 602, state machine 600 proceeds to a get address state 604. If no motion compensation is to be used to construct the macroblock, state machine 600 proceeds to a y0_wait state 606. If only backward motion compensation is to be utilized, then state machine 600 proceeds to state 608 to get or fetch a previous macroblock b which will serve as a reference macroblock. If, however, forward motion compensation is to be utilized, then state machine 600 proceeds to state 610 to get or fetch a forward macroblock f which will serve as a reference macroblock.
  • State machine 600 then proceeds to the y0_wait state 606. If the macroblock to be constructed is to be based upon both the forward macroblock f and the previous macroblock b, then state machine 600 proceeds from state 610 to state 608 to also get or fetch a previous macroblock. In such an instance, both the forward and the previous macroblock will serve as reference macroblocks.
  • state machine 600 waits for luminance data to be received in regard to the reference macroblock or macroblocks.
  • At state 612, the luminance portion of the macroblock to be constructed is reconstructed.
  • state machine waits for chrominance data to be received in regard to the reference macroblock or macroblocks.
  • At state 618, reconstruction of the chrominance portion of the macroblock to be constructed occurs.
  • State machine 600 then proceeds to state 620 to await an instruction to construct a new macroblock.
  • Similar to the case of the previously constructed macroblock, state machine 600 then proceeds to a get address 1 state 622. If no motion compensation is to be used to construct the macroblock, state machine 600 proceeds to a y1_wait state 624. If only backward motion compensation is to be utilized, then state machine 600 proceeds to state 626 to get or fetch a previous macroblock b1 which will serve as a reference macroblock. If, however, forward motion compensation is to be utilized, then state machine 600 proceeds to state 628 to get or fetch a forward macroblock f1 which will serve as a reference macroblock. State machine 600 then proceeds to the y1_wait state 624. If the new macroblock to be constructed is to be based upon both the forward macroblock f1 and the previous macroblock b1, state machine 600 proceeds from state 628 to state 626 to also get or fetch a previous macroblock.
  • state machine 600 waits for luminance data to be received in regard to the reference macroblock or macroblocks.
  • the luminance portion of the macroblock to be constructed is reconstructed.
  • state machine waits for chrominance data to be received in regard to the reference macroblock or macroblocks.
  • State machine 600 then proceeds back to start state 602.
  • state machine 600 waits until the motion vector FIFO memory of the variable length decoder 436 is not empty.
  • the address generation and control unit 502 then generates a request for a motion vector. Two consecutive requests, one for X (horizontal) and one for Y (vertical) components of the motion vectors are made.
  • Once the address generation and control unit 502 obtains both components of the motion vector, the address of the reference block is calculated. The address generation and control unit 502 then sends a request for data transfer to the memory controller unit.
  • When a motion vector points to a sub-pixel location instead of to an exact pixel location, it is necessary to generate half-pixel (half-pel) data in order to more accurately represent a P or B picture.
  • the smallest unit of concealment is a slice.
  • a slice consists of a series of sequential macroblocks.
  • motion vectors are estimated using either temporal prediction or spatial prediction.
  • In spatial prediction, pixels from a successfully decoded macroblock are copied for use in decoding the macroblock having a data error.
  • In temporal prediction, motion vectors from a successfully decoded macroblock are utilized to predict a new motion vector field in order to decode the macroblock having a data error.
  • the basic concept is that if there is a motion of an object from a frame K-2 (that is, two frames prior to frame K), one can assume that this motion will most likely continue from frame K-2 up through frame K. Therefore, the assumption is that the motion will be basically linear. Based upon that assumption, the present invention estimates pixels and motion vectors, the estimation method depending upon the data available for such estimation.
  • the motion compensation unit 438 first attempts to estimate motion vectors in the temporal domain at step 702.
  • Figure 8 illustrates such a method.
  • the algorithm starts at step 800.
  • At step 802, the motion compensation unit 438 determines whether a decoded motion vector for a forward reference frame at the macroblock positioned by a vector p is available. This motion vector is designated as MV(k-m, p).
  • If not, step 804 indicates a failed attempt. If a decoded motion vector for a forward reference frame at the macroblock positioned by the vector p is available, the algorithm proceeds to step 806, which determines whether a decoded motion vector is available at the macroblock positioned by the difference between (1) the vector p and (2) the decoded motion vector for the forward reference frame at the macroblock positioned by the vector p, MV(k-m, p).
  • If such a motion vector is not available, the algorithm proceeds to step 804 to indicate a failed attempt. If available, the algorithm proceeds to step 808, at which an estimated motion vector for the current frame, the k-th frame, at the macroblock positioned by the vector p is determined. Such estimated motion vector is taken to be equal to the decoded motion vector of the forward reference frame at the macroblock positioned by the difference between (1) the vector p and (2) the decoded motion vector MV(k-m, p). The algorithm then proceeds to step 810, which indicates a successful motion vector estimation in the temporal domain.
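Under the linear-motion assumption of Figure 8, the temporal estimation can be sketched as below. Here `mv_field_ref` is a hypothetical mapping from macroblock positions to the decoded motion vectors of the forward reference frame (absent where data was lost); the data structure and names are mine, not the patent's.

```python
def estimate_mv_temporal(mv_field_ref, p):
    """Estimate MV(k, p) from the reference frame's decoded vector field:
    follow the motion at p back to p - MV(k-m, p) and reuse that vector."""
    mv = mv_field_ref.get(p)                   # MV(k-m, p), step 802
    if mv is None:
        return None                            # failed attempt (step 804)
    displaced = (p[0] - mv[0], p[1] - mv[1])   # p - MV(k-m, p)
    return mv_field_ref.get(displaced)         # step 808 estimate, or None
```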
  • At step 704, it is determined whether motion vector estimation in the temporal domain was successful. If so, the algorithm proceeds to step 706, where, based upon the estimated motion vector, the motion vector to be used for estimating the subject macroblock is updated. If the motion vector estimation in the temporal domain was not successful, the algorithm proceeds to step 708, where motion vector estimation is performed in the spatial domain.
  • The algorithm for such estimation starts at step 900 of Figure 9 and proceeds to step 902, where it is determined whether a decoded motion vector for the macroblock located immediately above the macroblock being estimated is available. Such a motion vector is designated by MV(k, p - (1,0)). If not, a failure is indicated at step 904.
  • Otherwise, at step 906, the motion vector for the current frame, the k-th frame, at the macroblock positioned by the vector p, MV(k, p), is estimated to be equal to the decoded motion vector of the macroblock located immediately above, MV(k, p - (1,0)), where (1,0) is a vector indicating a row index of 1 and a column index of 0.
  • Step 908 then indicates a successful motion vector estimation in the spatial domain.
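The spatial fallback of Figure 9 reduces to copying the vector of the macroblock one row above, which can be sketched as follows; `mv_field_cur` is a hypothetical mapping from positions to the current frame's decoded motion vectors.

```python
def estimate_mv_spatial(mv_field_cur, p):
    """Estimate MV(k, p) as the decoded vector of the macroblock one row
    above, MV(k, p - (1, 0)); returns None if that vector is unavailable."""
    above = (p[0] - 1, p[1])         # one row up, same column
    return mv_field_cur.get(above)   # None signals failure (step 904)
```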
  • Following step 908, the motion vector for the current macroblock is updated at step 706. Then, at step 712, the current macroblock is estimated using the just-estimated motion vector, whether that motion vector was estimated in the temporal domain at step 702 or in the spatial domain at step 708.
  • At step 1000, macroblock estimation with the estimated motion vector is started.
  • At step 1002, the estimated macroblock for the current frame, the k-th frame, at the macroblock positioned by the vector p, MB(k, p), is estimated to be equal to the decoded macroblock of the forward reference frame at the macroblock positioned by the difference of (1) the vector p and (2) the estimated motion vector for the current frame, the k-th frame, at the macroblock positioned by the vector p.
  • This decoded macroblock is designated as MB(k-m, p - MV(k, p)), where m is the frame index difference between the current frame and the forward reference frame.
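Step 1002 can be sketched as a displaced copy from the forward reference frame; `frame_ref` is a hypothetical mapping from macroblock positions to decoded macroblocks, and the names are mine.

```python
def conceal_with_mv(frame_ref, p, estimated_mv):
    """Conceal MB(k, p) as MB(k-m, p - MV(k, p)): copy the reference
    macroblock displaced by the estimated motion vector."""
    source = (p[0] - estimated_mv[0], p[1] - estimated_mv[1])
    return frame_ref.get(source)    # None if that macroblock is missing too
```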
  • At step 716, the current macroblock is estimated without the use of an estimated motion vector.
  • Step 716 is detailed in Figure 11.
  • Macroblock estimation without use of an estimated motion vector starts at step 1100.
  • At step 1102, it is determined whether the macroblock for the frame preceding the current frame (the k-th frame being the current frame) positioned by the vector p, MB(k-1, p), is available. If such macroblock is available, then, at step 1104, the current macroblock positioned by the vector p is estimated to be equal to the macroblock for the frame preceding the current frame positioned by the vector p.
  • the algorithm is then completed as indicated at step 714.
  • step 1 106 it is determined whether the macroblock for the current frame positioned by the vector p but indexed by minus 1 row and in the same column, MB(k, f ( 1 ,0) is available, where ( 1 ,0) is a vector indicating a row index as 1 and a column index as 0.
  • the current macroblock (for the current frame, the k-th frame, positioned by the vector p) is estimated to be equal to the macroblock for the current frame positioned by the vector p but indexed by minus 1 row and in the same column, MP(k,j ( 1 ,0)).
  • the algorithm is then completed as indicated at step 714.
  • Otherwise, at step 1110, it is determined whether the decoded macroblock for the macroblock located immediately above the macroblock to be estimated, MB(k, p + (1,0)), is available, where (1,0) is a vector indicating a row index of 1 and a column index of 0.
  • If so, the estimated macroblock for the current frame, the k-th frame, at the macroblock positioned by the vector p is estimated to be equal to such decoded macroblock for the macroblock located immediately above the macroblock to be estimated, MB(k, p + (1,0)).
  • The algorithm is then completed, as indicated at step 714.
  • Otherwise, macroblock estimation without an estimated motion vector fails, as indicated at step 1114. In this case, the macroblock can be left blank.
  • While the present invention has been described in relation to decoding of a coded video bit stream, the present invention is also applicable to the coding of a video bit stream, where an error is detected during or after coding and the error is concealed prior to recording or transport.
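The decision cascade of steps 1102 through 1114 can be sketched as a simple sequence of fallbacks. The sketch below assumes available macroblocks are stored in dictionaries keyed by (row, column) grid position; the function and variable names are illustrative, and both row neighbours named in steps 1106 and 1110 are tried in turn:

```python
def conceal_without_motion_vector(prev_frame_mbs, cur_frame_mbs, pos):
    """Concealment when no motion vector could be estimated.

    Fallback order (steps 1102-1114):
      1. co-located macroblock of the preceding frame (step 1104);
      2. an already-decoded neighbouring macroblock of the current
         frame in the same column (steps 1106/1110);
      3. failure -- the macroblock is left blank (step 1114).
    Returns the substitute macroblock, or None on failure.
    """
    row, col = pos
    if pos in prev_frame_mbs:                       # step 1104: temporal copy
        return prev_frame_mbs[pos]
    for cand in ((row - 1, col), (row + 1, col)):   # steps 1106/1110: row neighbours
        if cand in cur_frame_mbs:
            return cur_frame_mbs[cand]
    return None                                     # step 1114: concealment fails
```

The ordering of the two neighbour checks follows the figure description above; a real decoder would substitute actual pixel blocks for the dictionary values used here.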

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Picture Signal Circuits (AREA)
PCT/US1998/004497 1997-03-13 1998-03-06 Error concealment for video image Ceased WO1998041028A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
DE19882177T DE19882177T1 (de) 1997-03-13 1998-03-06 Verfahren und Gerät zur Fehlerverdeckung
JP53964698A JP2001514830A (ja) 1997-03-13 1998-03-06 ビデオ画像の誤り隠蔽
KR1019997007831A KR100547095B1 (ko) 1997-03-13 1998-03-06 오류 은닉 방법 및 장치
AU63478/98A AU6347898A (en) 1997-03-13 1998-03-06 Error concealment for video image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/816,867 US6078616A (en) 1997-03-13 1997-03-13 Methods and apparatus for error concealment utilizing temporal domain motion vector estimation
US08/816,867 1997-03-13

Publications (1)

Publication Number Publication Date
WO1998041028A1 true WO1998041028A1 (en) 1998-09-17

Family

ID=25221814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/004497 Ceased WO1998041028A1 (en) 1997-03-13 1998-03-06 Error concealment for video image

Country Status (7)

Country Link
US (4) US6078616A (en)
JP (1) JP2001514830A (en)
KR (1) KR100547095B1 (en)
CN (1) CN1256048A (en)
AU (1) AU6347898A (en)
DE (1) DE19882177T1 (en)
WO (1) WO1998041028A1 (en)


Families Citing this family (51)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3493985B2 (ja) * 1997-11-26 2004-02-03 安藤電気株式会社 動画通信評価装置
JP4016227B2 (ja) * 1998-01-07 2007-12-05 ソニー株式会社 画像処理装置および方法、並びに記録媒体
KR100556450B1 (ko) * 1998-05-28 2006-05-25 엘지전자 주식회사 움직임 벡터 추정에 의한 오류 복원 방법
JP3604290B2 (ja) * 1998-09-25 2004-12-22 沖電気工業株式会社 動画像復号方法及び装置
US6438136B1 (en) 1998-10-09 2002-08-20 Microsoft Corporation Method for scheduling time slots in a communications network channel to support on-going video transmissions
US6618363B1 (en) 1998-10-09 2003-09-09 Microsoft Corporation Method for adapting video packet generation and transmission rates to available resources in a communications network
US6754266B2 (en) 1998-10-09 2004-06-22 Microsoft Corporation Method and apparatus for use in transmitting video information over a communication network
US6445701B1 (en) 1998-10-09 2002-09-03 Microsoft Corporation Channel access scheme for use in network communications
US6519004B1 (en) 1998-10-09 2003-02-11 Microsoft Corporation Method for transmitting video information over a communication channel
US6507587B1 (en) 1998-10-09 2003-01-14 Microsoft Corporation Method of specifying the amount of bandwidth to reserve for use in network communications
US6385454B1 (en) 1998-10-09 2002-05-07 Microsoft Corporation Apparatus and method for management of resources in cellular networks
US6289297B1 (en) * 1998-10-09 2001-09-11 Microsoft Corporation Method for reconstructing a video frame received from a video source over a communication channel
US6489995B1 (en) * 1998-10-22 2002-12-03 Sony Corporation Method and apparatus for motion vector concealment
US6430159B1 (en) * 1998-12-23 2002-08-06 Cisco Systems Canada Co. Forward error correction at MPEG-2 transport stream layer
US6697061B1 (en) * 1999-01-21 2004-02-24 Hewlett-Packard Development Company, L.P. Image compression featuring selective re-use of prior compression data
US6363113B1 (en) * 1999-06-07 2002-03-26 Lucent Technologies Inc. Methods and apparatus for context-based perceptual quantization
GB9928022D0 (en) * 1999-11-26 2000-01-26 British Telecomm Video coding and decording
KR100335055B1 (ko) 1999-12-08 2002-05-02 구자홍 압축 영상신호의 블럭현상 및 링현상 제거방법
JP2001309372A (ja) * 2000-04-17 2001-11-02 Mitsubishi Electric Corp 符号化装置
KR20020010171A (ko) * 2000-07-27 2002-02-04 오길록 블록 정합 움직임 추정을 위한 적응적 예측 방향성 탐색방법
US6965647B1 (en) * 2000-11-28 2005-11-15 Sony Corporation Robust time domain block decoding
CN1167271C (zh) * 2001-01-10 2004-09-15 华为技术有限公司 压缩编码图像传输中的误码处理方法
US7197194B1 (en) * 2001-05-14 2007-03-27 Lsi Logic Corporation Video horizontal and vertical variable scaling filter
US7039117B2 (en) * 2001-08-16 2006-05-02 Sony Corporation Error concealment of video data using texture data recovery
JP2003209845A (ja) * 2002-01-11 2003-07-25 Mitsubishi Electric Corp 画像符号化集積回路
KR100906473B1 (ko) * 2002-07-18 2009-07-08 삼성전자주식회사 개선된 움직임 벡터 부호화 및 복호화 방법과 그 장치
KR100548316B1 (ko) * 2002-11-08 2006-02-02 엘지전자 주식회사 동영상 에러 보정 방법 및 장치
KR100504824B1 (ko) 2003-04-08 2005-07-29 엘지전자 주식회사 이미지 신호 블록오류 보정 장치 및 방법
MXPA06003925A (es) * 2003-10-09 2006-07-05 Thomson Licensing Proceso de derivacion de modo directo para el ocultamiento de error.
KR20050076155A (ko) * 2004-01-19 2005-07-26 삼성전자주식회사 영상 프레임의 에러 은닉 장치 및 방법
KR100531895B1 (ko) * 2004-02-26 2005-11-29 엘지전자 주식회사 이동통신 시스템에서의 영상 블럭 오류 은닉 장치 및 방법
US8311127B2 (en) * 2004-03-04 2012-11-13 Nvidia Corporation Method and apparatus to check for wrongly decoded macroblocks in streaming multimedia applications
US7613351B2 (en) * 2004-05-21 2009-11-03 Broadcom Corporation Video decoder with deblocker within decoding loop
US20060012719A1 (en) * 2004-07-12 2006-01-19 Nokia Corporation System and method for motion prediction in scalable video coding
CN100409689C (zh) * 2004-08-05 2008-08-06 中兴通讯股份有限公司 用于提高视频质量的错误掩蔽方法
KR100689216B1 (ko) * 2005-05-12 2007-03-09 동국대학교 산학협력단 서브블록을 이용한 인트라 프레임의 시간적인 오류은닉방법
KR100736041B1 (ko) 2005-06-30 2007-07-06 삼성전자주식회사 에러 은닉 방법 및 장치
US9055298B2 (en) * 2005-07-15 2015-06-09 Qualcomm Incorporated Video encoding method enabling highly efficient partial decoding of H.264 and other transform coded information
US7912219B1 (en) * 2005-08-12 2011-03-22 The Directv Group, Inc. Just in time delivery of entitlement control message (ECMs) and other essential data elements for television programming
KR20080061379A (ko) * 2005-09-26 2008-07-02 코닌클리케 필립스 일렉트로닉스 엔.브이. 비디오 에러 은폐를 개선하기 위한 코딩/디코딩 방법 및장치
US7916796B2 (en) * 2005-10-19 2011-03-29 Freescale Semiconductor, Inc. Region clustering based error concealment for video data
US7965774B2 (en) * 2006-01-06 2011-06-21 International Business Machines Corporation Method for visual signal extrapolation or interpolation
FR2898459B1 (fr) * 2006-03-08 2008-09-05 Canon Kk Procede et dispositif de reception d'images ayant subi des pertes en cours de transmission
US7916791B2 (en) * 2006-06-16 2011-03-29 International Business Machines Corporation Method and system for non-linear motion estimation
US8494053B2 (en) * 2007-01-03 2013-07-23 International Business Machines Corporation Method and apparatus of temporal filtering for side information interpolation and extrapolation in Wyner-Ziv video compression systems
US20080285651A1 (en) * 2007-05-17 2008-11-20 The Hong Kong University Of Science And Technology Spatio-temporal boundary matching algorithm for temporal error concealment
US8208556B2 (en) 2007-06-26 2012-06-26 Microsoft Corporation Video coding using spatio-temporal texture synthesis
US7995858B2 (en) * 2007-07-05 2011-08-09 Motorola Solutions, Inc. Method and apparatus to facilitate creating ancillary information regarding errored image content
CN101849838B (zh) * 2009-03-30 2013-10-16 深圳迈瑞生物医疗电子股份有限公司 超声系统中消除暂态的方法与装置
JP5649412B2 (ja) * 2010-11-12 2015-01-07 三菱電機株式会社 エラーコンシールメント装置及び復号装置
CN103813177A (zh) * 2012-11-07 2014-05-21 辉达公司 一种视频解码系统和方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5247363A (en) * 1992-03-02 1993-09-21 Rca Thomson Licensing Corporation Error concealment apparatus for hdtv receivers
EP0727910A2 (en) * 1995-02-16 1996-08-21 THOMSON multimedia Temporal-spatial error concealment apparatus and method for video signal processors
US5552831A (en) * 1992-07-03 1996-09-03 Matsushita Electric Industrial Co., Ltd. Digital video signal decoding apparatus and presumed motion vector calculating method
US5561532A (en) * 1993-03-31 1996-10-01 Canon Kabushiki Kaisha Image reproducing apparatus

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60143341A (ja) * 1983-12-30 1985-07-29 Dainippon Screen Mfg Co Ltd 抜きマスク版の作製方法
DE3480811D1 (de) * 1984-03-23 1990-01-25 Ibm Interaktives anzeigesystem.
US5253339A (en) * 1990-07-26 1993-10-12 Sun Microsystems, Inc. Method and apparatus for adaptive Phong shading
SE469866B (sv) * 1991-04-12 1993-09-27 Dv Sweden Ab Metod för estimering av rörelseinnehåll i videosignaler
KR0125581B1 (ko) * 1991-07-24 1998-07-01 구자홍 디지탈 영상신호의 에러수정 시스템
JPH05137131A (ja) * 1991-11-13 1993-06-01 Sony Corp フレーム間動き予測方法
US5596655A (en) * 1992-08-18 1997-01-21 Hewlett-Packard Company Method for finding and classifying scanned information
US5461420A (en) * 1992-09-18 1995-10-24 Sony Corporation Apparatus for coding and decoding a digital video signal derived from a motion picture film source
EP0610916A3 (en) * 1993-02-09 1994-10-12 Cedars Sinai Medical Center Method and device for generating preferred segmented numerical images.
US5737022A (en) * 1993-02-26 1998-04-07 Kabushiki Kaisha Toshiba Motion picture error concealment using simplified motion compensation
TW224553B (en) * 1993-03-01 1994-06-01 Sony Co Ltd Method and apparatus for inverse discrete consine transform and coding/decoding of moving picture
US5515388A (en) * 1993-03-19 1996-05-07 Sony Corporation Apparatus and method for preventing repetitive random errors in transform coefficients representing a motion picture signal
JPH0763691A (ja) * 1993-08-24 1995-03-10 Toshiba Corp パターン欠陥検査方法及びその装置
US5440652A (en) * 1993-09-10 1995-08-08 Athena Design Systems, Inc. Method and apparatus for preparing color separations based on n-way color relationships
JP3405776B2 (ja) * 1993-09-22 2003-05-12 コニカ株式会社 切り抜き画像の輪郭線探索装置
US5604822A (en) * 1993-11-12 1997-02-18 Martin Marietta Corporation Methods and apparatus for centroid based object segmentation in object recognition-type image processing system
US5630037A (en) * 1994-05-18 1997-05-13 Schindler Imaging, Inc. Method and apparatus for extracting and treating digital images for seamless compositing
JP3794502B2 (ja) * 1994-11-29 2006-07-05 ソニー株式会社 画像領域抽出方法及び画像領域抽出装置
JPH09128529A (ja) * 1995-10-30 1997-05-16 Sony Corp ディジタル画像の雑音の投影に基づく除去方法
TW357327B (en) * 1996-08-02 1999-05-01 Sony Corp Methods, apparatus and program storage device for removing scratch or wire noise, and recording media therefor


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEE S H ET AL: "TRANSMISSION ERROR DETECTION, RESYNCHRONIZATION, AND ERROR CONCEALMENT FOR MPEG VIDEO DECODER", PROCEEDINGS OF THE SPIE, vol. 2094, no. PART 01, 8 November 1993 (1993-11-08), pages 195 - 204, XP002043758 *
SUN H ET AL: "ADAPTIVE ERROR CONCEALMENT ALGORITHM FOR MPEG COMPRESSED VIDEO", VISUAL COMMUNICATIONS AND IMAGE PROCESSING 18-20 NOVEMBER 1992, BOSTON, US, vol. 1818, no. PART 02, 18 November 1992 (1992-11-18), pages 814 - 824, XP002043757 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100587280B1 (ko) * 1999-01-12 2006-06-08 엘지전자 주식회사 오류 은폐방법
WO2004086631A3 (en) * 2003-03-24 2005-06-23 Qualcomm Inc Method, apparatus and system for encoding and decoding side information for multimedia transmission
US7643558B2 (en) 2003-03-24 2010-01-05 Qualcomm Incorporated Method, apparatus, and system for encoding and decoding side information for multimedia transmission
US8331445B2 (en) 2004-06-01 2012-12-11 Qualcomm Incorporated Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques
US8379716B2 (en) 2004-06-01 2013-02-19 Qualcomm Incorporated Method, apparatus, and system for enhancing robustness of predictive video codecs using a side-channel based on distributed source coding techniques

Also Published As

Publication number Publication date
AU6347898A (en) 1998-09-29
JP2001514830A (ja) 2001-09-11
US6175597B1 (en) 2001-01-16
KR20000075760A (ko) 2000-12-26
DE19882177T1 (de) 2000-02-10
US6078616A (en) 2000-06-20
US6449311B1 (en) 2002-09-10
CN1256048A (zh) 2000-06-07
US6285715B1 (en) 2001-09-04
KR100547095B1 (ko) 2006-02-01

Similar Documents

Publication Publication Date Title
US6078616A (en) Methods and apparatus for error concealment utilizing temporal domain motion vector estimation
US6404817B1 (en) MPEG video decoder having robust error detection and concealment
EP0895694B1 (en) System and method for creating trick play video streams from a compressed normal play video bitstream
US6862402B2 (en) Digital recording and playback apparatus having MPEG CODEC and method therefor
US5136371A (en) Digital image coding using random scanning
US6256348B1 (en) Reduced memory MPEG video decoder circuits and methods
EP1107613A2 (en) Picture recording apparatus and methods
JP2000278692A (ja) 圧縮データ処理方法及び処理装置並びに記録再生システム
JPH06181569A (ja) 画像符号化及び復号化方法又は装置、及び画像記録媒体
JPH0698313A (ja) 動画像復号化装置
JP2005064569A (ja) トランスコーダ及びこれを用いた撮像装置及び信号処理装置
US5903311A (en) Run level pair buffering for fast variable length decoder circuit
JP3147792B2 (ja) 高速再生のためのビデオデータの復号化方法及びその装置
US6192188B1 (en) Programmable audio/video encoding system capable of downloading compression software from DVD disk
JPH0818979A (ja) 画像処理装置
US6321026B1 (en) Recordable DVD disk with video compression software included in a read-only sector
US6128340A (en) Decoder system with 2.53 frame display buffer
JPH10506505A (ja) 連続ディジタルビデオの同期化方法
JP3061125B2 (ja) Mpeg画像再生装置およびmpeg画像再生方法
US6438318B2 (en) Method for regenerating the original data of a digitally coded video film, and apparatus for carrying out the method
EP0470772B1 (en) Digital video signal reproducing apparatus
JPH1084545A (ja) ディジタルビデオ信号の符号化方法及び装置
JP3203172B2 (ja) Mpegビデオデコーダ
JP3501521B2 (ja) ディジタル映像信号再生装置および再生方法
JP2005518728A (ja) 画像処理方法及び装置

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 98805118.4

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH GM GW HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1019997007831

Country of ref document: KR

ENP Entry into the national phase

Ref document number: 1998 539646

Country of ref document: JP

Kind code of ref document: A

RET De translation (de og part 6b)

Ref document number: 19882177

Country of ref document: DE

Date of ref document: 20000210

WWE Wipo information: entry into national phase

Ref document number: 19882177

Country of ref document: DE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA

WWP Wipo information: published in national office

Ref document number: 1019997007831

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1019997007831

Country of ref document: KR

REG Reference to national code

Ref country code: DE

Ref legal event code: 8607