US20060182356A1 - Video compression and decompression system with postfilter to filter coding artifacts


Info

Publication number
US20060182356A1
US20060182356A1
Authority
US
United States
Prior art keywords
video sequence
level
frame
video
filter
Prior art date
Legal status
Abandoned
Application number
US11/401,516
Inventor
Karl Lillevold
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to US11/401,516
Publication of US20060182356A1
Legal status: Abandoned

Classifications

    • H: Electricity
    • H04: Electric communication technique
    • H04N: Pictorial communication, e.g. television
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85: using pre-processing or post-processing specially adapted for video compression
    • H04N19/86: pre-processing or post-processing involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/10: using adaptive coding
    • H04N19/117: adaptive coding characterised by the element affected: filters, e.g. for pre-processing or post-processing
    • H04N19/139: adaptive coding controlled by incoming video signal characteristics: analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/172: adaptive coding characterised by the coding unit, the unit being a picture, frame or field
    • H04N19/179: adaptive coding characterised by the coding unit, the unit being a scene or a shot
    • H04N19/44: decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/46: embedding additional information in the video signal during the compression process
    • H04N19/61: transform coding in combination with predictive coding

Definitions

  • the invention pertains to the field of video compression and decompression systems. More particularly, the invention pertains to a system and method for reducing compression artifacts in a video compression and decompression system.
  • Systems for applications of video and visual communications transmit, process and store large quantities of video data.
  • a rendering video system displays the video data as a sequence of individual digital images, also referred to as “frames,” thereby simulating movement.
  • the video systems process and modify the video data prior to transmission or storage. For instance, the video data is compressed and encoded to reduce the bit rate and the required bandwidth for storage and transmission of the images.
  • a video encoder is used to compress and encode the video data and a video decoder is used to decompress and to decode the video data.
  • the video encoder outputs video data that has a reduced bit rate and a reduced redundancy. That is, the technique of video compression removes spatial redundancy within a video frame or temporal redundancy between consecutive video frames.
  • an image coding process typically includes performing a block based frequency transform, e.g., discrete cosine transform (DCT), on an image to be transmitted.
  • the size of the quantization steps needs to be relatively large. In that case, the resulting coarse quantization of the DCT coefficients introduces coding artifacts into the transmitted image and severely degrades the visual quality of the decoded sequence that may be displayed.
  • Examples of such coding artifacts include mosquito artifacts and blocking artifacts.
  • Mosquito artifacts are defined as temporally nonstationary impulses that appear around objects which are moving within a decompressed video sequence.
  • the mosquito artifacts result from the coarse quantization of a prediction error signal.
  • the majority of the energy contained in the prediction error signal is the result of a motion estimator's inability to distinguish between differently moving objects within the video sequence.
  • the subject is generally against a stationary background. Since the motion estimator tries to match blocks of pixels between temporally adjacent frames, the boundaries between moving objects and stationary background that fall within these blocks cannot be detected. This leads to a situation where either a part of the background is assumed to be moving, or a part of the moving object is assumed to be stationary. If these prediction errors are coarsely quantized, impulsive artifacts result that change over time and tend to swarm around the moving object, similar to a mosquito.
  • Blocking artifacts are defined as the introduction of artificial block boundaries into the decoded video sequence. These artifacts are due to the combination of quantization and dividing the prediction error signal into blocks. That is, since there exists an inverse relationship between spatial extent and frequency extent analogous to the inverse relationship that exists between time and frequency extent in Fourier analysis, the quantization errors that occur in the DCT domain are smeared across the corresponding spatial block. Furthermore, since each block is quantized separately, the errors are most visible at the block boundaries.
  • An aspect of the present invention involves a decoder apparatus for a video compression and decompression system having an input to receive an encoded video sequence and an output for a decoded video sequence.
  • a video decoder is coupled to the input and decodes the received encoded video sequence.
  • a filter module is coupled to the video decoder and the output and filters a decoded video sequence from the video decoder.
  • the filter module has a variable filter strength that is a function of detected motion activity within the video sequence.
  • the filter module has an input to receive a decoded video sequence and an output for the decoded video sequence.
  • An activity counter determines motion activity within the decoded video sequence.
  • a threshold detector is coupled to the activity counter and adjusts a filter strength as a function of the determined motion activity within the decoded video sequence. The threshold detector selectively adjusts the filter strength to one of a predetermined number of levels.
  • a further aspect of the invention involves a video compression and decompression system having an input to receive an encoded video sequence and an output for a decoded video sequence.
  • a video decoder is coupled to the input and decodes the received encoded video sequence.
  • a filter module is coupled to the video decoder and the output and filters a decoded video sequence from the video decoder.
  • the filter module has a variable filter strength that is a function of detected motion activity within the video sequence.
  • Another aspect of the invention involves a method of filtering a decoded video sequence in a video compression and decompression system.
  • the method receives a decoded video sequence and determines a motion activity of each frame of the decoded video sequence.
  • the method categorizes each frame as a frame of high activity or as a frame of low activity and adjusts a filter strength of a filter to filter the decoded video sequence as a function of the motion activity.
  • FIG. 1 is a high-level block diagram of a video compression and decompression system having an encoder apparatus and a decoder apparatus that includes a postfilter module.
  • FIG. 2 shows an exemplary embodiment of the postfilter module.
  • FIGS. 3A and 3B are diagrams illustrating motion activity within a video sequence as a function of time.
  • FIG. 3C is a flow diagram illustrating the filter strength of the postfilter module as a function of time.
  • FIG. 4 is a diagram illustrating a procedure for varying the filter strength illustrated in FIG. 3C .
  • FIG. 1 is a high-level block diagram of a video compression and decompression system 1 (hereinafter “video compression system 1 ”) having an encoder apparatus 2 and a decoder apparatus 4 that is coupled to the encoder apparatus 2 through a medium 6 .
  • the encoder apparatus 2 includes a video encoder 12 and a buffer 14 .
  • the decoder apparatus 4 includes a buffer 16 , a video decoder 18 and a postfilter module 20 .
  • the postfilter module 20 provides for a variable filter strength in accordance with the present invention. The filter strength varies as a function of a determined motion activity within a video sequence in order to remove coding artifacts (e.g., mosquito artifacts and blocking artifacts) as explained below in greater detail.
  • the encoder apparatus 2 encodes a video input sequence 8 (VIDEO IN) to generate an encoded and thus compressed representation in one of a number of possible formats.
  • the format may be in an interleaved format tailored for “live” streaming of the encoded representation.
  • the format may also be in a single file format in which each of the encoded representations are stored in a contiguous block within one file. This format is tailored for the “static” case in which a file is created for subsequent streaming by a server.
  • the video input sequence 8 to the encoder apparatus 2 may be either a live signal, e.g., provided by a video camera, or a prerecorded sequence in a number of possible formats.
  • the video input sequence 8 includes frames of a digital video, an audio segment consisting of digital audio, combinations of video, graphics, text, and/or audio (multimedia applications), or analog forms of the aforementioned. If necessary, conversions can be applied to various types of input signals such as analog video, or previously compressed and encoded video to produce an appropriate input to the encoder apparatus 2 .
  • the encoder apparatus 2 may accept video in RGB or YUV formats.
  • the encoder apparatus 2 may be adapted to accept any format of input as long as an appropriate conversion mechanism is supplied. Conversion mechanisms for converting a signal in one format to a signal in another format are well known in the art.
  • the medium 6 may be a storage device or a transmission medium.
  • the video compression system 1 may be implemented on a computer.
  • the encoder apparatus 2 sends an encoded video stream (representation) to the medium 6 that is implemented as a storage device.
  • the storage device may be a video server, a hard disk drive, a CD rewriteable drive, a read/write DVD drive, or any other device capable of storing and allowing the retrieval of encoded video data.
  • the storage device is connected to the decoder apparatus 4 , which can selectively read from the storage device and decode the encoded video sequence. As the decoder apparatus 4 decodes a selected one of the encoded video sequences, it generates a reproduction of the video input sequence 8 , for example, for display on a computer monitor or screen.
  • the medium 6 provides a connection to another computer, which may be a remote computer, that receives the encoded video sequence.
  • the medium 6 may be a network connection such as a LAN, a WAN, the Internet, or the like.
  • the decoder apparatus 4 within the remote computer decodes the encoded representations contained therein and may generate a reproduction of the video input sequence 8 on a screen or a monitor of the remote computer.
  • the video encoder 12 performs, for example, a discrete cosine transform (DCT) to encode and compress the video sequence 8 .
  • the video encoder 12 converts the video input sequence 8 from the time domain into the frequency domain.
  • the output of the video encoder 12 is a set of signal amplitudes, for example, called “DCT coefficients” or transform coefficients.
  • a quantizer receives the DCT coefficients and assigns each of a range (or step size) of DCT coefficient values a single value, such as a small integer, during encoding. Quantization allows data to be represented more compactly, but results in the loss of some data. Quantization on a finer scale results in a less compact representation (higher bit-rate), but also involves the loss of less data. Quantization on a more coarse scale results in a more compact representation (lower bit-rate), but also involves more loss of data.
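The step-size trade-off described above can be sketched in a few lines of Python; the function names, step sizes, and sample coefficient values below are illustrative, not taken from the patent:

```python
import numpy as np

def quantize(coeffs, step):
    # Map each DCT coefficient to a small integer: one value per step-size range.
    return np.round(coeffs / step).astype(int)

def dequantize(levels, step):
    # Reconstruct approximate coefficients from the quantized integers.
    return levels * step

coeffs = np.array([312.0, -47.0, 5.0, -2.0])    # hypothetical DCT coefficients
fine = dequantize(quantize(coeffs, 4), 4)       # finer scale: more bits, less loss
coarse = dequantize(quantize(coeffs, 32), 32)   # coarser scale: fewer bits, more loss
```

With the coarser step the reconstruction error per coefficient grows, which is the coarse-quantization loss that the postfilter later has to mask.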
  • the pre-existing video encoding techniques typically break up a frame (picture) into smaller blocks of pixels called macroblocks.
  • Each macroblock can consist of a matrix of pixels, typically a 16×16 matrix, defining the unit of information at which encoding is performed.
  • the matrix of pixels is therefore referred to as a 16×16 macroblock.
  • These video encoding techniques usually break each 16×16 macroblock further up into smaller matrices of pixels, for example, into 8×8 matrices of pixels or 4×4 matrices of pixels. Such matrices are hereinafter referred to as subblocks.
  • a 16×16 macroblock is divided into 4×4 subblocks.
  • Those skilled in the art will appreciate that the present invention is equally applicable to systems that use 8×8 subblocks, 4×4 subblocks, or only 16×16 macroblocks without breaking them up into subblocks.
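The macroblock-to-subblock partitioning can be illustrated with a small sketch; the helper name is hypothetical:

```python
def split_into_subblocks(macroblock, size=4):
    # Split an N x N macroblock (a list of pixel rows) into size x size
    # subblocks, scanned left-to-right, top-to-bottom.
    n = len(macroblock)
    return [
        [row[x:x + size] for row in macroblock[y:y + size]]
        for y in range(0, n, size)
        for x in range(0, n, size)
    ]

mb = [[r * 16 + c for c in range(16)] for r in range(16)]   # dummy 16x16 macroblock
subblocks = split_into_subblocks(mb)                        # sixteen 4x4 subblocks
```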
  • the pre-existing encoding techniques provide for motion compensation and motion estimation using motion vectors.
  • the motion vectors describe the direction, expressed through an x-component and a y-component, and the amount of motion of the 16×16 macroblocks, or their respective subblocks, and are transmitted to the decoder as part of the bit stream.
  • Motion vectors are used for bidirectionally encoded pictures (B-pictures) and predicted pictures (P pictures) as known in the art.
  • the buffer 14 of the encoder apparatus 2 receives the encoded and compressed video sequence (hereinafter “encoded video sequence”) from the video encoder 12 and adjusts the bit rate of the encoded video sequence before it is sent to the medium 6 . Buffering may be required because individual video images may contain varying amounts of information, resulting in varying coding efficiencies from image to image. As the buffer 14 has a limited size, a feedback loop to the quantizer may be used to avoid overflow or underflow of the buffer 14 .
  • the bit-rate of the representation is the rate at which the representation data must be processed in order to present the representation in real time.
  • a higher bit-rate representation of the video input sequence 8 generally comprises more data than a lower bit-rate representation of the same sequence.
  • The most pertinent application of the bit-rate measure is in determining the rate at which the data of the representation should be streamed in order to present the representation in real time.
  • Real-time playback of a higher bit-rate representation of a video sequence requires that the data of the representation be streamed and/or decoded at a faster rate than that required for a lower bit-rate representation of the same sequence.
  • a higher bit-rate representation requires a higher bandwidth communication link than a lower bit-rate representation.
  • the decoder apparatus 4 performs the inverse function of the encoder apparatus 2 .
  • the buffer 16 also serves to adjust the bit rate of the incoming encoded video sequence.
  • the video decoder 18 decodes and decompresses the incoming encoded video sequence, reconstructing the video sequence, and outputs a decoded and decompressed video sequence 24 (hereinafter "decoded video sequence 24 ").
  • the decoder apparatus 4 includes the postfilter module 20 that removes coding artifacts such as mosquito artifacts and blocking artifacts. The causes for these coding artifacts are explained above.
  • the postfilter module 20 in accordance with the present invention reduces these artifacts without distorting the pictures which are output as a video output sequence 10 . The video is therefore more visually pleasing to viewers.
  • FIG. 2 shows the video decoder 18 coupled to an exemplary embodiment of the postfilter module 20 .
  • the postfilter module 20 includes an activity counter 22 , a threshold detector 26 and a filter 30 .
  • An output 19 of the video decoder 18 is connected to an input 21 of the activity counter 22 and an output 23 of the video decoder 18 is connected to an input 29 of the filter 30 .
  • the activity counter 22 is connected to an input 25 of the threshold detector 26 which is also connected to an input 27 of the filter 30 .
  • the input 21 of the activity counter 22 receives a sequence of motion vectors.
  • the input 29 of the filter 30 receives a bitstream representing the decoded video sequence.
  • the input 27 of the filter 30 receives a filter strength control signal.
  • the filter 30 outputs the video output sequence 10 .
  • the postfilter module 20 is divided into the activity counter 22 , the threshold detector 26 and the filter 30 for illustrative purposes only. Those skilled in the art will appreciate that such a division may not be required to implement the functionality of the postfilter module 20 in accordance with the present invention.
  • the functionality of the postfilter module 20 may be implemented using a software module or a microprocessor that incorporates the functionalities of the components ( 22 , 26 , 30 ) shown in FIG. 2 .
  • the video decoder 18 receives a bit stream representing the encoded video sequence from the buffer 16 ( FIG. 1 ).
  • the video decoder 18 is a conventional MPEG decoder that includes a decoder controller, a VLC decoder (Variable Length Coding, VLC) and a reconstruction module.
  • the operation and function of these components are known to those skilled in the art. These components are therefore described only to the extent believed to be helpful for a complete understanding of the present invention.
  • For a more extensive description of an MPEG decoder, reference is made to generally available MPEG documents and publications. For instance, Barry G. Haskell et al., "Digital Video: An Introduction to MPEG-2," Chapman & Hall, ISBN 0-412-08411-2, Chapter 8, pages 156-182.
  • the decoder controller receives the bit stream and derives control signals for the reconstruction module and the VLC decoder from the bit stream. Further, the decoder controller separates the encoded video data from the bit stream and inputs the encoded data to the VLC decoder. The decoder controller outputs control signals and status signals that include among others “block position,” “frame select,” “intra select” and “current picture” (not shown).
  • the VLC decoder obtains from the encoded video data the quantized DCT coefficients. Further, the VLC decoder obtains the motion vectors of each picture and a code word indicating the mode used to encode the video sequence (e.g., bidirectionally coded pictures (B-pictures) and predicted pictures (P-pictures)). In accordance with the present invention, the motion vectors are available for the activity counter 22 at the output 19 of the video decoder 18 .
  • the reconstruction module includes a dequantizer unit and an IDCT unit for calculating the inverse DCT. Using the encoded video data and the control signals provided by the decoder controller, the reconstruction module rebuilds each picture and, thus, creates the decoded video sequence 24 that is input to the filter 30 .
  • the activity counter 22 receives the motion vectors from the video decoder 18 on a frame-by-frame basis.
  • Each macroblock of a frame (e.g., a 16×16 macroblock) is associated with motion vectors for its subblocks.
  • the activity counter 22 uses these motion vectors to determine whether the presently analyzed frame (picture) is a high activity frame or a low activity frame.
  • the activity counter 22 includes a processor that analyzes the motion vectors of the subblocks. If either the difference of the x-components or the difference of the y-components of the motion vectors of the present subblock and a neighboring subblock is greater than a predetermined integer, the present subblock is categorized as an active subblock. That is, the predetermined integer is selected as a parameter to determine whether there is motion in the present subblock. The processor determines that there is motion when at least one of the differences (x-components, y-components) is greater than the predetermined integer.
  • the predetermined integer is selected to be in a range between "1" and "3." In a preferred embodiment, the predetermined integer is "1." This procedure is repeated, for example, sixteen times and each 4×4 subblock is either in the active category or in the inactive category.
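A minimal sketch of this per-subblock test, assuming motion vectors are (x, y) integer pairs and using the preferred threshold of 1; the function name is illustrative:

```python
def subblock_is_active(mv, neighbor_mvs, threshold=1):
    # A subblock is active when the x-difference or the y-difference between
    # its motion vector and any neighboring subblock's motion vector exceeds
    # the predetermined integer (1 in the preferred embodiment, 1 to 3 in general).
    return any(
        abs(mv[0] - n[0]) > threshold or abs(mv[1] - n[1]) > threshold
        for n in neighbor_mvs
    )
```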
  • the processor determines the number of active subblocks for each macroblock.
  • If the number of active subblocks exceeds a predetermined threshold number, the processor characterizes this macroblock as an active macroblock.
  • the predetermined threshold number is selected to define when a macroblock is active. That is, when a sufficient number of subblocks is active, the macroblock is defined as active. For instance, if the macroblock is divided into 4×4 subblocks, the predetermined threshold number is four so that the macroblock is active when more than four subblocks (i.e., 25%) out of 16 subblocks are active. In other embodiments, the macroblock may be defined as active when more subblocks are active (e.g., within a range between 30% and 50%), or when fewer subblocks are active (e.g., within a range between 10% and 25%).
  • the processor counts the number of active macroblocks for the whole frame and compares the number of active macroblocks to a defined threshold value. If the number of active macroblocks is higher than the defined threshold value, the frame is a high activity frame (hereinafter “H” frame.) If the number of active macroblocks is equal to or lower than the defined threshold value, the processor characterizes the frame as a low activity frame (hereinafter “L” frame.)
  • For a frame of 1200 macroblocks, for example, the defined threshold value may then be selected to be 300. That is, the frame is an "H" frame if more than 25% of the macroblocks are active. It is contemplated that the defined threshold value may be selected so that a frame is active when between 10% and 40% of the macroblocks are active.
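The two counting stages above can be sketched as follows; the function names are assumptions, and the thresholds mirror the 25% figures in the described embodiment:

```python
def macroblock_is_active(subblock_flags, threshold=4):
    # Active when more than `threshold` of the (typically 16) subblocks are
    # active, i.e. more than 25% for 4x4 subblocks of a 16x16 macroblock.
    return sum(subblock_flags) > threshold

def frame_is_high_activity(macroblock_flags, ratio=0.25):
    # An "H" frame has more active macroblocks than `ratio` of the frame
    # total, e.g. a threshold of 300 for a frame of 1200 macroblocks.
    return sum(macroblock_flags) > ratio * len(macroblock_flags)
```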
  • For each "H" frame the processor increases a counter, and for each "L" frame the processor decreases this counter.
  • A first counter value, therefore, increases and decreases as a function of the "H" frames and the "L" frames.
  • the processor implements two counters. The first counter operates as described. The second counter, however, decreases with each “H” frame and increases with each “L” frame ( FIG. 3A ). A second counter value, therefore, decreases and increases as a function of the “H” frames and the “L” frames.
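The mirrored counter pair can be sketched as follows; the class and method names are assumptions for illustration:

```python
class ActivityCounter:
    # Tracks motion activity with two mirrored counters: the first rises on
    # each "H" frame and falls on each "L" frame; the second does the opposite.
    def __init__(self, offset=0):
        self.first = offset
        self.second = offset

    def update(self, is_high_activity):
        step = 1 if is_high_activity else -1
        self.first += step
        self.second -= step

    def difference(self):
        # The spread between the counters, examined later by the threshold detector.
        return self.first - self.second
```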
  • the activity counter 22 may be implemented using a processor and separate counters coupled to the processor.
  • the activity counter 22 may be implemented as a software module or a combination of a software module and firmware.
  • the threshold detector 26 is coupled to the activity counter 22 and includes in one embodiment a comparator unit (e.g., including operational amplifiers).
  • the comparator unit receives, for example, the two counter values from the activity counter 22 and determines the difference between the two counter values.
  • the comparator unit compares this difference to predetermined threshold values as described below with respect to FIGS. 3A, 3C .
  • the threshold values may be determined through resistor circuits coupled to the comparator unit. In another embodiment, the threshold values may be stored in a programmable memory from which the comparator unit may read the respective threshold value.
  • the filter 30 is a digital filter that filters high-frequency components, such as pulses, from the decoded video sequence 24 .
  • the filter 30 has a variable filter strength that depends upon the motion activity within a picture.
  • the filter strength can be adjusted by varying the filter coefficients of the filter 30 .
  • the filter 30 may be adjusted to have one of a number of predetermined levels representing different filter strengths.
  • the filter 30 has three levels, Strong (“S”), Medium (“M”) or Weak (“W”) as shown in FIG. 3C . It is contemplated that more than three levels may be defined.
  • the filter 30 may include or may be associated with a memory that stores the respective filter coefficients for the levels Strong, Medium and Weak.
  • The generated control signal (e.g., a code word corresponding to one of the levels) addresses the memory, and the filter coefficients for the selected filter strength are loaded to the filter 30 .
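A sketch of the level-to-coefficients lookup; the kernel values below are invented low-pass examples, not the patent's coefficients:

```python
# Illustrative 3-tap low-pass kernels; a stronger level smooths more heavily.
FILTER_COEFFS = {
    "S": [0.25, 0.50, 0.25],   # Strong
    "M": [0.15, 0.70, 0.15],   # Medium (default)
    "W": [0.05, 0.90, 0.05],   # Weak
}

def load_filter(level):
    # The control code word ("S", "M" or "W") addresses the coefficient
    # memory, and the matching coefficients are loaded into the filter.
    return FILTER_COEFFS[level]
```

Each kernel sums to 1 so the filter preserves the overall brightness while attenuating high-frequency pulses.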
  • FIG. 3A is a diagram illustrating the motion activity within the video sequence 24 as determined by the activity counter 22 .
  • FIG. 3A shows two graphs 30 , 32 representing these events as a continuous function of time (t).
  • the graph 30 is shown as a solid line and the graph 32 is shown as a broken line, wherein the graph 32 is a mirror image of the graph 30 relative to a horizontal axis through an offset value O.
  • the offset value O may be zero and the horizontal axis may be the X-axis.
  • the graph 30 increases with a predetermined rate and the graph 32 decreases with the same rate.
  • FIG. 3A further shows difference values Δ1 , Δ2 , Δ3 , Δ4 as described below.
  • FIG. 3B is a diagram illustrating a sequence of frames having high and low activities as a function of time (t).
  • a frame with a high activity is represented as “H” and a frame with a low activity is represented as “L.”
  • FIG. 3B is aligned with FIG. 3A to illustrate the increasing and decreasing graphs 30 , 32 as a function of the “H” and “L” frames.
  • the illustrated sequence of frames (from left to right) has five “H” frames, ten “L” frames, eight “H” frames and eight “L” frames.
  • FIG. 3C is a diagram illustrating the filter strength of the postfilter module 20 as a function of time.
  • the filter strength may be set to have one of three levels Strong (“S”), Medium (“M”) or Weak (“W”).
  • FIG. 4 is a flow diagram illustrating a procedure 40 for varying the filter strength illustrated in FIG. 3C .
  • the activity counter is reset to its offset value O and the filter strength is set to be at the level “M” which is in one embodiment a default level as indicated in a step 42 .
  • the procedure determines the motion activity within the sequence of frames. That is, the procedure determines whether the presently analyzed frame is an "H" frame or an "L" frame, for example, as illustrated in FIG. 3B . For each subblock within a 16×16 macroblock, the procedure looks at a present subblock and its neighboring subblocks and uses the differences of the motion vectors (x, y components) with respect to the present subblock and the neighboring subblocks to determine whether the present subblock is an active subblock or an inactive subblock. This procedure is repeated sixteen times so that each subblock is either in the active category or in the inactive category.
  • the procedure determines the number of active subblocks for each macroblock.
  • a comparison of the number of active subblocks with the predetermined threshold value is used to determine whether the macroblock is an active or inactive macroblock.
  • the predetermined threshold value is four. That is, if more than 25% of the 16 subblocks are in the active category the macroblock is an active macroblock. It is contemplated that the threshold value may be set at a higher number.
  • the procedure determines the number of active macroblocks for the whole frame and compares the number of active macroblocks to a defined threshold value. If the number of active macroblocks is higher than the defined threshold value, the frame is an "H" frame. In one embodiment, the defined threshold value is 25%, i.e., if the number of active macroblocks in a frame is greater than 25% of the number of macroblocks in the whole frame, the frame is an "H" frame. If it is an "H" frame, the procedure proceeds along the YES branch to a step 50, and if it is not an "H" frame, the procedure proceeds along the NO branch to a step 48.
  • the procedure increases a first counter and decreases a second counter.
  • These periods include the “H” frames as shown in FIG. 3B .
  • the procedure decreases the first counter and increases the second counter.
  • These periods include the “L” frames as shown in FIG. 3B .
  • the procedure determines if the difference Δ is positive and if the difference Δ is greater than a threshold value T(MS).
  • the threshold value T(MS) is an integer value, for example, eight (e.g., Δ1 > 8). If the condition is satisfied, the procedure proceeds along the YES branch to a step 56. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 58.
  • once the filter strength is set to the level "S," it does not change unless another condition is satisfied.
  • the procedure maintains the level “M” which is the default level for the filter strength.
  • the procedure determines if the difference Δ is negative and if the absolute value of the difference Δ is greater than a threshold value T(SM).
  • the threshold value T(SM) is an integer value, for example, five (e.g., abs(Δ2) > 5). If the condition is satisfied, the procedure proceeds along the YES branch to a step 64. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 62. As indicated in the step 62, the procedure maintains the filter strength at the level "S."
  • once the filter strength is set to the level "M," it does not change unless another condition is satisfied.
  • the procedure determines if the difference Δ is negative and if the absolute value of the difference Δ is greater than a threshold value T(MW).
  • T(MW) is an integer value, for example, ten (e.g., abs(Δ3) > 10). If the condition is satisfied, the procedure proceeds along the YES branch to a step 70. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 68. As indicated in the step 68, the procedure maintains the filter strength at the level "M."
  • once the filter strength is set to the level "W," it does not change unless another condition is satisfied.
  • the procedure determines if the difference Δ is positive and if the difference Δ is greater than a threshold value T(WM).
  • the threshold value T(WM) is an integer value, for example, three (e.g., Δ4 > 3). If the condition is satisfied, the procedure proceeds along the YES branch to a step 76. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 74. As indicated in the step 74, the procedure maintains the filter strength at the level "W."
  • once the filter strength is set to the level "M," it does not change unless one of the conditions is satisfied.
  • the procedure changes the filter strength to the level “W” as described with reference to the step 66 .
  • the procedure determines whether the sequence of frames has ended (end of sequence “EOS”). If the sequence has not yet ended, the procedure returns along the NO branch to the step 54 . Otherwise, the procedure ends at a step 80 .
  • the video compression system 1 and the method of filtering in accordance with the present invention provide for a reduction of mosquito artifacts and blocking artifacts without distorting the pictures of the video output sequence 10, so that the video movie as a whole is visually more pleasing to the viewers' eyes.
  • a hysteresis prevents the filter strength from changing when the motion activity changes only briefly.
  • the threshold values and threshold numbers referred to above are exemplary. Different threshold values and threshold numbers may be used, for example, to modify the hysteresis of the postfilter module 20.
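The threshold logic of the steps described above can be modeled as a small three-state machine. The following is a minimal sketch, assuming the example thresholds T(MS)=8, T(SM)=5, T(MW)=10 and T(WM)=3; the function name and the sign convention for the counter difference (first counter minus second counter) are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch of the filter-strength hysteresis of FIG. 4.
# "delta" is the difference between the two activity counters,
# evaluated once per frame; positive delta reflects sustained high activity.

T_MS = 8   # Medium -> Strong when delta > T_MS
T_SM = 5   # Strong -> Medium when delta is negative and abs(delta) > T_SM
T_MW = 10  # Medium -> Weak when delta is negative and abs(delta) > T_MW
T_WM = 3   # Weak -> Medium when delta > T_WM

def next_strength(level, delta):
    """Return the filter strength for the next frame: "S", "M" or "W"."""
    if level == "M":
        if delta > T_MS:
            return "S"
        if delta < 0 and abs(delta) > T_MW:
            return "W"
    elif level == "S":
        if delta < 0 and abs(delta) > T_SM:
            return "M"
    elif level == "W":
        if delta > T_WM:
            return "M"
    return level  # no threshold crossed: keep the current level
```

Starting at the default level "M," a sustained run of "H" frames drives the counter difference above eight and switches the filter to "S," while a brief change in motion activity leaves the level unchanged; this is the hysteresis effect described above.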

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video compression and decompression system has an input to receive an encoded video sequence and an output for a decoded video sequence. A video decoder is coupled to the input and decodes the received encoded video sequence. A filter module is coupled to the video decoder and the output and filters the decoded video sequence from the video decoder. The filter module has a variable filter strength that is a function of detected motion activity within the video sequence. The filter module filters coding artifacts, such as mosquito artifacts and blocking artifacts, from the decoded video sequence so that the displayed video is more pleasing to a viewer's eyes.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 09/731,474, filed Dec. 6, 2000, which is incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The invention pertains to the field of video compression and decompression systems. More particularly, the invention pertains to a system and method for reducing compression artifacts in a video compression and decompression system.
  • 2. Description of the Related Art
  • Systems for applications of video and visual communications transmit, process and store large quantities of video data. To create a video presentation, such as a video movie, a rendering video system displays the video data as a sequence of individual digital images, also referred to as “frames,” thereby simulating movement. In order to achieve a video presentation with an acceptable video quality, or to enable transmission and storage at all, the video systems process and modify the video data prior to transmission or storage. For instance, the video data is compressed and encoded to reduce the bit rate and the required bandwidth for storage and transmission of the images.
  • In a conventional video system a video encoder is used to compress and encode the video data and a video decoder is used to decompress and to decode the video data. The video encoder outputs video data that has a reduced bit rate and a reduced redundancy. That is, the technique of video compression removes spatial redundancy within a video frame or temporal redundancy between consecutive video frames. In accordance with known image compression standards, such as MPEG, MPEG-2 and JPEG, an image coding process typically includes performing a block based frequency transform, e.g., discrete cosine transform (DCT), on an image to be transmitted. The resulting DCT coefficients are quantized or mapped to different quantization steps to render an approximate representation thereof. If the available transmission bandwidth is relatively small, with respect to the complexity of the image to be transmitted, the size of the quantization steps needs to be relatively large. In that case, the resulting coarse quantization of the DCT coefficients introduces coding artifacts into the transmitted image and severely degrades the visual quality of the decoded sequence that may be displayed.
  • Examples of such artifacts include mosquito artifacts and blocking artifacts. Mosquito artifacts are defined as temporarily nonstationary impulses that appear around objects which are moving within a decompressed video sequence. The mosquito artifacts result from the coarse quantization of a prediction error signal. The majority of the energy contained in the prediction error signal is the result of a motion estimator's inability to distinguish between differently moving objects within the video sequence. For example, in videoconferencing applications the subject is generally against a stationary background. Since the motion estimator tries to match blocks of pixels between temporarily adjacent frames, the boundaries between moving objects and stationary background that fall within these blocks cannot be detected. This leads to a situation where either a part of the background is assumed to be moving, or a part of the moving object is assumed to be stationary. If these prediction errors are coarsely quantized, impulsive artifacts result that change over time and tend to swarm around the moving object, similar to a mosquito.
  • Blocking artifacts are defined as the introduction of artificial block boundaries into the decoded video sequence. These artifacts are due to the combination of quantization and dividing the prediction error signal into blocks. That is, since there exists an inverse relationship between spatial extent and frequency extent analogous to the inverse relationship that exists between time and frequency extent in Fourier analysis, the quantization errors that occur in the DCT domain are smeared across the corresponding spatial block. Furthermore, since each block is quantized separately, the errors are most visible at the block boundaries.
  • In order to reduce the effects of the coding artifacts, it is known to apply a postprocessing technique to the recovered image. Since the artifacts typically comprise high frequency components, decoders in systems that apply such postprocessing include a postprocessor having a low-pass filter to filter out those components in the recovered image. However, the quality of the postprocessed image is dependent upon the selected parameters and may drastically vary from one set of parameters to another.
  • Other systems use postprocessing filters that are spatially adaptive. These spatially adapted filters rely on local signal estimates and local noise power estimates to alter their responses. However, such an estimation of the noise power based on the quantization step size is not a reliable indicator as to the spatial location of mosquito artifacts and blocking artifacts within the decompressed video. For example, oversmoothing or blurring of the decompressed video occurs due to inaccurate estimates of the compressed video's signal-to-noise ratio.
  • Thus, there is a need for a video compression and decompression system and a method which suppress mosquito and blocking artifacts to improve upon the video quality a viewer perceives.
  • SUMMARY OF THE INVENTION
  • An aspect of the present invention involves a decoder apparatus for a video compression and decompression system having an input to receive an encoded video sequence and an output for a decoded video sequence. A video decoder is coupled to the input and decodes the received encoded video sequence. A filter module is coupled to the video decoder and the output and filters a decoded video sequence from the video decoder. The filter module has a variable filter strength that is a function of detected motion activity within the video sequence.
  • Another aspect of the invention involves a filter module for a video compression and decompression system. The filter module has an input to receive a decoded video sequence and an output for the decoded video sequence. An activity counter determines motion activity within the decoded video sequence. A threshold detector is coupled to the activity counter and adjusts a filter strength as a function of the determined motion activity within the decoded video sequence. The threshold detector selectively adjusts the filter strength to one of a predetermined number of levels.
  • A further aspect of the invention involves a video compression and decompression system having an input to receive an encoded video sequence and an output for a decoded video sequence. A video decoder is coupled to the input and decodes the received encoded video sequence. A filter module is coupled to the video decoder and the output and filters a decoded video sequence from the video decoder. The filter module has a variable filter strength that is a function of detected motion activity within the video sequence.
  • Another aspect of the invention involves a method of filtering a decoded video sequence in a video compression and decompression system. The method receives a decoded video sequence and determines a motion activity of each frame of the decoded video sequence. The method categorizes each frame as a frame of high activity or as a frame of low activity and adjusts a filter strength of a filter to filter the decoded video sequence as a function of the motion activity.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other aspects, advantages, and novel features of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings.
  • FIG. 1 is a high-level block diagram of a video compression and decompression system having an encoder apparatus and a decoder apparatus that includes a postfilter module.
  • FIG. 2 shows an exemplary embodiment of the postfilter module.
  • FIGS. 3A and 3B are diagrams illustrating motion activity within a video sequence as a function of time.
  • FIG. 3C is a diagram illustrating the filter strength of the postfilter module as a function of time.
  • FIG. 4 is a flow diagram illustrating a procedure for varying the filter strength illustrated in FIG. 3C.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • In the following description, reference is made to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific embodiments in which the invention may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention. Where possible, the same reference numbers will be used throughout the drawings to refer to the same or like components. Numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, it will be obvious to one skilled in the art that the present invention may be practiced without the specific details or with certain alternative equivalent devices and methods to those described herein. In other instances, well-known methods, procedures, components, and devices have not been described in detail so as not to unnecessarily obscure aspects of the present invention.
  • FIG. 1 is a high-level block diagram of a video compression and decompression system 1 (hereinafter “video compression system 1”) having an encoder apparatus 2 and a decoder apparatus 4 that is coupled to the encoder apparatus 2 through a medium 6. The encoder apparatus 2 includes a video encoder 12 and a buffer 14. The decoder apparatus 4 includes a buffer 16, a video decoder 18 and a postfilter module 20. The postfilter module 20 provides for a variable filter strength in accordance with the present invention. The filter strength varies as a function of a determined motion activity within a video sequence in order to remove coding artifacts (e.g., mosquito artifacts and blocking artifacts) as explained below in greater detail.
  • The encoder apparatus 2 encodes a video input sequence 8 (VIDEO IN) to generate an encoded and thus compressed representation in one of a number of possible formats. The format may be an interleaved format tailored for “live” streaming of the encoded representation. The format may also be a single file format in which each of the encoded representations is stored in a contiguous block within one file. This format is tailored for the “static” case in which a file is created for subsequent streaming by a server.
  • The video input sequence 8 to the encoder apparatus 2 may be either a live signal, e.g., provided by a video camera, or a prerecorded sequence in a number of possible formats. The video input sequence 8 includes frames of a digital video, an audio segment consisting of digital audio, combinations of video, graphics, text, and/or audio (multimedia applications), or analog forms of the aforementioned. If necessary, conversions can be applied to various types of input signals such as analog video, or previously compressed and encoded video to produce an appropriate input to the encoder apparatus 2. In one embodiment, the encoder apparatus 2 may accept video in RGB or YUV formats. The encoder apparatus 2, however, may be adapted to accept any format of input as long as an appropriate conversion mechanism is supplied. Conversion mechanisms for converting a signal in one format to a signal in another format are well known in the art.
  • The medium 6 may be a storage device or a transmission medium. In one embodiment, the video compression system 1 may be implemented on a computer. The encoder apparatus 2 sends an encoded video stream (representation) to the medium 6 that is implemented as a storage device. The storage device may be a video server, a hard disk drive, a CD rewriteable drive, a read/write DVD drive, or any other device capable of storing and allowing the retrieval of encoded video data. The storage device is connected to the decoder apparatus 4, which can selectively read from the storage device and decode the encoded video sequence. As the decoder apparatus 4 decodes a selected one of the encoded video sequences, it generates a reproduction of the video input sequence 8, for example, for display on a computer monitor or screen.
  • In another embodiment, the medium 6 provides a connection to another computer, which may be a remote computer, that receives the encoded video sequence. The medium 6 may be a network connection such as a LAN, a WAN, the Internet, or the like. The decoder apparatus 4 within the remote computer decodes the encoded representations contained therein and may generate a reproduction of the video input sequence 8 on a screen or a monitor of the remote computer.
  • Aspects of the video compression system 1 illustrated in FIG. 1 and described above can be combined and supplemented to achieve other embodiments. Numerous other implementations are consistent with the scope of this invention. Such other implementations need not be restricted to video, but may include audio or other forms of media as well.
  • The video encoder 12 performs, for example, a discrete cosine transform (DCT) to encode and compress the video sequence 8. Briefly, the video encoder 12 converts the video input sequence 8 from the time domain into the frequency domain. The output of the video encoder 12 is a set of signal amplitudes, for example, called “DCT coefficients” or transform coefficients. A quantizer receives the DCT coefficients and, during encoding, assigns each range (or step size) of DCT coefficient values a single value, such as a small integer. Quantization allows data to be represented more compactly, but results in the loss of some data. Quantization on a finer scale results in a less compact representation (higher bit-rate), but also involves the loss of less data. Quantization on a more coarse scale results in a more compact representation (lower bit-rate), but also involves more loss of data.
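The trade-off between quantization step size and data loss can be illustrated with a short numerical sketch. This is not part of the patent; the coefficient values and step sizes are arbitrary examples, and mapping each coefficient to the nearest multiple of the step is one common quantizer choice.

```python
def quantize(coeffs, step):
    """Map each DCT coefficient to the nearest multiple of the step size."""
    return [step * round(c / step) for c in coeffs]

coeffs = [12.4, -3.4, 0.7, 45.2]

fine = quantize(coeffs, 2)     # small step: less loss, higher bit rate
coarse = quantize(coeffs, 16)  # large step: more loss, lower bit rate
```

With the fine step the reconstructed values stay close to the originals; with the coarse step small coefficients collapse to zero and large ones shift noticeably, which is the coarse quantization that introduces the coding artifacts described above.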
  • The pre-existing video encoding techniques typically break up a frame (picture) into smaller blocks of pixels called macroblocks. Each macroblock can consist of a matrix of pixels, typically a 16×16 matrix, defining the unit of information at which encoding is performed. The matrix of pixels is therefore referred to as a 16×16 macroblock. These video encoding techniques usually break each 16×16 macroblock further up into smaller matrices of pixels, for example, into 8×8 matrices of pixels or 4×4 matrices of pixels. Such matrices are hereinafter referred to as subblocks. In one embodiment of the present invention, a 16×16 macroblock is divided into 4×4 subblocks. Those skilled in the art will appreciate that the present invention is equally applicable to systems that use 8×8 subblocks, 4×4 subblocks or only 16×16 macroblocks without breaking them up into subblocks.
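The macroblock and subblock partition described above can be sketched with a small hypothetical helper (the function name and return shape are assumptions for illustration):

```python
def partition_counts(width, height, mb=16, sb=4):
    """Return (macroblocks per frame, subblocks per macroblock)
    for a frame partitioned into mb x mb macroblocks, each divided
    into sb x sb subblocks."""
    macroblocks = (width // mb) * (height // mb)
    subblocks_per_mb = (mb // sb) * (mb // sb)
    return macroblocks, subblocks_per_mb

# A 640x480 frame partitioned into 16x16 macroblocks yields 40 x 30
# macroblocks, each containing 16 subblocks of size 4x4.
```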
  • Further, the pre-existing encoding techniques provide for motion compensation and motion estimation using motion vectors. The motion vectors describe the direction, expressed through an x-component and a y-component, and the amount of motion of the 16×16 macroblocks, or their respective subblocks, and are transmitted to the decoder as part of the bit stream. Motion vectors are used for bidirectionally encoded pictures (B-pictures) and predicted pictures (P pictures) as known in the art.
  • The buffer 14 of the encoder apparatus 2 receives the encoded and compressed video sequence (hereinafter “encoded video sequence”) from the video encoder 12 and adjusts the bit rate of the encoded video sequence before it is sent to the medium 6. Buffering may be required because individual video images may contain varying amounts of information, resulting in varying coding efficiencies from image to image. As the buffer 14 has a limited size, a feedback loop to the quantizer may be used to avoid overflow or underflow of the buffer 14. The bit-rate of the representation is the rate at which the representation data must be processed in order to present the representation in real time. A higher bit-rate representation of the video input sequence 8 generally comprises more data than a lower bit-rate representation of the same sequence. The most pertinent application of the bit-rate measure is in determining the rate at which the data of the representation should be streamed in order to present the representation in real time. Real-time playback of a higher bit-rate representation of a video sequence requires that the data of the representation be streamed and/or decoded at a faster rate than that required for a lower bit-rate representation of the same sequence. Thus, when streaming for real-time playback, a higher bit-rate representation requires a higher bandwidth communication link than a lower bit-rate representation.
  • The decoder apparatus 4 performs the inverse function of the encoder apparatus 2. The buffer 16 also serves to adjust the bit rate of the incoming encoded video sequence. The video decoder 18 decodes and decompresses the incoming video sequence, reconstructing the video sequence, and outputs a decoded and decompressed video sequence 24 (hereinafter “decoded video sequence 24”). In addition, the decoder apparatus 4 includes the postfilter module 20 that removes coding artifacts such as mosquito artifacts and blocking artifacts. The causes for these coding artifacts are explained above. The postfilter module 20 in accordance with the present invention reduces these artifacts without distorting the pictures which are output as a video output sequence 10. The video movie is therefore visually more pleasing to the viewers' eyes.
  • FIG. 2 shows the video decoder 18 coupled to an exemplary embodiment of the postfilter module 20. The postfilter module 20 includes an activity counter 22, a threshold detector 26 and a filter 30. An output 19 of the video decoder 18 is connected to an input 21 of the activity counter 22 and an output 23 of the video decoder 18 is connected to an input 29 of the filter 30. Further, the activity counter 22 is connected to an input 25 of the threshold detector 26, which is also connected to an input 27 of the filter 30. As discussed below in more detail, the input 21 of the activity counter 22 receives a sequence of motion vectors, the input 29 of the filter 30 receives a bitstream representing the decoded video sequence and the input 27 of the filter 30 receives a filter strength control signal. The filter 30 outputs the video output sequence 10.
  • It is contemplated that the postfilter module 20 is divided into the activity counter 22, the threshold detector 26 and the filter 30 for illustrative purposes only. Those skilled in the art will appreciate that such a division may not be required to implement the functionality of the postfilter module 20 in accordance with the present invention. For instance, the functionality of the postfilter module 20 may be implemented using a software module or a microprocessor that incorporates the functionalities of the components (22, 26, 30) shown in FIG. 2.
  • The video decoder 18 receives a bit stream representing the encoded video sequence from the buffer 16 (FIG. 1). In one embodiment, the video decoder 18 is a conventional MPEG decoder that includes a decoder controller, a VLC decoder (Variable Length Coding, VLC) and a reconstruction module. The operation and function of these components are known to those skilled in the art. These components are therefore described only to the extent believed to be helpful for a complete understanding of the present invention. For a more extensive description of an MPEG decoder, reference is made to generally available MPEG documents and publications. For instance, Barry G. Haskell et al., “Digital Video: An Introduction to MPEG-2,” Chapman & Hall, ISBN 0-412-08411-2, Chapter 8, pages 156-182.
  • The decoder controller receives the bit stream and derives control signals for the reconstruction module and the VLC decoder from the bit stream. Further, the decoder controller separates the encoded video data from the bit stream and inputs the encoded data to the VLC decoder. The decoder controller outputs control signals and status signals that include among others “block position,” “frame select,” “intra select” and “current picture” (not shown).
  • The VLC decoder obtains from the encoded video data the quantized DCT coefficients. Further, the VLC decoder obtains the motion vectors of each picture and a code word indicating the mode used to encode the video sequence (e.g., bidirectionally coded pictures (B-pictures) and predicted pictures (P-pictures)). In accordance with the present invention, the motion vectors are available for the activity counter 22 at the output 19 of the video decoder 18.
  • The reconstruction module includes a dequantizer unit and an IDCT unit for calculating the inverse DCT. Using the encoded video data and the control signals provided by the decoder controller, the reconstruction module rebuilds each picture and, thus, creates the decoded video sequence 24 that is input to the filter 30.
  • The activity counter 22 receives the motion vectors from the video decoder 18 on a frame-by-frame basis. Each macroblock of a frame (e.g., a 16×16 macroblock) is further divided into subblocks, for instance, into 4×4 subblocks, wherein each 4×4 subblock includes a motion vector representing motion of the 4×4 subblock, if any. The activity counter 22 uses these motion vectors to determine whether the presently analyzed frame (picture) is a high activity frame or a low activity frame.
  • In one embodiment, the activity counter 22 includes a processor that analyzes these motion vectors of the subblocks. If either the difference of the x-components or the difference of the y-components between the motion vector of the present subblock and the motion vector of a neighboring subblock is greater than a predetermined integer, the present subblock is categorized as an active subblock. That is, the predetermined integer is selected as a parameter to determine whether there is motion in the present subblock. The processor determines that there is motion when at least one of the differences (x-components, y-components) is greater than the predetermined integer. For instance, the predetermined integer is selected to be in a range between “1” and “3.” In a preferred embodiment, the predetermined integer is “1.” This procedure is repeated, for example, sixteen times and each 4×4 subblock is either in the active category or in the inactive category. The processor determines the number of active subblocks for each macroblock.
  • If the number of active subblocks is greater than a predetermined threshold number, the processor characterizes this macroblock as an active macroblock. The predetermined threshold number is selected to define when a macroblock is active. That is, when a sufficient number of subblocks is active, the macroblock is defined as active. For instance, if the macroblock is divided into 4×4 subblocks, the predetermined threshold number is four so that the macroblock is active when more than four subblocks (i.e., 25%) out of 16 subblocks are active. In other embodiments, the macroblock may be defined as active when more subblocks are active (e.g., within a range between 30% and 50%), or when fewer subblocks are active (e.g., within a range between 10% and 25%).
  • The processor counts the number of active macroblocks for the whole frame and compares the number of active macroblocks to a defined threshold value. If the number of active macroblocks is higher than the defined threshold value, the frame is a high activity frame (hereinafter “H” frame.) If the number of active macroblocks is equal to or lower than the defined threshold value, the processor characterizes the frame as a low activity frame (hereinafter “L” frame.)
  • The total number of macroblocks in a frame depends on the size of the video frames or the resolution of the video frames. For instance, if the resolution is 640×480, the number of macroblocks is (640/16)×(480/16)=1200. The defined threshold value may then be selected to be 300. That is, the frame is an “H” frame if more than 25% of the macroblocks are active. It is contemplated that the defined threshold value may be selected so that a frame is active when between 10% and 40% of the macroblocks are active.
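The three-level decision described above (subblock, then macroblock, then frame) can be sketched as follows. This is a minimal sketch assuming the example parameters from the text: a motion-vector component difference of more than one counts as motion, a macroblock is active when more than four of its sixteen 4×4 subblocks are active, and a frame is an “H” frame when more than 25% of its macroblocks are active. The function names are hypothetical.

```python
MV_DIFF = 1             # component difference that counts as motion
ACTIVE_SUBBLOCKS = 4    # macroblock active if more than this many subblocks are
ACTIVE_MB_FRACTION = 0.25  # frame is "H" if more than 25% of macroblocks active

def subblock_is_active(mv, neighbor_mvs):
    """A subblock is active if its motion vector differs from any
    neighboring subblock's vector by more than MV_DIFF in x or y."""
    x, y = mv
    return any(abs(x - nx) > MV_DIFF or abs(y - ny) > MV_DIFF
               for nx, ny in neighbor_mvs)

def macroblock_is_active(active_subblock_count):
    return active_subblock_count > ACTIVE_SUBBLOCKS

def frame_is_high_activity(active_macroblocks, total_macroblocks):
    return active_macroblocks > ACTIVE_MB_FRACTION * total_macroblocks

# For a 640x480 frame there are (640 // 16) * (480 // 16) = 1200 macroblocks,
# so the frame threshold is 300 active macroblocks.
```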
  • For each “H” frame the processor increases a counter and for each “L” frame the processor decreases this counter. A first counter value, therefore, increases and decreases as a function of the “H” frames and the “L” frames. In one embodiment, the processor implements two counters. The first counter operates as described. The second counter, however, decreases with each “H” frame and increases with each “L” frame (FIG. 3A). A second counter value, therefore, decreases and increases as a function of the “H” frames and the “L” frames.
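The two mirrored counters can be sketched as follows; the offset value and per-frame step size are assumptions, not values from the patent.

```python
def run_counters(frames, offset=0, step=1):
    """Track the two activity counters over a sequence of 'H'/'L' frames.
    The first counter rises on "H" frames and falls on "L" frames;
    the second counter mirrors it, as in FIG. 3A."""
    first, second = offset, offset
    for frame in frames:
        if frame == "H":
            first += step
            second -= step
        else:
            first -= step
            second += step
    return first, second

# After five "H" frames followed by ten "L" frames, the difference
# (first counter minus second counter) is 2 * (5 - 10) = -10.
```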
  • Those skilled in the art will appreciate that, in another embodiment, the activity counter 22 may be implemented using a processor and separate counters coupled to the processor. In yet another embodiment, the activity counter 22 may be implemented as a software module or a combination of a software module and firmware.
  • The threshold detector 26 is coupled to the activity counter 22 and includes in one embodiment a comparator unit (e.g., including operational amplifiers). The comparator unit receives, for example, the two counter values from the activity counter 22 and determines the difference between the two counter values. The comparator unit compares this difference to predetermined threshold values as described below with respect to FIGS. 3A, 3C. The threshold values may be determined through resistor circuits coupled to the comparator unit. In another embodiment, the threshold values may be stored in a programmable memory from which the comparator unit may read the respective threshold value.
  • The filter 30 is a digital filter that filters high-frequency components, such as pulses, from the decoded video sequence 24. In accordance with the present invention, the filter 30 has a variable filter strength that depends upon the motion activity within a picture. The filter strength can be adjusted by varying the filter coefficients of the filter 30. In one embodiment, the filter 30 may be adjusted to have one of a number of predetermined levels representing different filter strengths. In one embodiment, the filter 30 has three levels, Strong (“S”), Medium (“M”) or Weak (“W”) as shown in FIG. 3C. It is contemplated that more than three levels may be defined.
  • In one embodiment, the filter 30 may include or may be associated with a memory that stores the respective filter coefficients for the levels Strong, Medium and Weak. When the threshold detector 26 determines that the filter strength must be changed, the generated control signal (e.g., a code word corresponding to one of the levels) addresses the memory, and the filter coefficients for the selected filter strength are loaded into the filter 30.
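  • This coefficient lookup can be sketched as a small table of per-level filter taps applied to one row of pixels; the 3-tap kernels below are illustrative assumptions, as the specification does not give concrete coefficient values:

```python
# Hypothetical per-level coefficient sets: a stronger level spreads more
# weight onto the neighbors, i.e. heavier low-pass filtering.
COEFFS = {
    "S": [1 / 4, 1 / 2, 1 / 4],    # Strong
    "M": [1 / 8, 3 / 4, 1 / 8],    # Medium
    "W": [1 / 16, 7 / 8, 1 / 16],  # Weak (nearly pass-through)
}

def postfilter_row(samples, level):
    """Apply a symmetric 3-tap low-pass filter to one row of pixel values.

    The coefficient set is looked up by the level code word, mirroring the
    memory lookup driven by the threshold detector's control signal. Edge
    pixels are handled by clamping (repeating the border sample).
    """
    c = COEFFS[level]
    n = len(samples)
    return [
        c[0] * samples[max(i - 1, 0)]
        + c[1] * samples[i]
        + c[2] * samples[min(i + 1, n - 1)]
        for i in range(n)
    ]
```

Note how an isolated pulse is attenuated most at level "S" and least at level "W", matching the intended behavior of filtering high-frequency components such as pulses.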
  • FIG. 3A is a diagram illustrating the motion activity within the video sequence 24 as determined by the activity counter 22. Although the activity counter 22 determines the occurrence of “high” and “low” activities as discrete events, FIG. 3A shows two graphs 30, 32 representing these events as a continuous function of time (t). The graph 30 is shown as a solid line and the graph 32 is shown as a broken line, wherein the graph 32 is a mirror image of the graph 30 relative to a horizontal axis through an offset value O. In another illustration, the offset value O may be zero and the horizontal axis may be the X-axis. Thus, starting at the offset value O, the graph 30 increases at a predetermined rate and the graph 32 decreases at the same rate. The graphs 30, 32 intersect, for example, at t=t3, while the graph 30 is decreasing and the graph 32 is increasing. FIG. 3A further shows difference values Δ1, Δ2, Δ3, Δ4 as described below.
  • FIG. 3B is a diagram illustrating a sequence of frames having high and low activities as a function of time (t). A frame with a high activity is represented as “H” and a frame with a low activity is represented as “L.” FIG. 3B is aligned with FIG. 3A to illustrate the increasing and decreasing graphs 30, 32 as a function of the “H” and “L” frames. The illustrated sequence of frames (from left to right) has five “H” frames, ten “L” frames, eight “H” frames and eight “L” frames. Corresponding to these activities, the graph 30 in FIG. 3A increases during a period between t=t0 and t=t2, decreases during a period between t=t2 and t=t6, increases during a period between t=t6 and t=t8 and decreases during a period between t=t8 and t=t9.
  • FIG. 3C is a diagram illustrating the filter strength of the postfilter module 20 as a function of time. In one embodiment, the filter strength may be set to have one of three levels Strong (“S”), Medium (“M”) or Weak (“W”). The postfilter module 20 is configured so that, as a default setting, the filter strength is at the level “M.” From this level “M” the filter strength may change to one of the levels “S” and “W.” For example, at t=t1, the filter strength changes from the level “M” to the level “S” and at t=t4, the filter strength returns to the level “M.” At t=t5, the filter strength changes from the level “M” to the level “W” and at t=t7, the filter strength returns to the level “M.” At t=t9, the filter strength changes again from the level “M” to the level “W.”
  • FIG. 4 is a flow diagram illustrating a procedure 40 for varying the filter strength illustrated in FIG. 3C. In describing the procedure, reference is made to FIGS. 2, 3A and 3B. During initialization of the procedure, the activity counter is reset to its offset value O and the filter strength is set to the level “M,” which in one embodiment is a default level, as indicated in a step 42.
  • Proceeding to a step 44, the procedure determines the motion activity within the sequence of frames. That is, the procedure determines whether the presently analyzed frame is an “H” frame or an “L” frame, for example, as illustrated in FIG. 3B. For each subblock within a 16×16 macroblock, the procedure looks at a present subblock and its neighboring subblocks and uses the differences between the motion vector (x, y components) of the present subblock and the motion vectors of the neighboring subblocks to determine whether the present subblock is an active subblock or an inactive subblock. This procedure is repeated sixteen times so that each subblock is either in the active category or in the inactive category.
  • The procedure determines the number of active subblocks for each macroblock. A comparison of the number of active subblocks with a predetermined threshold value is used to determine whether the macroblock is an active or an inactive macroblock. In one embodiment, the predetermined threshold value is four. That is, if more than 25% of the 16 subblocks are in the active category, the macroblock is an active macroblock. It is contemplated that the threshold value may be set at a higher number.
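  • The subblock and macroblock classification described above can be sketched as follows. The more-than-four-of-sixteen macroblock rule is from the text; the motion-vector difference metric (sum of absolute x/y component differences) and its threshold are illustrative assumptions, since the specification does not fix a particular metric:

```python
def subblock_active(mv, neighbor_mvs, mv_threshold=1):
    """Classify a subblock as active when its motion vector differs enough
    from at least one neighboring subblock's vector. The sum-of-absolute-
    differences metric and the threshold of 1 are assumptions."""
    x, y = mv
    return any(abs(x - nx) + abs(y - ny) > mv_threshold
               for nx, ny in neighbor_mvs)

def macroblock_active(active_subblocks, threshold=4):
    """A macroblock is active when more than `threshold` of its 16
    subblocks (more than 25% for threshold=4) are in the active category."""
    return active_subblocks > threshold
```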
  • Proceeding to a step 46, the procedure determines the number of active macroblocks for the whole frame and compares this number to a defined threshold value. If the number of active macroblocks is higher than the defined threshold value, the frame is an “H” frame. In one embodiment, the defined threshold value is 25%, i.e., if the number of active macroblocks in a frame is greater than 25% of the number of macroblocks in the whole frame, the frame is an “H” frame. If it is an “H” frame, the procedure proceeds along the YES branch to a step 50, and if it is not an “H” frame, the procedure proceeds along the NO branch to a step 48.
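  • The frame-level test of step 46 can be sketched in one line; the function name is hypothetical and the 25% fraction is the example value from the text:

```python
def classify_frame(active_macroblocks, total_macroblocks, fraction=0.25):
    """Label a frame "H" when more than `fraction` of its macroblocks are
    active, otherwise "L"."""
    return "H" if active_macroblocks > fraction * total_macroblocks else "L"
```

The comparison is strictly greater-than, so a frame with exactly 25% active macroblocks is still an "L" frame.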
  • In the step 50, for each “H” frame the procedure increases a first counter and decreases a second counter. As shown in FIG. 3A, the first counter increases during the periods between t=t0 and t=t2, t=t6 and t=t8 as shown through the increasing graph 30, and the second counter decreases during these periods as shown through the decreasing graph 32. These periods include the “H” frames as shown in FIG. 3B.
  • In the step 48, for each “L” frame the procedure decreases the first counter and increases the second counter. As shown in FIG. 3A, the first counter decreases during the periods between t=t2 and t=t6, t=t8 and t=t9 as shown through the decreasing graph 30, and the second counter increases during these periods as shown through the increasing graph 32. These periods include the “L” frames as shown in FIG. 3B.
  • Proceeding to a step 52, the procedure determines a difference Δ between events counted by the first counter and events counted by the second counter. FIG. 3A shows exemplary differences: a difference Δ1 is shown at t=t1, a difference Δ2 is shown at t=t4, a difference Δ3 is shown at t=t5, a difference Δ4 is shown at t=t7 and the difference Δ3 is shown again at t=t9.
  • Proceeding to a step 54, the procedure determines if the difference Δ is positive and if the difference Δ is greater than a threshold value T(MS). In one embodiment, the threshold value T(MS) is an integer value, for example, eight (e.g., Δ1>8). If the condition is satisfied, the procedure proceeds along the YES branch to a step 56. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 58.
  • In the step 56, with the condition being satisfied, the procedure changes the filter strength from the level “M” to the level “S” at t=t1 as shown in FIG. 3C. Once the filter strength is set to the level “S” the filter strength does not change unless another condition is satisfied. In FIG. 3C the filter strength remains at the level “S” between t=t1 and t=t4.
  • In the step 58, with the condition being not satisfied, the procedure maintains the level “M” which is the default level for the filter strength.
  • Proceeding to a step 60, the procedure determines if the difference Δ is negative and if the absolute value of the difference Δ is greater than a threshold value T(SM). In one embodiment, the threshold value T(SM) is an integer value, for example, five (e.g., abs (Δ2)>5). If the condition is satisfied, the procedure proceeds along the YES branch to a step 64. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 62. As indicated in the step 62, the procedure maintains the filter strength at the level “S.”
  • In the step 64, with the condition of step 60 being satisfied, the procedure changes the filter strength from the level “S” to the level “M” at t=t4 as shown in FIG. 3C. Once the filter strength is set to the level “M” the filter strength does not change unless another condition is satisfied. In FIG. 3C the filter strength remains at the level “M” between t=t4 and t=t5.
  • Proceeding to a step 66, with the filter strength being at the level “M” the procedure determines if the difference Δ is negative and if the absolute value of the difference Δ is greater than a threshold value T(MW). In one embodiment, the threshold value T(MW) is an integer value, for example, ten (e.g., abs (Δ3)>10). If the condition is satisfied, the procedure proceeds along the YES branch to a step 70. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 68. As indicated in the step 68, the procedure maintains the filter strength at the level “M”.
  • In the step 70, with the condition of step 66 being satisfied, the procedure changes the filter strength from the level “M” to the level “W” at t=t5 as shown in FIG. 3C. Once the filter strength is set to the level “W” the filter strength does not change unless another condition is satisfied. In FIG. 3C the filter strength remains at the level “W” between t=t5 and t=t7.
  • Proceeding to a step 72, with the filter strength being at the level “W” the procedure determines if the difference Δ is positive and if the difference Δ is greater than a threshold value T(WM). In one embodiment, the threshold value T(WM) is an integer value, for example, three (e.g., Δ4>3). If the condition is satisfied, the procedure proceeds along the YES branch to a step 76. If the condition is not satisfied, the procedure proceeds along the NO branch to a step 74. As indicated in the step 74, the procedure maintains the filter strength at the level “W.”
  • In the step 76, with the condition of the step 72 being satisfied, the procedure changes the filter strength from the level “W” to the level “M” at t=t7 as shown in FIG. 3C. Once the filter strength is set to the level “M” the filter strength does not change unless one of the conditions is satisfied. In FIG. 3C the filter strength remains at the level “M” between t=t7 and t=t9. At t=t9, the procedure changes the filter strength to the level “W” as described with reference to the step 66.
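  • The four threshold tests of steps 54, 60, 66 and 72 amount to a small state machine over the three levels. The sketch below condenses them into a single transition function, using the example threshold values from the text; evaluating the checks as mutually exclusive per frame is a simplifying assumption:

```python
def next_level(level, delta, t_ms=8, t_sm=5, t_mw=10, t_wm=3):
    """Advance the filter strength one step, given the counter difference.

    Levels: "W" (Weak), "M" (Medium, the default), "S" (Strong). The
    default thresholds are the example values T(MS)=8, T(SM)=5, T(MW)=10
    and T(WM)=3. A negative delta below -t_mw is equivalent to "delta is
    negative and abs(delta) > t_mw".
    """
    if level == "M":
        if delta > t_ms:
            return "S"   # step 56: sustained high activity
        if delta < -t_mw:
            return "W"   # step 70: sustained low activity
    elif level == "S" and delta < -t_sm:
        return "M"       # step 64: activity has fallen off
    elif level == "W" and delta > t_wm:
        return "M"       # step 76: activity has picked up
    return level         # steps 58, 62, 68, 74: no change
```

The asymmetric thresholds (e.g., a difference greater than 8 to enter "S" but a negative difference exceeding 5 in magnitude to leave it) are what produce the hysteresis behavior of the postfilter module 20.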
  • Proceeding to a step 78, the procedure determines whether the sequence of frames has ended (end of sequence “EOS”). If the sequence has not yet ended, the procedure returns along the NO branch to the step 44 to analyze the next frame. Otherwise, the procedure ends at a step 80.
  • The video compression system 1 and the method of filtering in accordance with the present invention provide for a reduction of mosquito artifacts and blocking artifacts without distorting the pictures of the video output sequence 10, so that the video movie as a whole is visually more pleasing to viewers.
  • Furthermore, the postfilter module 20 is configured to implement a hysteresis for the levels of the filter strength. That is, the filter strength changes from the level “M” to the level “S” at t=t1 when the difference Δ1 is positive and greater than the threshold value T(MS), but returns to the level “M” only when a different condition is satisfied, namely that the difference Δ2 is negative and the absolute value of Δ2 is greater than the threshold value T(SM). Such a hysteresis prevents the filter strength from changing when the motion activity changes only briefly.
  • Those skilled in the art will appreciate that the threshold values and threshold numbers referred to above are of exemplary nature. Different threshold values and threshold numbers may be used to, for example, modify the hysteresis of the postfilter module 20.
  • While the above detailed description has shown, described and identified several novel features of the invention as applied to a preferred embodiment, it will be understood that various omissions, substitutions and changes in the form and details of the described embodiments may be made by those skilled in the art without departing from the spirit of the invention. Accordingly, the scope of the invention should not be limited to the foregoing discussion, but should be defined by the appended claims.

Claims (20)

1. A method comprising:
receiving an encoded video sequence;
decoding the received encoded video sequence into a decoded video sequence that contains compression artifacts;
categorizing each frame of the decoded video sequence as a first activity frame or as a second activity frame; and
removing with a filter module the compression artifacts in the decoded video sequence, the filter module having a variable filter strength being a function of the motion activity within the decoded video sequence.
2. The method of claim 1, wherein the category of the first activity frame is a high activity frame and the category of the second activity frame is a low activity frame.
3. The method of claim 1, additionally comprising selectively adjusting the filter strength to one of a high level, a medium level and a weak level.
4. The method of claim 3, wherein the medium level is a default level.
5. An apparatus, comprising:
a video decoder configured to decode a received encoded video sequence into a decoded video sequence that contains compression artifacts; and
a filter module coupled with the video decoder, said filter module configured to filter the compression artifacts in the decoded video sequence, the filter module being configured to remove compression artifacts and having a variable filter strength that is a function of the motion activity within the video sequence.
6. The apparatus of claim 5, wherein the filter module includes a threshold detector configured to generate a control signal to adjust the filter strength.
7. The apparatus of claim 6, wherein the control signal adjusts the filter strength to one of a high level, a medium level and a weak level.
8. The apparatus of claim 7, wherein the medium level is a default level.
9. A method comprising:
receiving an encoded video sequence;
decoding the received encoded video sequence into a decoded video sequence that contains compression artifacts; and
removing with a filter module the compression artifacts in the decoded video sequence, the filter module having a variable filter strength being a function of the motion activity within the decoded video sequence.
10. The method of claim 9, wherein the category of the first activity frame is a high activity frame and the category of the second activity frame is a low activity frame.
11. The method of claim 9, additionally comprising selectively adjusting the filter strength to one of a high level, a medium level and a weak level.
12. The method of claim 11, wherein the medium level is a default level.
13. A program storage device storing instructions that when executed perform the steps comprising:
receiving an encoded video sequence;
decoding the received encoded video sequence into a decoded video sequence that contains compression artifacts;
categorizing each frame of the decoded video sequence as a first activity frame or as a second activity frame; and
removing with a filter module the compression artifacts in the decoded video sequence, the filter module having a variable filter strength being a function of the motion activity within the decoded video sequence.
14. The program storage device of claim 13, wherein the category of the first activity frame is a high activity frame and the category of the second activity frame is a low activity frame.
15. The program storage device of claim 13, wherein adjusting the filter strength includes selectively adjusting the filter strength to one of a high level, a medium level and a weak level.
16. The program storage device of claim 15, wherein the medium level is a default level.
17. An apparatus comprising:
means for receiving an encoded video sequence;
means for decoding the received encoded video sequence into a decoded video sequence that contains compression artifacts;
means for categorizing each frame of the decoded video sequence as a first activity frame or as a second activity frame; and
means for removing with a filter module the compression artifacts in the decoded video sequence, the filter module having a variable filter strength being a function of the motion activity within the decoded video sequence.
18. The apparatus of claim 17, wherein the category of the first activity frame is a high activity frame and the category of the second activity frame is a low activity frame.
19. The apparatus of claim 17, wherein the filter module is configured to selectively adjust the filter strength to one of a high level, a medium level and a weak level.
20. The apparatus of claim 19, wherein the medium level is a default level.
US11/401,516 2000-12-06 2006-04-11 Video compression and decompression system with postfilter to filter coding artifacts Abandoned US20060182356A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/401,516 US20060182356A1 (en) 2000-12-06 2006-04-11 Video compression and decompression system with postfilter to filter coding artifacts

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/731,474 US7054500B1 (en) 2000-12-06 2000-12-06 Video compression and decompression system with postfilter to filter coding artifacts
US11/401,516 US20060182356A1 (en) 2000-12-06 2006-04-11 Video compression and decompression system with postfilter to filter coding artifacts

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/731,474 Continuation US7054500B1 (en) 2000-12-06 2000-12-06 Video compression and decompression system with postfilter to filter coding artifacts

Publications (1)

Publication Number Publication Date
US20060182356A1 true US20060182356A1 (en) 2006-08-17

Family

ID=36462729

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/731,474 Expired - Lifetime US7054500B1 (en) 2000-12-06 2000-12-06 Video compression and decompression system with postfilter to filter coding artifacts
US11/401,516 Abandoned US20060182356A1 (en) 2000-12-06 2006-04-11 Video compression and decompression system with postfilter to filter coding artifacts

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/731,474 Expired - Lifetime US7054500B1 (en) 2000-12-06 2000-12-06 Video compression and decompression system with postfilter to filter coding artifacts

Country Status (1)

Country Link
US (2) US7054500B1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6766376B2 (en) 2000-09-12 2004-07-20 Sn Acquisition, L.L.C Streaming media buffering system
US8595372B2 (en) * 2000-09-12 2013-11-26 Wag Acquisition, Llc Streaming media buffering system
US7716358B2 (en) 2000-09-12 2010-05-11 Wag Acquisition, Llc Streaming media buffering system
JP3939198B2 (en) * 2002-05-20 2007-07-04 三洋電機株式会社 Data output device
EP1391866A1 (en) * 2002-08-23 2004-02-25 Deutsche Thomson Brandt Adaptive noise reduction for digital display panels
US7626635B2 (en) * 2003-04-04 2009-12-01 Koplar Interactive Systems International, L.L.C. Method and system of detecting signal presence from a video signal presented on a digital display device
US8204128B2 (en) * 2007-08-01 2012-06-19 Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Learning filters for enhancing the quality of block coded still and video images
KR20090096121A (en) * 2008-03-07 2009-09-10 삼성전자주식회사 apparatus and method of stateful address Auto configuration protocol in IPv6 network
US8401311B2 (en) * 2008-03-11 2013-03-19 Sony Corporation Image processing device, method, and program
US8363974B2 (en) * 2009-07-21 2013-01-29 Qualcomm Incorporated Block artifact reducer
US9025675B2 (en) * 2011-06-22 2015-05-05 Texas Instruments Incorporated Systems and methods for reducing blocking artifacts
US9386319B2 (en) 2013-09-05 2016-07-05 Microsoft Technology Licensing, Llc Post-process filter for decompressed screen content
US20180343449A1 (en) * 2017-05-26 2018-11-29 Ati Technologies Ulc Application specific filters for high-quality video playback

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5787203A (en) * 1996-01-19 1998-07-28 Microsoft Corporation Method and system for filtering compressed video images
US5802218A (en) * 1994-11-04 1998-09-01 Motorola, Inc. Method, post-processing filter, and video compression system for suppressing mosquito and blocking atrifacts
US6014181A (en) * 1997-10-13 2000-01-11 Sharp Laboratories Of America, Inc. Adaptive step-size motion estimation based on statistical sum of absolute differences
US6037986A (en) * 1996-07-16 2000-03-14 Divicom Inc. Video preprocessing method and apparatus with selective filtering based on motion detection
US6157396A (en) * 1999-02-16 2000-12-05 Pixonics Llc System and method for using bitstream information to process images for use in digital display systems
US6178205B1 (en) * 1997-12-12 2001-01-23 Vtel Corporation Video postfiltering with motion-compensated temporal filtering and/or spatial-adaptive filtering
US6269484B1 (en) * 1997-06-24 2001-07-31 Ati Technologies Method and apparatus for de-interlacing interlaced content using motion vectors in compressed video streams
US6314160B1 (en) * 1999-12-17 2001-11-06 General Electric Company Method and apparatus for performing fluoroscopic noise reduction
US20010053186A1 (en) * 1997-06-09 2001-12-20 Yuichiro Nakaya Computer-readable medium having image decoding program stored thereon
US20020063807A1 (en) * 1999-04-19 2002-05-30 Neal Margulis Method for Performing Image Transforms in a Digital Display System
US6466624B1 (en) * 1998-10-28 2002-10-15 Pixonics, Llc Video decoder with bit stream based enhancements
US6539060B1 (en) * 1997-10-25 2003-03-25 Samsung Electronics Co., Ltd. Image data post-processing method for reducing quantization effect, apparatus therefor
US6665346B1 (en) * 1998-08-01 2003-12-16 Samsung Electronics Co., Ltd. Loop-filtering method for image data and apparatus therefor
US6668018B2 (en) * 1997-11-20 2003-12-23 Larry Pearlstein Methods and apparatus for representing different portions of an image at different resolutions
US6748113B1 (en) * 1999-08-25 2004-06-08 Matsushita Electric Insdustrial Co., Ltd. Noise detecting method, noise detector and image decoding apparatus
US7650043B2 (en) * 2003-08-11 2010-01-19 Samsung Electronics Co., Ltd. Method of reducing blocking artifacts from block-coded digital images and image reproducing apparatus using the same

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7952769B2 (en) * 2005-02-16 2011-05-31 Sony Corporation Systems and methods for image processing coding/decoding
US20060188015A1 (en) * 2005-02-16 2006-08-24 Tetsujiro Kondo Coding apparatus and method, decoding apparatus and method, image processing system, image processing method, recording medium, and program
US20060221252A1 (en) * 2005-04-05 2006-10-05 Samsung Electronics Co., Ltd. Reliability estimation of temporal noise estimation
US7714939B2 (en) * 2005-04-05 2010-05-11 Samsung Electronics Co., Ltd. Reliability estimation of temporal noise estimation
DE102006055702A1 (en) * 2006-11-23 2008-05-29 Deutsche Thomson Ohg A method and apparatus for restoring a display image sequence from a coded digital video signal
US20090067509A1 (en) * 2007-09-07 2009-03-12 Eunice Poon System And Method For Displaying A Digital Video Sequence Modified To Compensate For Perceived Blur
US7843462B2 (en) * 2007-09-07 2010-11-30 Seiko Epson Corporation System and method for displaying a digital video sequence modified to compensate for perceived blur
US20110268366A1 (en) * 2009-01-20 2011-11-03 Megachips Corporation Image processing apparatus and image conversion apparatus
US8818123B2 (en) * 2009-01-20 2014-08-26 Megachips Corporation Image processing apparatus and image conversion apparatus
US20120251012A1 (en) * 2009-12-18 2012-10-04 Tomohiro Ikai Image filter, encoding device, decoding device, and data structure
US9514519B2 (en) 2009-12-18 2016-12-06 Sharp Kabushiki Kaisha Image filter
US9641865B2 (en) 2009-12-18 2017-05-02 Sharp Kabushiki Kaisha Method for decoding moving images
TWI477153B (en) * 2010-08-20 2015-03-11 Intel Corp Techniques for identifying block artifacts

Also Published As

Publication number Publication date
US7054500B1 (en) 2006-05-30

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION