WO2011049321A2 - Method and apparatus for image encoding/decoding using filtering of a motion-compensated frame - Google Patents

Method and apparatus for image encoding/decoding using filtering of a motion-compensated frame Download PDF

Info

Publication number
WO2011049321A2
WO2011049321A2 PCT/KR2010/007027
Authority
WO
WIPO (PCT)
Prior art keywords
frame
motion compensation
motion
block
filter
Prior art date
Application number
PCT/KR2010/007027
Other languages
English (en)
Korean (ko)
Other versions
WO2011049321A3 (fr)
Inventor
김수년
임정연
최재훈
이규민
한종기
이영렬
문주희
김해광
전병우
유영조
Original Assignee
에스케이텔레콤 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 에스케이텔레콤 주식회사 filed Critical 에스케이텔레콤 주식회사
Publication of WO2011049321A2 publication Critical patent/WO2011049321A2/fr
Publication of WO2011049321A3 publication Critical patent/WO2011049321A3/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/137Motion inside a coding unit, e.g. average field, frame or block difference
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Definitions

  • the present invention relates to a method and apparatus for image encoding / decoding using filtering of a motion compensation frame. More specifically, the present invention relates to a method and apparatus for improving the compression efficiency of a video by filtering a motion compensation frame generated through motion estimation and motion compensation during video encoding and decoding.
  • In order to efficiently store or transmit video data, it is necessary to encode and compress the data using a video compression technique.
  • Techniques for compressing video data include H.261, H.263, H.264, MPEG-2, and MPEG-4.
  • the luminance component and the chrominance component of each frame image of the video are predicted temporally through motion estimation and motion compensation, and the prediction residuals are transformed, quantized, and entropy encoded to be transmitted in the form of a bitstream.
  • H.264/AVC, the latest video compression technology, was developed jointly by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG). In H.264/AVC, motion estimation and motion compensation are performed by dividing a macroblock into smaller blocks of size 16x16, 16x8, 8x16, 8x8, 8x4, 4x8, or 4x4, and up to 16 reference frames may be used for motion estimation and motion compensation.
  • In addition, H.264/AVC adopts a 1/4-pixel interpolation method, so the compression rate can be improved while maintaining image quality compared to MPEG-4, a conventional video compression technology.
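  • As background only (not part of this disclosure), the sketch below illustrates the kind of sub-pixel motion compensation mentioned above: it interpolates half-pel luma samples of one row with the 6-tap filter (1, -5, 20, 20, -5, 1)/32 commonly associated with H.264/AVC. The function name, the edge clamping, and the test values are assumptions made for the example.

        import numpy as np

        def half_pel_interpolate_row(row):
            """Interpolate horizontal half-pel samples for one row of luma pixels
            using the 6-tap filter (1, -5, 20, 20, -5, 1) / 32 associated with
            H.264/AVC; quarter-pel samples are typically obtained by averaging."""
            taps = np.array([1, -5, 20, 20, -5, 1])
            padded = np.pad(row.astype(np.int32), (2, 3), mode="edge")  # clamp borders
            half = np.empty(len(row) - 1, dtype=np.int32)
            for x in range(len(row) - 1):
                window = padded[x:x + 6]                  # 6 integer-pel neighbours
                half[x] = (window * taps).sum()
            return np.clip((half + 16) >> 5, 0, 255)      # round, divide by 32, clip

        row = np.array([10, 12, 20, 60, 120, 124, 125, 126], dtype=np.uint8)
        print(half_pel_interpolate_row(row))              # half-pel values between neighbours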
  • A high compression ratio can be achieved by removing temporal redundancy in the image using motion estimation and motion compensation, but a great deal of temporal redundancy still remains in the motion compensation frame generated through motion estimation and motion compensation. As a result, the prediction residual to be transformed, quantized, and entropy encoded becomes large, and a large amount of residual data needs to be encoded.
  • Accordingly, the present invention seeks to improve the compression efficiency of a video by filtering the motion compensation frame generated through motion estimation and motion compensation and encoding the resulting residual frame during video encoding and decoding.
  • To this end, the present invention provides a method of encoding an image, the method comprising: filtering a motion compensation frame generated by estimating and compensating for the motion of a current frame of an image; and transforming, quantizing, and encoding the residual frame that is the difference between the current frame and the filtered motion compensation frame.
  • an apparatus for encoding an image comprising: a predictor for filtering a motion compensation frame generated by estimating and compensating for a motion of a current frame of an image; A subtractor for generating a residual frame by subtracting the current frame and the filtered motion compensation frame; A converter and quantizer for transforming and quantizing residual frames; And an encoder for encoding the transformed and quantized residual frames.
  • The present invention also provides a method of decoding an image, the method comprising: generating a motion compensation frame by compensating for the motion of the current frame using a motion vector and a reference frame index identified by motion information reconstructed by decoding motion information data extracted from the bitstream, and filtering the motion compensation frame; and reconstructing the current frame by adding the filtered motion compensation frame and the residual frame extracted from the bitstream and decoded.
  • The present invention also provides an apparatus for decoding an image, the apparatus comprising: a decoder for decoding motion information data and image encoded data extracted from a bitstream to restore the motion information and the transformed and quantized residual frame; an inverse quantizer and inverse transformer for inverse quantizing and inverse transforming the reconstructed transformed and quantized residual frame to recover the residual frame; a predictor for generating a motion compensation frame by compensating for the motion of the current frame using the motion vector and the reference frame index identified by the reconstructed motion information, and filtering the motion compensation frame; and an adder for reconstructing the current frame by adding the reconstructed residual frame and the filtered motion compensation frame.
  • an inter prediction method comprising: determining a motion vector and a reference frame index for a block in which a block mode is an inter mode within a current frame of an image; Generating a motion compensation block using the determined motion vector and the reference frame index and generating a motion compensation frame including the motion compensation block; And filtering the motion compensation frame.
  • an inter prediction apparatus comprising: a motion estimator for determining a motion vector and a reference frame index for a block in which a block mode is inter mode within a current frame of an image; A motion compensator for generating a motion compensation block using the determined motion vector and the reference frame index and generating a motion compensation frame including the motion compensation block; And a motion compensation filter for filtering the motion compensation frame.
  • The present invention also provides an inter prediction method for decoding an image, the method comprising: generating a motion compensation block using a motion vector and a reference frame index identified by motion information that is extracted from the bitstream, decoded, and reconstructed, and generating a motion compensation frame including the motion compensation block; and filtering the motion compensation frame.
  • The present invention also provides an inter prediction apparatus for decoding an image, the apparatus comprising: a motion compensator for generating a motion compensation block using a motion vector and a reference frame index identified by motion information that is extracted from a bitstream, decoded, and reconstructed, and for generating a motion compensation frame including the motion compensation block; and a motion compensation filter for filtering the motion compensation frame.
  • the compression efficiency of the video can be improved.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention
  • FIG. 2 is a block diagram schematically illustrating an inter prediction apparatus for image encoding according to an embodiment of the present invention
  • FIG. 3 is an exemplary view illustrating a process of generating a motion compensation frame according to an embodiment of the present invention
  • FIG. 4 is an exemplary view illustrating a process of filtering a motion compensation frame according to an embodiment of the present invention
  • FIG. 5 is a flowchart illustrating a video encoding method according to an embodiment of the present invention.
  • FIG. 6 is a block diagram schematically illustrating an image decoding apparatus according to an embodiment of the present invention.
  • FIG. 7 is a block diagram schematically illustrating an inter prediction apparatus for image decoding according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • A video encoding apparatus and a video decoding apparatus described below may each be a user terminal such as a personal computer (PC), a notebook computer, a personal digital assistant (PDA), a portable multimedia player (PMP), a PlayStation Portable (PSP), or a mobile communication terminal, or a server terminal such as an application server or a service server.
  • Each apparatus refers to a device equipped with a communication device such as a communication modem for communicating with various devices or with wired/wireless communication networks, a memory for storing various programs and data for encoding or decoding an image, and a microprocessor for executing a program to perform computation and control.
  • The image encoded into a bitstream by the video encoding apparatus may be transmitted to the image decoding apparatus, in real time or non-real time, through a wired or wireless communication network such as the Internet, a short-range wireless communication network, a wireless LAN, a WiBro network, or a mobile communication network, or through various communication interfaces such as a cable or a universal serial bus (USB), and may then be decoded by the image decoding apparatus to reconstruct and reproduce the image.
  • a moving picture is composed of a series of pictures, and each picture is divided into a predetermined area such as a block.
  • the divided blocks are largely classified into intra blocks and inter blocks according to encoding methods.
  • An intra block refers to a block that is encoded by using an intra prediction coding method.
  • Intra prediction coding generates a prediction block by predicting the pixels of the current block using pixels of blocks that have already been encoded, decoded, and reconstructed in the picture currently being encoded, and encodes the difference values between the prediction block and the pixels of the current block.
  • An inter block refers to a block that is encoded using inter prediction coding.
  • Inter prediction coding is a method of generating a prediction block by predicting the current block within the current picture with reference to one or more past or future pictures, and then encoding the difference values between the prediction block and the current block.
  • a frame referred to for encoding or decoding the current picture is referred to as a reference frame.
  • FIG. 1 is a block diagram schematically illustrating a video encoding apparatus according to an embodiment of the present invention.
  • the image encoding apparatus 100 is an apparatus for encoding an image, and may be configured to include a predictor 110, a subtractor 120, a transformer and quantizer 130, an encoder 140, an inverse quantizer and inverse transformer 150, an adder 160, a deblocking filter 170, and a memory 180.
  • the predictor 110 filters a motion compensation frame generated by estimating and compensating for the motion of the frame to be encoded in the input image (hereinafter referred to as the current frame). That is, the predictor 110 estimates the motion of each block in the current frame, according to the inter prediction mode, within a reference frame that has already been encoded, decoded, reconstructed, and stored in the memory 180, thereby determining a motion vector; generates a motion compensation frame in which the motion of the current frame is compensated by compensating for the motion of each block using the motion vector; and filters the generated motion compensation frame.
  • Here, the predictor 110 may determine a motion vector and a reference frame index for each block whose block mode is the inter mode in the current frame, generate a motion compensation block using the determined motion vector and reference frame index, generate a motion compensation frame including the generated motion compensation blocks, and filter the motion compensation frame.
  • In this case, the predictor 110 may filter the motion compensation frame using a different filter according to the characteristics of the pixels of the blocks in the motion compensation frame, and may filter the motion compensation frame using various filters such as a Wiener filter or a deblocking filter.
  • the predictor 110 may filter the motion compensation frame using a filter having a filter coefficient that minimizes a square error between the current frame and the filtered motion compensation frame.
  • the subtractor 120 generates a residual frame by subtracting the filtered motion compensation frame from the current frame.
  • the residual frame is a frame containing the residual generated by subtracting the filtered motion compensation frame from the current frame, that is, a frame containing a residual signal consisting of the differences between the pixels of the current frame and the pixels of the filtered motion compensation frame.
  • the transformer and quantizer 130 transforms and quantizes the residual frame. That is, the transformer and quantizer 130 transforms the residual signal of the residual frame generated by the subtractor 120 into the frequency domain to generate a transformed residual frame having transform coefficients, and quantizes the transform coefficients of the transformed residual frame to produce a transformed and quantized residual frame.
  • As the transform method, a technique that transforms an image signal in the spatial domain into the frequency domain, such as the Hadamard transform or a discrete cosine transform (DCT) based integer transform, may be used.
  • As the quantization method, various quantization techniques such as dead zone uniform threshold quantization (DZUTQ) or a quantization weighted matrix may be used.
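  • For illustration only, the sketch below applies a 4x4 integer transform of the kind used in H.264/AVC-style coders, followed by a dead-zone uniform threshold quantizer; the transform matrix, the quantization step, and the dead-zone offset are assumptions for the example, not values prescribed by this disclosure.

        import numpy as np

        # 4x4 forward integer transform matrix (H.264/AVC-style core transform).
        C = np.array([[1,  1,  1,  1],
                      [2,  1, -1, -2],
                      [1, -1, -1,  1],
                      [1, -2,  2, -1]], dtype=np.int64)

        def transform_quantize_4x4(residual_block, qstep=8, dead_zone=1.0 / 3.0):
            """Forward-transform a 4x4 residual block and quantize its coefficients
            with a dead-zone uniform threshold quantizer (DZUTQ-style)."""
            coeffs = C @ residual_block @ C.T                      # 2-D separable transform
            sign = np.sign(coeffs)
            return sign * np.floor(np.abs(coeffs) / qstep + dead_zone).astype(np.int64)

        residual = np.array([[ 5, -3,  0,  1],
                             [ 2,  4, -1,  0],
                             [ 0,  1,  1, -2],
                             [-1,  0,  2,  3]], dtype=np.int64)
        print(transform_quantize_4x4(residual))                    # quantized transform levels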
  • the encoder 140 encodes the transformed and quantized residual frames. That is, the quantized transform coefficients of the residual frame transformed and quantized by the transformer and the quantizer 130 are encoded to generate image encoded data.
  • the generated image coded data is included in the bitstream.
  • an entropy encoding technique may be used, but various encoding techniques may be used without being limited thereto.
  • the encoder 140 may generate filter coefficient data by encoding filter coefficient information indicating filter coefficients of a filter used to filter the motion compensation frame.
  • the generated filter coefficient data is included in the bitstream.
  • the encoder 140 may generate block mode data by encoding block mode information indicating a block mode of each block in the current frame, and the generated block mode data is included in the bitstream.
  • the encoder 140 may generate motion information data by encoding motion information indicating the motion vector and the reference frame index determined for each block whose block mode is the inter mode in the current frame, and the generated motion information data is included in the bitstream.
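  • As one concrete, merely illustrative choice of entropy code, the sketch below encodes a signed value such as a motion-vector difference component with the Exp-Golomb codes used in H.264/AVC; the disclosure itself does not fix a particular entropy coding technique, so applying this code to motion information here is an assumption.

        def ue(v):
            """Unsigned Exp-Golomb codeword for v >= 0, returned as a bit string."""
            assert v >= 0
            code = v + 1
            bits = code.bit_length()
            return "0" * (bits - 1) + format(code, "b")

        def se(v):
            """Signed Exp-Golomb codeword: positive v maps to 2v-1, non-positive v to -2v
            (the mapping used for motion vector differences in H.264/AVC)."""
            return ue(2 * v - 1 if v > 0 else -2 * v)

        # Example: encode a motion vector difference of (-3, +1) in quarter-pel units.
        mvd = (-3, 1)
        print("".join(se(c) for c in mvd))  # '00111' + '010' concatenated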
  • the functions of the transformer and the quantizer 130 may be integrated into the encoder 140 and implemented. That is, when implemented with one encoder, the encoder generates encoded data by encoding the residual frame.
  • The inverse quantizer and inverse transformer 150 inverse quantizes and inverse transforms the transformed and quantized residual frame to recover the residual frame. That is, the inverse quantizer and inverse transformer 150 inverse quantizes the transformed and quantized residual frame delivered from the transformer and quantizer 130 to restore the residual frame having transform coefficients, and inverse transforms that residual frame to restore the residual frame having the residual signal. In this case, the inverse quantizer and inverse transformer 150 may restore the residual frame by performing the transform and quantization methods used by the transformer and quantizer 130 in reverse.
  • the adder 160 reconstructs the current frame by adding the reconstructed residual frame and the filtered motion compensation frame.
  • the deblocking filter 170 deblocking-filters the reconstructed current frame, and the memory 180 stores the deblocking-filtered current frame.
  • Here, deblocking filtering refers to an operation of reducing the block distortion generated by encoding an image in block units, and one of the following may be selectively used: applying a deblocking filter to both block boundaries and macroblock boundaries, applying a deblocking filter only to macroblock boundaries, or not using a deblocking filter at all.
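  • A minimal, illustrative deblocking step is sketched below: it lightly smooths the pixels on either side of each vertical block boundary only when the discontinuity is small enough to look like a coding artifact rather than a real edge. The block size, the threshold, and the averaging weights are assumptions for the example and are far simpler than the H.264/AVC deblocking filter.

        import numpy as np

        def deblock_vertical_edges(frame, block=8, threshold=12):
            """Lightly smooth pixels across vertical block boundaries of a grayscale
            frame when the jump across the boundary is below `threshold` (a large
            jump is treated as a genuine image edge and left untouched)."""
            out = frame.astype(np.int32)
            h, w = out.shape
            for x in range(block, w, block):              # every vertical block boundary
                p, q = out[:, x - 1], out[:, x]           # pixels left/right of the boundary
                mask = np.abs(p - q) < threshold          # only filter weak discontinuities
                avg = (p + q) // 2
                out[mask, x - 1] = (p[mask] + avg[mask] + 1) // 2
                out[mask, x] = (q[mask] + avg[mask] + 1) // 2
            return out.astype(frame.dtype)

        frame = np.tile(np.repeat(np.array([100, 110], dtype=np.uint8), 8), (4, 1))
        print(deblock_vertical_edges(frame)[0])           # boundary pixels pulled together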
  • the image encoding apparatus 100 may further include an intra predictor for intra prediction in the predictor 110.
  • In this case, the subtractor 120 may generate a residual frame by subtracting the prediction frame generated by the intra predictor from the current frame, and the transformer and quantizer 130 and the inverse quantizer and inverse transformer 150 may additionally perform transform and quantization of that residual frame and inverse transform and inverse quantization of the transformed and quantized residual frame.
  • the encoder 140 may also generate image encoded data by encoding that transformed and quantized residual frame, and the generated data is included in the bitstream.
  • That is, the image encoding apparatus 100 may additionally generate intra-prediction-coded image data, and the bitstream may include not only the image encoded data generated by inter prediction coding but also the image encoded data generated by intra prediction coding, together with information for intra-prediction decoding of that data.
  • FIG. 2 is a block diagram schematically illustrating an inter prediction apparatus for image encoding according to an embodiment of the present invention.
  • the inter prediction apparatus for image encoding according to an embodiment of the present invention may be implemented as the predictor 110 in the image encoding apparatus 100 described above with reference to FIG. 1, and is hereinafter referred to as the predictor 110 for convenience.
  • the predictor 110 may include a motion estimator 210, a motion compensator 220, and a motion compensation filter 230.
  • the predictor 110 may further include an intra predictor, but is not separately illustrated.
  • the motion estimator 210 determines a motion vector and a reference frame index for a block in which the block mode is inter mode in the current frame. That is, the motion estimator 210 determines the motion vector and the reference frame index by estimating the motion of the blocks in which the block mode is the inter mode among the blocks in the current frame.
  • The block mode of each block in the current frame may be determined by the motion estimator 210, or may be determined by a mode determiner (not shown) additionally provided in the predictor 110 or the image encoding apparatus 100.
  • For example, the motion estimator 210 may perform forward prediction on a block whose block mode is a P macroblock in the current frame to estimate a forward motion vector based on a previous frame of the current frame, and may perform bidirectional prediction on a block whose block mode is a B macroblock to estimate a forward motion vector and a backward motion vector based on a previous frame and a next frame of the current frame.
  • the motion estimator 210 may determine, as the motion vector and the reference frame index for the corresponding block in the current frame, the motion vector and the reference frame index at which the encoding cost is minimum.
  • the mcost function may be used as a function for calculating the coding cost for determining the motion vector and the reference frame index.
  • the mcost function may be expressed as in Equation 1.
  • In Equation 1, λ is a Lagrange multiplier for determining the motion vector and the reference frame index, rate(u, v, i) is the number of bits required for predictive coding when the motion of the block is compensated using the motion vector (u, v) and the reference frame index i, and SAD(u, v, i) represents the sum of absolute differences between the block of the current frame and the motion compensation block generated by compensating for the motion of the corresponding block using the motion vector (u, v) and the reference frame index i, which can be expressed by Equation 2.
  • In Equation 2, M and N respectively represent the pixel sizes of the block in the horizontal and vertical directions, x and y represent the coordinates, in the current frame or the motion compensation frame, of the pixel located at the top left of the block, m and n represent the coordinates of a pixel inside the block, and u and v represent the motion vector of the block.
  • the motion estimator 210 calculates mcost for each candidate motion vector and reference frame index, for every block whose block mode is the inter mode in the current frame, using the coding cost function of Equation 1, and determines the motion vector and reference frame index that minimize the coding cost as the motion vector and reference frame index of the corresponding block, as expressed in Equation 3.
  • In Equation 3, SearchRange represents the range over which motion estimation is performed, and the remaining terms denote the horizontal and vertical components of the motion vector determined for the block and the reference frame index determined for the block.
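  • Since the bodies of Equations 1 to 3 are not reproduced above, the sketch below shows a presumed form of the search they describe: for every candidate motion vector (u, v) and reference frame index i inside the search range, it evaluates mcost = SAD + λ·rate and keeps the minimizing triple. The SAD definition, the bit-cost proxy used for rate, and the value of λ are assumptions for the example.

        import numpy as np

        def sad(current, reference, x, y, u, v, M, N):
            """Sum of absolute differences between the MxN block at (x, y) of the
            current frame and the block displaced by (u, v) in the reference frame
            (a presumed reading of Equation 2)."""
            cur = current[y:y + N, x:x + M].astype(np.int32)
            ref = reference[y + v:y + v + N, x + u:x + u + M].astype(np.int32)
            return int(np.abs(cur - ref).sum())

        def estimate_motion(current, references, x, y, M=8, N=8, search=4, lam=4.0):
            """Return (u*, v*, i*) minimizing mcost(u, v, i) = SAD(u, v, i) + lam * rate,
            where rate is approximated here by |u| + |v| + i instead of a true bit count."""
            h, w = current.shape
            best = (0, 0, 0, float("inf"))
            for i, reference in enumerate(references):
                for v in range(-search, search + 1):
                    for u in range(-search, search + 1):
                        if not (0 <= x + u and x + u + M <= w and 0 <= y + v and y + v + N <= h):
                            continue                          # keep the candidate inside the frame
                        mcost = sad(current, reference, x, y, u, v, M, N) + lam * (abs(u) + abs(v) + i)
                        if mcost < best[3]:
                            best = (u, v, i, mcost)
            return best[:3]

        rng = np.random.default_rng(0)
        ref = rng.integers(0, 255, (32, 32), dtype=np.uint8)
        cur = np.roll(ref, shift=(1, 2), axis=(0, 1))         # current frame = shifted reference
        print(estimate_motion(cur, [ref], x=8, y=8))          # expected: (-2, -1, 0)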
  • the motion compensator 220 generates a motion compensation block using the determined motion vector and reference frame index, and generates a motion compensation frame including the motion compensation block. That is, the motion compensator 220 generates a motion compensation block by compensating for the motion of the corresponding block using the motion vector and reference frame index determined by the motion estimator 210, and collects the generated motion compensation blocks in frame units to generate a motion compensation frame including the motion compensation blocks. To this end, the motion compensator 220 obtains, as the motion compensation block, the block indicated by the motion vector determined by the motion estimator 210 within the reference frame identified by the reference frame index determined by the motion estimator 210, and generates a motion compensation frame including these motion compensation blocks.
  • A method of generating the motion compensation frame in this manner may be expressed as in Equation 4.
  • In Equation 4, W and H represent the horizontal and vertical pixel sizes of the frame (both the reference frame and the motion compensation frame), and the remaining term denotes the pixel value at the (x, y) coordinate of the motion compensation frame. If the block mode of the block including the pixel identified by the (x, y) coordinate is the intra mode, that pixel has no value, since no motion vector and reference frame index exist for the block. Subsequently, when generation of one motion compensation frame is completed, such empty positions may be filled in a manner similar to a mirroring method.
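  • Equation 4 itself is not reproduced above, so the sketch below shows a presumed reading of this assembly step: each inter block copies the block pointed to by its motion vector from the indexed reference frame, intra-mode blocks are initially left unfilled, and the holes are filled afterwards (a crude fill stands in for the mirroring-like method mentioned in the text). The data structures and block size are assumptions for the example.

        import numpy as np

        def build_motion_compensation_frame(references, block_info, frame_shape, block=8):
            """Assemble a motion compensation frame from per-block motion data.
            block_info maps (bx, by) block coordinates to None for an intra block
            or to a tuple (u, v, ref_idx) for an inter block."""
            mc = np.full(frame_shape, -1, dtype=np.int32)      # -1 marks "no value yet"
            for (bx, by), info in block_info.items():
                if info is None:                               # intra block: no MV / ref index
                    continue
                u, v, i = info
                x, y = bx * block, by * block
                mc[y:y + block, x:x + block] = references[i][y + v:y + v + block,
                                                             x + u:x + u + block]
            holes = mc < 0                                     # positions left by intra blocks
            if holes.any() and (~holes).any():
                mc[holes] = int(np.round(mc[~holes].mean()))   # crude stand-in for mirroring
            return mc.astype(np.uint8)

        ref = np.arange(16 * 16, dtype=np.uint8).reshape(16, 16)
        info = {(0, 0): (1, 0, 0), (1, 0): None,               # one intra block among inter blocks
                (0, 1): (0, -1, 0), (1, 1): (0, 0, 0)}
        print(build_motion_compensation_frame([ref], info, (16, 16))[0, :10])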
  • FIG. 3 exemplarily illustrates a process of generating a motion compensation frame according to an embodiment of the present invention. Referring to FIG. 3, four of the five blocks shown in the motion compensation frame are inter blocks, each filled with a block taken from one of four reference frames, while one block has no value because its block mode is the intra mode and no motion vector and reference frame index exist for it.
  • When the motion compensation frame is filtered later, an unfilled intra-mode block would make it impossible to filter its neighboring pixels; therefore, the block may be filled using the pixel values of the pixels around it before the motion compensation frame is filtered.
  • the motion compensation filter 230 filters the motion compensation frame. That is, the motion compensation filter 230 filters the pixels of the motion compensation frame generated by the motion compensator 220 using various types of filters so that the filtered motion compensation frame becomes more similar to the current frame, thereby reducing the amount of data in the residual frame to be transformed, quantized, and encoded, or reducing the blocking effect present in the motion compensation frame.
  • the motion compensation filter 230 may use a predetermined filter when filtering the motion compensation frame, or may select and use an appropriate filter or filter coefficients in a predetermined unit such as a block unit, a slice unit, or a frame unit.
  • When the motion compensation filter 230 uses a preset filter, a filter agreed upon in advance by the video encoding apparatus 100 and the video decoding apparatus to be described later, or a filter selected on the basis of a pre-agreed criterion, may be used.
  • another filter may be used according to the characteristics of the pixels in the current frame.
  • various filters such as a Wiener filter or a deblocking filter may be used.
  • When the motion compensation frame is filtered using the Wiener filter, the temporal redundancy remaining in the motion compensation frame can be further removed, which reduces the amount of data in the residual frame and consequently the amount of image encoded data, so that the compression efficiency can be improved.
  • When the motion compensation frame is filtered using the deblocking filter, the blocking phenomenon in the motion compensation frame can be reduced, thereby improving the quality of the image that is decoded and then reconstructed.
  • In addition, the filter may be selected according to the characteristics of the pixels in the motion compensation frame and filtering may be performed using the selected filter; the filter coefficients of a preset filter may be calculated and filtering may be performed using the calculated coefficients; or the filter type may be selected and the filter coefficients of the selected filter may also be calculated before filtering.
  • For example, the motion compensation filter 230 may determine the size of each motion compensation block in the motion compensation frame and filter the pixels located at the block boundaries and the pixels not located at the block boundaries using different types of filters. In this case, the characteristic of the pixels in the motion compensation frame may be the size of the motion compensation blocks in the motion compensation frame.
  • As another example, the motion compensation filter 230 may determine the differences in motion vector and reference frame index between neighboring motion compensation blocks, determine whether a blocking phenomenon occurs at the boundary between the motion compensation blocks according to those differences, and filter the pixels of the portions where the blocking phenomenon occurs and the pixels of the portions where it does not occur using different filters. In this case, the characteristic of the pixels in the motion compensation frame may be the difference in motion vector and reference frame index between the motion compensation blocks.
  • The process by which the motion compensation filter 230 filters the motion compensation frame may be represented, as an example, by Equation 5.
  • In Equation 5, filtering using a two-dimensional filter is shown as an example, but filtering may also be performed using a one-dimensional filter, depending on the implementation method or needs.
  • In Equation 5, one term denotes the pixel value at the (x + m, y + n) coordinate of the motion compensation frame, the coefficient indexed by (m, n) denotes the (m, n)-th filter coefficient of the filter, PP denotes the type of filter selected according to the characteristics of the pixels of the blocks in the motion compensation frame, and the remaining term denotes the pixel value of the pixel located at the (x, y) coordinate in the filtered motion compensation frame.
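  • The sketch below gives a presumed form of this filtering step (the body of Equation 5 is not reproduced above): each pixel of the motion compensation frame is replaced by a two-dimensional weighted sum of its 3x3 neighbourhood, with one kernel used for pixels on block boundaries and another for interior pixels. The two kernels, the block size, and the boundary test are assumptions for the example.

        import numpy as np

        # Illustrative kernels: stronger smoothing on block boundaries (where blocking
        # artifacts appear), lighter smoothing inside blocks. Both sum to 1.
        BOUNDARY_KERNEL = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=np.float64) / 16.0
        INTERIOR_KERNEL = np.array([[0, 1, 0], [1, 12, 1], [0, 1, 0]], dtype=np.float64) / 16.0

        def filter_motion_compensation_frame(mc_frame, block=8):
            """Filter a grayscale motion compensation frame with a per-pixel kernel
            selected by whether the pixel lies on a block boundary."""
            padded = np.pad(mc_frame.astype(np.float64), 1, mode="edge")
            out = np.empty(mc_frame.shape, dtype=np.float64)
            h, w = mc_frame.shape
            for y in range(h):
                for x in range(w):
                    on_boundary = (x % block in (0, block - 1)) or (y % block in (0, block - 1))
                    kernel = BOUNDARY_KERNEL if on_boundary else INTERIOR_KERNEL
                    out[y, x] = (padded[y:y + 3, x:x + 3] * kernel).sum()
            return np.clip(np.rint(out), 0, 255).astype(np.uint8)

        mc = np.repeat(np.repeat(np.array([[100, 140], [140, 100]], dtype=np.uint8), 8, 0), 8, 1)
        print(filter_motion_compensation_frame(mc)[7:9, 7:9])  # pixels at a block boundary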
  • FIG. 4 exemplarily illustrates a process of filtering a motion compensation frame according to an embodiment of the present invention. Referring to FIG. 4, a motion compensation frame exhibiting a blocking phenomenon at the boundaries between motion compensation blocks is filtered using the motion compensation filter, so that the blocking phenomenon at the boundaries between the motion compensation blocks can be removed.
  • FIG. 4 exemplarily illustrates the case where a two-dimensional deblocking filter is used as the motion compensation filter.
  • * denotes convolution.
  • Here, the motion compensation frame is used as a conceptual term; when the method is applied to video compression, the motion compensation frame need not necessarily be generated and filtered strictly in frame units.
  • the motion compensation filter 230 may calculate a filter coefficient for filtering the motion compensation frame using the Wiener filter such that the square error of the current frame and the filtered motion compensation frame is minimized.
  • Equation 6 shows an equation for calculating the squared error between the current frame and the filtered motion compensation frame, and Equation 7 shows an equation for calculating the filter coefficients that minimize that squared error.
  • In Equations 6 and 7, e_PP represents the squared error between the current frame and the filtered motion compensation frame.
  • the filter coefficients calculated through Equations 6 and 7 are used to filter the motion compensation frame and are passed to the encoder 140.
  • In this case, the motion compensation filter 230 may filter the motion compensation frame using the calculated filter coefficients as they are, or may filter the motion compensation frame after correcting the filter coefficients through a process such as quantization or rounding. For example, if the filter coefficients are calculated as values having a decimal point, encoding them as they are would increase the amount of filter coefficient data; therefore, the coefficients may be corrected to integer coefficients through a process such as quantization or rounding, the motion compensation frame may be filtered using the corrected filter coefficients, and the corrected filter coefficients may be transmitted to the encoder 140.
  • the encoder 140 may generate filter coefficient data by encoding filter coefficient information indicating the filter coefficient, and the generated filter coefficient data may be included in the bitstream and transmitted to the image decoding apparatus. By correcting and encoding the filter coefficients as described above, the data amount of the filter coefficient data can be reduced.
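  • Equations 6 and 7 are likewise not reproduced above; the sketch below shows one presumed least-squares construction of such coefficients: it solves for the 2-D kernel minimizing the squared error between the current frame and the filtered motion compensation frame, then rounds the result to integer coefficients (here in units of 1/64) before they would be encoded. The kernel size and the fixed-point scale are assumptions for the example.

        import numpy as np

        def wiener_coefficients(current, mc_frame, size=3):
            """Solve min_h || current - (mc_frame filtered by h) ||^2 for a
            size x size kernel h by ordinary least squares (a presumed reading
            of Equations 6 and 7)."""
            h, w = current.shape
            r = size // 2
            rows, target = [], []
            for y in range(r, h - r):
                for x in range(r, w - r):
                    rows.append(mc_frame[y - r:y + r + 1, x - r:x + r + 1].ravel())
                    target.append(current[y, x])
            A = np.asarray(rows, dtype=np.float64)
            b = np.asarray(target, dtype=np.float64)
            coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
            return coeffs.reshape(size, size)

        def quantize_coefficients(coeffs, scale=64):
            """Round the coefficients to integers in units of 1/scale, i.e. the
            'corrected' coefficients that would be passed to the encoder."""
            return np.rint(coeffs * scale).astype(np.int32)

        rng = np.random.default_rng(1)
        current = rng.integers(0, 255, (32, 32)).astype(np.float64)
        mc = current + rng.normal(0, 4, current.shape)        # MC frame approximates current
        print(quantize_coefficients(wiener_coefficients(current, mc)))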
  • the filtered motion compensation frame generated as described above is transferred to the subtractor 120, and the subtractor 120 generates a residual frame by subtracting the current frame and the filtered motion compensation frame.
  • In this case, the filtered motion compensation block may be used adaptively for each unit, such as a block unit or a slice unit.
  • the residual frame includes a residual signal having a difference value between the pixel value of the pixel of the current frame and the pixel value of the pixel of the motion compensation frame.
  • the residual signal may be represented as in Equation 8.
  • For comparison, the residual signal generated by subtracting the unfiltered motion compensation frame from the current frame, according to a typical video compression method, may be represented by Equation 9.
  • In Equations 8 and 9, resi(x, y) represents the value of the residual signal at the (x, y) coordinate.
  • When the motion compensation filtering is performed using the Wiener filter, the temporal redundancy remaining in the motion compensation frame is further removed, so that the residual signal of Equation 8 becomes smaller than the residual signal of Equation 9; when the motion compensation filtering is performed using a deblocking filter, the blocking phenomenon remaining in the motion compensation frame is further reduced, so that the image quality is improved when the residual signal of Equation 8 is transformed and quantized and the resulting encoded data is decoded.
  • Equation 8 illustrates the case where the residual signal is generated in units of frames.
  • However, Equation 8 does not necessarily have to be used, nor does the residual signal have to be generated in units of frames; the residual signal may be generated in various units such as blocks or slices.
  • the residual signal may be generated using an equation other than Equation 8.
  • FIG. 5 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
  • Referring to FIG. 5, the image encoding apparatus 100 filters the motion compensation frame generated by estimating and compensating for the motion of the current frame of the image (S510), and transforms, quantizes, and encodes the residual frame that is the difference between the current frame and the filtered motion compensation frame (S520).
  • Here, the image encoding apparatus 100 may filter the motion compensation frame using a different filter according to the characteristics of the pixels of the blocks in the motion compensation frame, may use a Wiener filter or a deblocking filter to filter the motion compensation frame, and may filter the motion compensation frame using a filter having filter coefficients that minimize the squared error between the current frame and the filtered motion compensation frame.
  • the image encoding apparatus 100 may further encode filter coefficient information indicating the filter coefficient of the filter used to filter the motion compensation frame.
  • In addition, the image encoding apparatus 100 may determine a motion vector and a reference frame index for each block whose block mode is the inter mode, generate a motion compensation block using the determined motion vector and reference frame index, generate a motion compensation frame including the generated motion compensation block, and filter the motion compensation frame.
  • the image encoding apparatus 100 may encode block mode information indicating a block mode and further encode motion information indicating a determined motion vector and a reference frame index.
  • In addition, the image encoding apparatus 100 may reconstruct the residual frame by inverse quantizing and inverse transforming the transformed and quantized residual frame, restore the current frame by adding the reconstructed residual frame and the filtered motion compensation frame, and deblocking-filter and store the restored current frame.
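  • Pulling the steps of FIG. 5 together, the sketch below is a minimal single-frame encoder loop written under the same assumptions used in the earlier sketches: whole-frame motion compensation is taken as given, a fixed 3x3 averaging kernel stands in for the derived filter coefficients, and a plain uniform quantizer stands in for the transform, quantization, and entropy coding stages.

        import numpy as np

        def filter_frame(mc, kernel):
            """Apply a small 2-D kernel to the motion compensation frame."""
            r = kernel.shape[0] // 2
            padded = np.pad(mc.astype(np.float64), r, mode="edge")
            out = np.zeros(mc.shape, dtype=np.float64)
            for dy in range(kernel.shape[0]):
                for dx in range(kernel.shape[1]):
                    out += kernel[dy, dx] * padded[dy:dy + mc.shape[0], dx:dx + mc.shape[1]]
            return out

        def encode_frame(current, mc, qstep=4):
            """One pass of the encoding method of FIG. 5: S510 (filter the MC frame)
            followed by S520, with a uniform quantizer standing in for transform,
            quantization, and entropy coding."""
            kernel = np.full((3, 3), 1.0 / 9.0)                # stand-in filter coefficients
            filtered_mc = filter_frame(mc, kernel)             # S510
            residual = current.astype(np.float64) - filtered_mc
            q_residual = np.rint(residual / qstep).astype(np.int32)   # S520 (simplified)
            # Local reconstruction loop: dequantize and add back; deblocking filtering
            # and storing the frame as a reference are omitted for brevity.
            reconstructed = np.clip(filtered_mc + q_residual * qstep, 0, 255)
            return q_residual, reconstructed

        rng = np.random.default_rng(2)
        ref = rng.integers(0, 255, (16, 16)).astype(np.float64)
        cur = np.clip(ref + rng.normal(0, 3, ref.shape), 0, 255)
        q, rec = encode_frame(cur, mc=ref)                     # pretend MC output equals reference
        print(round(float(np.abs(cur - rec).mean()), 2))       # small reconstruction error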
  • FIG. 6 is a block diagram schematically illustrating an image decoding apparatus according to an embodiment of the present invention.
  • the image decoding apparatus 600 may be configured to include a decoder 610, an inverse quantizer and inverse transformer 620, a predictor 630, an adder 640, a deblocking filter 650, and a memory 660.
  • the decoder 610 decodes the motion information data and the image encoded data extracted from the bitstream to restore the motion information and the transformed and quantized residual frame.
  • the decoder 610 may recover block mode information by extracting and decoding block mode data from the bitstream, and may restore filter coefficient information by extracting and decoding filter coefficient data from the bitstream.
  • Here, the decoder 610 may decode the various data using decoding techniques such as entropy decoding, and may perform decoding by inverting the method used for encoding by the encoder 140 of the image encoding apparatus 100.
  • Inverse quantizer and inverse transformer 620 inverse quantizes and inverse transforms the reconstructed transformed and quantized residual frame to reconstruct the residual frame.
  • In this case, the inverse quantizer and inverse transformer 620 may perform the inverse quantization and inverse transformation using various schemes; that is, it may perform the inverse quantization and inverse transformation by inverting the transform and quantization methods used by the transformer and quantizer 130 of the image encoding apparatus 100.
  • the predictor 630 compensates for the motion of the current frame using the motion vector identified by the reconstructed motion information and the reference frame index to generate a motion compensation frame and filters the motion compensation frame. That is, the predictor 630 may generate a motion compensation block using the motion vector and the reference frame index identified by the reconstructed motion information, and generate a motion compensation frame including the motion compensation block.
  • the predictor 630 may filter the motion compensation frame using a filter having predetermined filter coefficients, or, if filter coefficient information has been restored by the decoder 610, may filter the motion compensation frame using a filter having the filter coefficients identified by the restored filter coefficient information.
  • the predictor 630 may filter using another filter according to the characteristics of the pixels of the block in the motion compensation frame, and may use a Wiener filter or a deblocking filter as a filter for filtering the motion compensation frame.
  • The process by which the predictor 630 generates a motion compensation block using the motion vector and reference frame index, generates a motion compensation frame including the motion compensation block, and filters the generated motion compensation frame is the same as or similar to the method described above with reference to FIGS. 1 to 5, so a detailed description thereof is omitted here.
  • the adder 640 reconstructs the current frame by adding the reconstructed residual frame and the filtered motion compensation frame. That is, the adder 640 adds the pixel values of the pixels of the residual frame reconstructed by the inverse quantizer and the inverse transformer 620 and the pixel values of the pixels of the motion compensation frame filtered by the predictor 630 to thereby add the pixels of the current frame. Restore them.
  • the deblocking filter 650 deblocking-filters the current frame restored by the adder 640 and outputs the deblocking-filtered current frame as the reconstructed image, and the memory 660 stores the deblocking-filtered current frame as a reference frame.
  • the stored reference frame is used by the predictor 630 to predict the next frame.
  • FIG. 7 is a block diagram schematically illustrating an inter prediction apparatus for image decoding according to an embodiment of the present invention.
  • the inter prediction apparatus for image decoding according to an embodiment of the present invention may be implemented as the predictor 630 in the image decoding apparatus 600 described above with reference to FIG. 6, and is hereinafter referred to as the predictor 630 for convenience.
  • the predictor 630 may include a motion compensator 710 and a motion compensation filter 720.
  • the motion compensator 710 generates a motion compensation block using the motion vector and reference frame index identified by the motion information that is extracted from the bitstream, decoded, and reconstructed, and generates a motion compensation frame including the motion compensation block. That is, when the motion compensator 710 receives the motion vector and the reference frame index from the decoder 610, it obtains, as the motion compensation block, the block indicated by the motion vector within the reference frame indicated by the reference frame index among the reference frames stored in the memory 660, and generates a motion compensation frame including the motion compensation blocks.
  • the motion compensation filter 720 filters the motion compensation frame. That is, the motion compensation filter 720 filters the motion compensation frame generated and transferred by the motion compensator 710 using various filters such as a Wiener filter or a deblocking filter. In this case, when the motion compensation filter 720 receives filter coefficient information from the decoder 610, it may filter the motion compensation frame using a filter having the filter coefficients identified by the filter coefficient information; otherwise, it may filter the motion compensation frame using a preset filter or a filter selected according to the pixel characteristics of the blocks in the motion compensation frame.
  • FIG. 8 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
  • Referring to FIG. 8, the image decoding apparatus 600 compensates for the motion of the current frame using a motion vector and a reference frame index identified by motion information reconstructed by decoding motion information data extracted from the bitstream, thereby generating a motion compensation frame (S810), filters the motion compensation frame (S820), and restores the current frame by adding the residual frame extracted from the bitstream and decoded to the filtered motion compensation frame (S830).
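  • The corresponding decoder-side loop of FIG. 8 is sketched below under the same assumptions as the earlier sketches: the motion information and the quantized residual are taken as already entropy-decoded inputs, the motion compensation frame is rebuilt from a single reference frame (S810), filtered with the signalled or default coefficients (S820), and added to the dequantized residual to reconstruct the current frame (S830); deblocking filtering is omitted.

        import numpy as np

        def motion_compensate(reference, motion_info, block=8):
            """S810: build the motion compensation frame from decoded per-block
            motion vectors (a single reference frame is assumed for brevity)."""
            mc = np.zeros_like(reference)
            for (bx, by), (u, v) in motion_info.items():
                x, y = bx * block, by * block
                mc[y:y + block, x:x + block] = reference[y + v:y + v + block, x + u:x + u + block]
            return mc

        def filter_mc_frame(mc, kernel):
            """S820: filter the motion compensation frame with decoded coefficients."""
            r = kernel.shape[0] // 2
            padded = np.pad(mc.astype(np.float64), r, mode="edge")
            out = np.zeros(mc.shape, dtype=np.float64)
            for dy in range(kernel.shape[0]):
                for dx in range(kernel.shape[1]):
                    out += kernel[dy, dx] * padded[dy:dy + mc.shape[0], dx:dx + mc.shape[1]]
            return out

        def decode_frame(reference, motion_info, q_residual, kernel, qstep=4):
            """S830: reconstruct the current frame as filtered MC frame + residual."""
            filtered = filter_mc_frame(motion_compensate(reference, motion_info), kernel)
            return np.clip(filtered + q_residual * qstep, 0, 255)

        reference = np.arange(16 * 16, dtype=np.float64).reshape(16, 16)
        motion_info = {(bx, by): (0, 0) for bx in range(2) for by in range(2)}   # zero motion
        q_residual = np.zeros((16, 16), dtype=np.int32)
        kernel = np.zeros((3, 3)); kernel[1, 1] = 1.0                            # identity filter
        print(decode_frame(reference, motion_info, q_residual, kernel)[0, :5])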
  • To this end, the image decoding apparatus 600 extracts motion information data from the bitstream and decodes it for each block whose block mode is the inter mode in the current frame, thereby restoring the motion information.
  • the motion compensation block may be generated using the motion vector identified by the reconstructed motion information and the reference frame index, and a motion compensation frame including the generated motion compensation block may be generated.
  • the image decoding apparatus 600 may further reconstruct filter coefficient information by extracting and decoding filter coefficient data from the bitstream.
  • In this case, the image decoding apparatus 600 may filter the motion compensation frame using a filter having the filter coefficients identified by the reconstructed filter coefficient information.
  • In filtering the motion compensation frame, the image decoding apparatus 600 may use a filter having predetermined filter coefficients, may use a different filter according to the characteristics of the pixels of the blocks in the motion compensation frame, and may filter the motion compensation frame using a Wiener filter or a deblocking filter.
  • In addition, the image decoding apparatus 600 may deblocking-filter the restored current frame and output the result as the reconstructed image.
  • the deblocking-filtered current frame may be stored and used for predictive decoding of another frame.
  • a compression efficiency may be further improved by removing temporal redundancy remaining in a motion compensation frame generated through motion estimation and motion compensation.
  • the deterioration of the image quality of the reconstructed video to be reproduced can be further reduced.
  • the present invention can be applied to an image compression processing field for encoding and decoding a video, thereby further improving compression efficiency by removing temporal redundancy remaining in a motion compensation frame generated through motion estimation and motion compensation.
  • In addition, since the filter type and the filter coefficients can be adaptively selected according to the characteristics of the pixels of the blocks in the motion compensation frame before filtering, the quantization error can be further reduced, the compression efficiency can be further improved, and the deterioration of the quality of the image that is decoded and reproduced can be further reduced, making the invention highly useful.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method and apparatus for image encoding/decoding using filtering of a motion-compensated frame. The present invention relates to an image encoding method that filters a motion-compensated frame generated by estimating and compensating for the motion of a current frame of an image, and transforms, quantizes, and encodes the residual frame that is the difference between the current frame and the filtered motion-compensated frame. According to the present invention, the motion-compensated frame is filtered so as to better remove the temporal redundancy remaining in the residual frame, thereby improving video compression efficiency.
PCT/KR2010/007027 2009-10-19 2010-10-14 Method and apparatus for image encoding/decoding using filtering of a motion-compensated frame WO2011049321A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2009-0099339 2009-10-19
KR1020090099339A KR101379189B1 (ko) 2009-10-19 2009-10-19 Method and apparatus for image encoding/decoding using filtering of a motion compensation frame

Publications (2)

Publication Number Publication Date
WO2011049321A2 true WO2011049321A2 (fr) 2011-04-28
WO2011049321A3 WO2011049321A3 (fr) 2011-08-18

Family

ID=43900787

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2010/007027 WO2011049321A2 (fr) 2009-10-19 2010-10-14 Procédé et appareil de codage/décodage d'images utilisant le filtrage d'une trame à mouvement compensé

Country Status (2)

Country Link
KR (1) KR101379189B1 (fr)
WO (1) WO2011049321A2 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10924730B2 (en) 2016-11-07 2021-02-16 Lg Electronics Inc. Image decoding method and device in image coding system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR19980067578A (ko) * 1997-02-06 1998-10-15 김광호 Filtering method and apparatus for noise reduction in a video encoding system
KR20040030096A (ko) * 2001-08-14 2004-04-08 General Instrument Corporation Noise-reducing pre-processor for digital video using previously generated motion vectors and adaptive spatial filtering
KR20060034294A (ko) * 2003-07-16 2006-04-21 Koninklijke Philips Electronics N.V. Encoding method and device


Also Published As

Publication number Publication date
KR101379189B1 (ko) 2014-04-10
WO2011049321A3 (fr) 2011-08-18
KR20110042602A (ko) 2011-04-27

Similar Documents

Publication Publication Date Title
WO2011145819A2 (fr) Dispositif et procédé de codage/décodage d'image
WO2013002549A2 (fr) Procédés et appareil de codage/décodage d'une image
WO2013109039A1 (fr) Procédé et appareil de codage/décodage d'images utilisant la prédiction de poids
WO2010027182A2 (fr) Procédé et dispositif de codage/décodage d'images au moyen de pixels aléatoires dans un sous-bloc
WO2013062197A1 (fr) Appareil de décodage d'images
WO2013070006A1 (fr) Procédé et appareil de codage et de décodage vidéo faisant appel à un mode de saut
WO2010038961A2 (fr) Procédé et appareil pour codage/décodage de vecteurs de mouvement au moyen d'une pluralité d'estimations de vecteurs de mouvement, et procédé et appareil pour codage/décodage d'image au moyen d'un tel appareil et d'un tel procédé
WO2012096550A2 (fr) Procédé et dispositif de codage/décodage d'image utilisant une prédiction intra bidirectionnelle
WO2013005941A2 (fr) Appareil et procédé de codage et de décodage d'une image
WO2010039015A2 (fr) Appareil et procédé de codage / décodage sélectif d’une image par transformée en cosinus / en sinus discrète
WO2012011672A2 (fr) Procédé et dispositif de codage/décodage d'images par mode de saut étendu
WO2012099440A2 (fr) Appareil et procédé de génération/récupération d'informations de mouvement basées sur un encodage d'indice de vecteur de mouvement prédictif, et appareil et procédé d'encodage/décodage d'images utilisant ce dernier
WO2013002550A2 (fr) Méthode et appareil de codage/décodage par décision de mode de l'unité de codage grande vitesse
WO2010050706A2 (fr) Procédé et appareil de codage d'un vecteur mouvement, procédé et appareil de codage/décodage d'une image faisant appel à ces derniers
WO2012018198A2 (fr) Dispositif de génération de blocs de prédiction
WO2012144876A2 (fr) Procédé et appareil pour coder/décoder des images à l'aide d'un procédé de prévision adoptant le filtrage en boucle
WO2012077960A2 (fr) Procédé et dispositif de codage/décodage d'une image par inter-prédiction en utilisant un bloc aléatoire
WO2011142603A2 (fr) Procédé et appareil de filtrage d'images, et procédé et appareil de codage/décodage utilisant ces derniers
WO2013062198A1 (fr) Appareil de décodage d'images
WO2010044569A2 (fr) Procédé et appareil permettant de générer une trame de référence et procédé et appareil permettant le coder/décoder une image au moyen de la trame de référence
WO2013069996A1 (fr) Procédé et appareil de codage/décodage d'image à l'aide d'un filtre à boucle adaptatif sur un domaine de fréquence faisant intervenir une conversion
WO2011037337A2 (fr) Procédé et appareil de codage-décodage d'images tenant compte de composantes basse fréquence
WO2013062194A1 (fr) Procédé et appareil de génération de bloc reconstruit
WO2011108879A2 (fr) Dispositif de codage vidéo, procédé de codage vidéo de ce dispositif, dispositif de décodage vidéo, et procédé de décodage vidéo de ce dispositif
WO2012021040A2 (fr) Procédé et dispositif de codage/décodage d'image ayant un mode de filtrage amovible

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10825146

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 27/07/12)

122 Ep: pct application non-entry in european phase

Ref document number: 10825146

Country of ref document: EP

Kind code of ref document: A2