KR20140040605A - Method and apparatus for encoding and decoding using deblocking filtering - Google Patents

Method and apparatus for encoding and decoding using deblocking filtering

Info

Publication number
KR20140040605A
Authority
KR
South Korea
Prior art keywords
block
reliability
filtering
reconstructed
inverse
Prior art date
Application number
KR1020130024493A
Other languages
Korean (ko)
Inventor
전병우
원광현
Original Assignee
성균관대학교산학협력단
Priority date
Filing date
Publication date
Application filed by 성균관대학교산학협력단 filed Critical 성균관대학교산학협력단
Priority to KR1020130024493A
Publication of KR20140040605A

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/124: Quantisation
    • H04N 19/134: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N 19/169: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/176: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N 19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding

Abstract

The present invention relates to an encoding and decoding apparatus using deblocking filtering. A decoding apparatus for an image according to the present invention may include a deblocking filtering unit which verifies the reliability of a block reconstructed from a decoded image block and determines the filtering strength of the reconstructed block based on that reliability. Thus, a deblocking filtering method capable of performing adaptive deblocking based on the set reliability, and an encoding and decoding apparatus using the same, are provided. [Reference numerals] (S100) High-reliability block?; (S110) BS, α, β up-scaling; (S120) Compensation value up-scaling; (S130) BS, α, β down-scaling; (S140) Compensation value down-scaling

Description

Encoding and decoding apparatus using deblocking filtering and method thereof {Method and Apparatus For Encoding and Decoding using Deblocking Filtering}

The present invention relates to a video encoding and decoding technique, and more particularly, to a deblocking filtering method and an encoding and decoding apparatus using the same.

MPEG, H.26x, and other compression standards are widely used as efficient compression technologies for video players, video-on-demand (VOD) services, video telephony, digital multimedia broadcasting (DMB), and video transmission in wireless mobile environments. These standards gain much of their coding efficiency by eliminating temporal redundancy, and motion prediction and compensation is the representative method for reducing it. However, since motion prediction and compensation requires a relatively large amount of computation in the video encoder, power consumption increases. Therefore, in limited-resource environments such as sensor networks, reducing the complexity of the encoding device in order to reduce its power consumption has emerged as an important technical problem.

Wyner-Ziv coding is a representative distributed video coding technology. The decoding apparatus generates side information for the current frame using the similarity between neighboring reconstructed frames, regards the difference between the generated side information and the current frame as virtual channel noise, and removes the noise contained in the side information using parity bits that the encoding apparatus generated by channel coding, thereby reconstructing the current frame.

In other words, distributed video coding reduces the complexity of the encoding apparatus by moving motion prediction, which accounts for the largest amount of computation in the encoder, to the decoding apparatus. The encoding apparatus encodes the video frames independently of each other and, unlike conventional techniques, does not scan the video frames to detect the similarity between them, so its amount of computation can be reduced.

Therefore, distributed video coding is suitable for applications requiring low power and a low amount of computation in the encoding apparatus. Since distributed video coding uses channel coding to correct the errors that occur in the side information, it is also robust against transmission errors. However, even in distributed video coding, much discussion remains about objective and subjective quality compared with conventional predictive video coding. One issue is removing the blocking artifacts that occur in the reconstructed image. These artifacts follow from quality differences between adjacent blocks, which vary across the side information depending on the accuracy of motion estimation and compensation. For example, although the original pixel values in flat areas are smoothly connected and change slowly, different degrees of coding noise added to each block cause differences in the decoded pixel values, and this difference creates visible blocking artifacts. Since the artifacts in distributed video coding arise along a different path than in conventional video coding such as the MPEG-2 or H.264/AVC standards, the deblocking filter described in those standards cannot be applied directly. Filters for distributed video coding have been discussed, but they follow the conventional deblocking filter in computing a boundary strength (BS) that merely classifies whether strong filtering, default filtering, or no filtering is applied to two adjacent blocks.

Hereinafter, the filtering discussed in the existing H.264 / AVC specification will be briefly described as follows. The deblocking filter of the H.264 / AVC standard focuses on artifacts generated by quantization and motion compensation, and applies one of three filtering modes according to block boundary analysis.

(1) Quantization-dependent parameters

α and β are predetermined thresholds; IndexA and IndexB are obtained by adding offset values (-12 to 12) to the quantization parameter, and filtering is performed when the following conditions are satisfied.

|p0 - q0| < α(IndexA) (1)

|p1 - p0| < β(IndexB) (2)

|q1 - q0| < β(IndexB) (3)

(2) Boundary strength (Bs)

It is determined by the prediction mode relationship between the P and Q blocks. If Bs is 4, a strong filter is selected; if Bs is 0, no filtering is applied; in other cases, a default filter is applied.

(3) Clipping parameters (C0, C1)

To prevent video quality degradation, the compensation added to the pixels near the block boundary is limited by the values C0 and C1.

(4) Compensation value (Δpi)

The value added to pixels near the block boundary.
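
For illustration, the following Python sketch mimics how conditions (1) to (3), the boundary strength, and the clipping limit interact for a single line of pixels across a block boundary. The threshold values in the example and the simplified compensation formula are assumptions for demonstration, not the actual H.264/AVC tables.

```python
import numpy as np

def filter_boundary_line(p, q, alpha, beta, bs, c0):
    """Decide and apply a default deblocking step on one line of pixels.

    p, q  : arrays [p0, p1, p2, p3] and [q0, q1, q2, q3] adjacent to the boundary.
    alpha, beta : thresholds derived from IndexA / IndexB (assumed given here).
    bs    : boundary strength (0 = no filtering).
    c0    : clipping limit for the compensation added to p0 / q0.
    """
    if bs == 0:
        return p, q  # no filtering at all

    # Conditions (1)-(3): only filter if the step across the boundary is small
    # enough to be a coding artifact rather than a real edge.
    if not (abs(p[0] - q[0]) < alpha and
            abs(p[1] - p[0]) < beta and
            abs(q[1] - q[0]) < beta):
        return p, q

    # Simplified default-filter compensation, clipped to [-c0, c0].
    delta = np.clip(((q[0] - p[0]) * 4 + (p[1] - q[1]) + 4) >> 3, -c0, c0)
    p, q = p.copy(), q.copy()
    p[0] = np.clip(p[0] + delta, 0, 255)
    q[0] = np.clip(q[0] - delta, 0, 255)
    return p, q

# Example: a small step across the boundary is smoothed.
p = np.array([100, 101, 102, 103])
q = np.array([110, 111, 112, 113])
print(filter_boundary_line(p, q, alpha=20, beta=8, bs=2, c0=2))
```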

As described above, the filters based on the H.264 / AVC standard focus on the overall error of two adjacent blocks without differentiating the reliability of the two blocks. Therefore, a filtering method that reflects the characteristics of the blocks is required.

An embodiment of the present invention is to provide a deblocking filtering method that can set the reliability for each block and an encoding and decoding apparatus using the same.

One embodiment of the present invention is to provide a deblocking filtering method capable of determining filtering parameters based on a set reliability and an encoding and decoding apparatus using the same.

One embodiment of the present invention is to provide a deblocking filtering method capable of adaptively deblocking based on a set reliability and an encoding and decoding apparatus using the same.

Another embodiment of the present invention is to provide a deblocking filtering method capable of reducing high frequency vibration artifacts and an encoding and decoding apparatus using the same.

An apparatus for decoding an image according to an embodiment of the present invention may include a deblocking filter unit that determines the reliability of a reconstructed block reconstructed from a decoded image block and determines the filtering strength of the reconstructed block based on the reliability.

The deblocking filter unit may determine a lower filtering strength as the reliability is higher.

The deblocking filter unit may upscale the filtering parameters of the reconstructed block as the reliability is lower.

The deblocking filter unit may set a filtering parameter for each reconstructed block based on the reliability.

The deblocking filter may determine a higher reliability of the reconstructed block as the difference between the reconstructed block and the original image block is smaller.

The deblocking filter may estimate an error between two adjacent blocks, and set a reliability of each block according to the estimated error ratio.

The image decoding apparatus may further include a key frame restoring unit for restoring an input key frame and a side information (SI) generating unit for generating side information using the key frame restored by the key frame restoring unit, and the deblocking filter unit may set a higher filtering strength as the error of the SI increases.

If two adjacent blocks sharing a boundary are a P block and a Q block, the deblocking filter unit may set the SI error E_P and the reliability R_P as follows:

E_P = MSE(R_P^fwd - R_P^bwd)

R_P: the reliability of the P block, set from the ratio of the estimated errors E_P and E_Q of the two adjacent blocks

R_P^fwd: P block of the forward motion-compensated frame

R_P^bwd: P block of the backward motion-compensated frame

MSE: mean squared error

The image decoding apparatus may further include a prediction unit that generates a prediction block for a current block using intra prediction or inter prediction, a transform/quantization unit that generates a residual block from the current block and the prediction block and transforms and quantizes the residual block, and an inverse transform/inverse quantization unit that generates an inverse-quantized and inverse-transformed residual block by inverse quantization and inverse transformation of the transformed and quantized residual block, and the deblocking filter unit may set the number of non-zero coefficients of the inverse-quantized residual block as the reliability.

An image decoding method according to another embodiment of the present invention may include determining the reliability of a reconstructed block reconstructed from a decoded image block, and determining the filtering strength of the reconstructed block based on the reliability.

According to an embodiment of the present invention, there is provided a deblocking filtering method and encoding and decoding apparatus using the same, which improves coding efficiency by differentiating a well decoded block from a badly decoded block based on reliability.

According to an embodiment of the present invention, there is provided a deblocking filtering method capable of setting reliability for each block to reflect the characteristics of a block, and an encoding and decoding apparatus using the same.

According to an embodiment of the present invention, a deblocking filtering method capable of determining filtering parameters based on a set reliability and an encoding and decoding apparatus using the same are provided.

According to an embodiment of the present invention, there is provided a deblocking filtering method and an encoding and decoding apparatus using the same, which can reduce blocking artifacts and improve the deblocking effect by adaptively performing deblocking based on a set reliability.

In addition, according to another embodiment of the present invention, there is provided a deblocking filtering method capable of reducing high frequency vibration artifacts and an encoding and decoding apparatus using the same.

FIG. 1 is a control flowchart illustrating a deblocking filtering method according to the present invention.
FIG. 2 is a control block diagram illustrating the configuration of an image encoding apparatus and a decoding apparatus according to an embodiment of the present invention.
FIG. 3 is a diagram illustrating adjacent blocks on which deblocking is performed.
FIG. 4 is a diagram for describing artifacts that may occur between reconstructed neighboring blocks.
FIG. 5 is a control flowchart illustrating a method of estimating reliability for each block in the image encoding apparatus and the decoding apparatus of FIG. 2.
FIG. 6 is a diagram illustrating an experimental image to which deblocking filtering is applied according to the image encoding apparatus and the decoding apparatus of FIG. 2.
FIG. 7 is a diagram illustrating another experimental image to which deblocking filtering is applied according to the image encoding apparatus and the decoding apparatus of FIG. 2.
FIG. 8 is a control flowchart illustrating a deblocking filtering method according to the image encoding apparatus and the decoding apparatus of FIG. 2.
FIG. 9 is a control block diagram illustrating the configuration of an image encoding apparatus and a decoding apparatus according to another embodiment of the present invention.
FIG. 10 is a diagram illustrating blocks on which deblocking of artifacts is performed in the image encoding apparatus and the decoding apparatus of FIG. 9.
FIG. 11 is a diagram illustrating an experimental image to which deblocking filtering is applied according to the image encoding apparatus and the decoding apparatus of FIG. 9.
FIG. 12 is a control flowchart illustrating a deblocking filtering method according to the image encoding apparatus and the decoding apparatus of FIG. 9.
FIG. 13 is a control block diagram illustrating the configuration of an image encoding apparatus and a decoding apparatus according to another embodiment of the present invention.
FIG. 14 is a control flowchart illustrating a method of estimating reliability for each block in the image encoding apparatus and the decoding apparatus of FIG. 13.

While the invention is susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the invention is not intended to be limited to the particular embodiments, but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the invention. Like reference numerals are used for like elements in describing each drawing.

The terms first, second, and the like may be used to describe various components, but the components should not be limited by these terms. The terms are used only to distinguish one component from another. For example, without departing from the scope of the present invention, a first component may be referred to as a second component, and similarly, a second component may also be referred to as a first component. The term "and/or" includes any combination of a plurality of related listed items or any one of a plurality of related listed items.

The terminology used in this application is used only to describe specific embodiments and is not intended to limit the invention. Singular expressions include plural expressions unless the context clearly indicates otherwise. In this application, terms such as "comprises" or "having" are intended to specify the presence of a feature, number, step, operation, element, component, or combination thereof described in the specification, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, or combinations thereof.

In addition, the components shown in the embodiments of the present invention are independently shown to represent different characteristic functions, and do not mean that each component is made of separate hardware or one software component unit. In other words, each component is listed as a component for convenience of description, and at least two of the components may form one component, or one component may be divided into a plurality of components to perform a function. The integrated and separated embodiments of each component are also included in the scope of the present invention without departing from the spirit of the present invention.

In addition, some of the components may not be essential components that perform essential functions of the present invention, but optional components intended only to improve performance. The present invention can be implemented with only the components essential for realizing its essence, excluding the components used merely for performance improvement, and a structure that includes only these essential components, excluding the optional components, is also included in the scope of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In describing the embodiments of the present specification, when it is determined that a detailed description of a related well-known configuration or function may obscure the gist of the present specification, the description may be omitted.

FIG. 1 is a control flowchart illustrating a deblocking filtering method according to the present invention.

The filtering according to the present invention determines the reliability of each block and controls the degree of filtering. Reliability indicates how close the pixel values of the reconstructed image are to the original: a well decoded pixel value will be close to the pixel value of the original image, and such a pixel can be regarded as having high reliability. Conversely, a pixel whose value differs greatly from the original because of a poor decoding state has low reliability.

As shown, the filtering method according to the present embodiment first estimates the reliability of the block to determine whether the estimated reliability is high (S100).

As a result of the determination, if the reliability is determined not to be high, it may be judged that the block was not decoded well and the degree of filtering is adjusted upward. That is, the boundary strength BS, which determines the degree of filtering, and the predetermined thresholds α and β, which determine whether the filter is applied, are upscaled (S110).

After that, the filtering compensation value Δ applied to each pixel is scaled up during filtering (S120).

A strong filter can be applied by upscaling boundary strength, threshold, and compensation. Through this process, it is possible to compensate for the reconstructed image having a poor decoding state.

On the other hand, if the reliability is determined to be high, the decoding quality may be judged to be good and the filtering strength is adjusted downward. That is, the boundary strength BS, which determines the degree of filtering, and the predetermined thresholds α and β, which determine whether the filter is applied, are downscaled (S130).

In this case, the filtering compensation value Δ applied to each pixel during filtering is also downscaled (S140).
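
A minimal sketch of the flow of FIG. 1, assuming the filtering parameters are grouped in a small record and that a fixed reliability threshold and scale factor (hypothetical values, not taken from the description above) control the up- and down-scaling:

```python
from dataclasses import dataclass

@dataclass
class FilterParams:
    bs: float      # boundary strength
    alpha: float   # threshold deciding whether filtering is applied
    beta: float    # threshold deciding whether filtering is applied
    delta: float   # compensation value added to pixels near the boundary

def adjust_params(params: FilterParams, reliability: float,
                  threshold: float = 0.5, scale: float = 1.5) -> FilterParams:
    """Upscale the parameters for a low-reliability block (S110, S120) and
    downscale them for a high-reliability block (S130, S140).
    `threshold` and `scale` are illustrative values only."""
    factor = 1.0 / scale if reliability >= threshold else scale
    return FilterParams(bs=params.bs * factor,
                        alpha=params.alpha * factor,
                        beta=params.beta * factor,
                        delta=params.delta * factor)

# A low-reliability block receives stronger filtering parameters.
print(adjust_params(FilterParams(bs=2, alpha=20, beta=8, delta=1.0), reliability=0.2))
```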

As shown in FIG. 1, a deblocking filter that performs deblocking based on reliability may be implemented as an in-loop filter or a post filter for a reconstructed image in an image encoding and decoding apparatus. Hereinafter, an apparatus for encoding and decoding an image to which a specific filter is applied will be described.

FIG. 2 is a control block diagram of an image encoding apparatus and a decoding apparatus according to an embodiment of the present invention. As shown, the image encoding apparatus 200A according to the present embodiment includes a keyframe encoding unit 210, a transform unit 221, a quantization unit 222, an LDPC encoding unit 223, and a buffer 224. The decoding apparatus 200B includes a keyframe decoding unit 231, a buffer 232, a side information generating unit 240, a transform unit 241, a channel noise modeler 242, an LDPC decoding unit 251, an image reconstruction unit 252, an inverse transform unit 253, and a deblocking filter unit 260.

The encoding apparatus 200A according to the Wyner-Ziv coding technique classifies the pictures of the source video content into two types: frames to be coded by the distributed video coding scheme (hereinafter referred to as 'WZ frames'), and pictures to be coded by a conventional coding scheme rather than the distributed video coding scheme (hereinafter referred to as 'key frames').

The keyframes are encoded by the intraframe encoding method of, for example, H.264 / AVC in the keyframe encoding unit 210 and transmitted to the decoding apparatus 200B. The keyframe decoding unit 231 of the decoding apparatus 200B according to the Wyner-Ziv coding technique reconstructs the transmitted keyframes.

To encode a WZ frame, the transform unit 221 and the quantization unit 222 of the encoding apparatus 200A transform and quantize the WZ frame, and the LDPC encoding unit 223 divides the quantized values of the WZ frame into predetermined coding units and generates parity bits for each coding unit using a channel code.

The generated parity bits are temporarily stored in the parity buffer 224 and are sequentially transmitted when the decoding apparatus 200B requests parity through a feedback channel.

The auxiliary information generator 240 generates side information (SI) corresponding to the WZ frame by using the keyframe reconstructed by the keyframe decoding unit 231.

The SI is converted and an SI error is estimated by the channel noise modeler 242, and the SI error is output to the LDPC decoding unit 251.

The side information generator 240 assumes linear motion between the key frames located before and after the current WZ frame and generates side information corresponding to the WZ frame to be restored using interpolation. In some cases extrapolation may be used instead of interpolation, but since the noise in side information generated by interpolation is smaller than the noise in side information generated by extrapolation, interpolation is used in most cases.

The LDPC decoding unit 251 receives a parity transmitted from the encoding apparatus 200A and estimates a quantized value.

The image reconstructor 252 receives the quantization value estimated by the LDPC decoder 251, and the inverse transformer 253 reconstructs the WZ frame by inversely transforming it.

If decoding fails because the parity bits provided from the encoding apparatus 200A are not sufficient to ensure successful decoding, the LDPC decoding unit 251 requests additional parity bits from the encoding apparatus 200A through the feedback channel, and this process is repeated until decoding succeeds.

The deblocking filter unit 260 performs deblocking filtering on the reconstructed WZ frame. FIG. 3 is a diagram illustrating adjacent blocks on which deblocking is performed according to the present embodiment. As shown, the P block and the Q block are located adjacent to each other, sharing a boundary. The filtering may be performed on the pixels p0, p1, p2, p3, q0, q1, q2 and q3 located adjacent to the boundary.

Since the decoding apparatus generates the SI by performing motion estimation and motion compensation, the quality of the SI is very sensitive to motion estimation errors. The performance of motion estimation varies between blocks, which also changes the quality of the SI. Blocking artifacts occur due to the quality differences between blocks produced by motion compensation in the SI and due to quantization, which degrades the objective and subjective picture quality of the decoded WZ frame.

FIG. 4 is a diagram for describing artifacts between adjacent blocks. As shown, there are two adjacent blocks, a P block and a Q block, with different SI errors. Assuming that the P block and the Q block are located in a gentle region, the original pixels p3 to q3 have values very close to each other. However, the different SI errors added to the blocks produce very different values when the blocks are restored. Thus, pixels p0 through p3 are similar to each other, and pixels q0 through q3 are similar to each other, but the difference between pixels p0 and q0 becomes large; that is, blocking artifacts occur. In practice, the reconstructed value comes close to the median of the quantization interval, so the blocking artifacts in the reconstructed frame tend to be smaller than the artifacts in the SI. However, blocking artifacts are still noticeable and require a very careful approach to filtering.

Consider the case where the SI error of the P block is larger than that of the Q block; as shown in FIG. 4, blocking artifacts then occur mostly in the P block. Ideally, a strong filter should therefore be applied to the P block and a weak filter to the Q block. However, since the decoding apparatus knows only the estimated SI error rather than the actual value, it cannot know exactly, but only approximately, whether each block has a large or a small SI error. In a block expected to have a small SI error, the pixels contained in the block are close to the original pixel values and therefore have high reliability, so only a small compensation value should be added to that block. Such a block may be referred to as a 'high reliability' block.

On the other hand, a block with a large error is treated as a 'low reliability' block, and a relatively large compensation value is added to it to remove the blocking artifacts.

As a result, a block having a high reliability serves to help correct a low reliability block. This analysis can also be seen in FIG. 4, where a strong filter with a larger compensation value than the Q block will be applied to the P block.

An important issue in deblocking is reducing the pixel value difference between the two blocks that is caused by the blocking artifact, not the difference that exists in the original pixel values. Therefore, it is effective to determine the reliability of each block in the deblocking process: if a block is found to have lower reliability than the other block, its pixels are changed more. Reliability can therefore play a very important role in the deblocking filter for distributed video coding, and in the present invention the boundary strength can be controlled based on the reliability of each block itself as well as on the overall error of the two adjacent blocks.

As shown in FIG. 3, the boundary between two adjacent blocks may have a boundary strength for the P block and a boundary strength for the Q block. The deblocking filter 260 according to the present embodiment filters the pixel values of the P block and the Q block in different ways based on two boundary intensities. The pixels in the block with high reliability are changed less, and the pixels in the block with low reliability are changed much so as to approach the pixel values with high reliability.

According to the conventional limited channel noise modeling, the difference between the forward motion frame and the backward motion frame of the key frame used in the SI generation is used to estimate the SI error. In this embodiment, the inverse of the difference between the frames is used as the reliability.

That is, the mean squared value of the difference between the portions corresponding to the P block in the forward motion frame and the backward motion frame is estimated as the error E_P of the P block, and the reliability R_P of the P block is determined from the ratio of the errors of the P block and the Q block. Expressed as equations:

E_P = MSE(R_P^fwd - R_P^bwd) (4)

R_P: the reliability of the P block, determined from the ratio of the estimated errors E_P and E_Q (5)

FIG. 5 is a control flowchart explaining the method of estimating reliability for each block according to the present embodiment. Referring to FIG. 5, the reliability estimation method is summarized as follows.

First, a block n for which reliability is to be estimated is set (S510), and the forward motion frame and backward motion frame corresponding to the block are generated (S520).

The mean squared value of the difference between the generated forward and backward motion frames, E_n = MSE(R_n^fwd - R_n^bwd), is calculated as the coding error (S530).

Next, the inverse of the calculated coding error (R_n = 1/E_n), and more specifically the ratio derived from it relative to the total error with the neighboring block, is estimated as the reliability (S540).

When the reliability estimation for one block is completed, the reliability estimation for the next block is performed (S550), and this process is repeated until the frame is completed (S560).
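
A sketch of the per-block estimation of FIG. 5: the coding error E_n is the mean squared difference between the forward and backward motion-compensated frames over block n (S530), and its inverse is used as the reliability (S540). The block size and the toy frames are illustrative assumptions.

```python
import numpy as np

def block_errors(fwd: np.ndarray, bwd: np.ndarray, block: int = 8) -> np.ndarray:
    """E_n = MSE(R_n_fwd - R_n_bwd) for every block of the frame (S530)."""
    h, w = fwd.shape
    errs = np.empty((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            d = (fwd[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float) -
                 bwd[by*block:(by+1)*block, bx*block:(bx+1)*block].astype(float))
            errs[by, bx] = np.mean(d * d)
    return errs

def reliability(errs: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """R_n = 1 / E_n (S540); eps avoids division by zero for perfect blocks."""
    return 1.0 / (errs + eps)

# Toy frames: the bottom half moves differently, so its blocks get lower reliability.
rng = np.random.default_rng(0)
fwd = rng.integers(0, 256, (32, 32)).astype(np.uint8)
bwd = fwd.copy()
bwd[16:, :] = rng.integers(0, 256, (16, 32))
print(reliability(block_errors(fwd, bwd)))
```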

The parameters for filtering are determined based on the estimated reliability as follows.

(a) Quantization dependent parameters

Quantization-dependent parameters are used to indicate the activity of the video content. In the existing H.264/AVC specification, a high QP results in coarser quantization of the pixels, which causes more severe blocking artifacts, and the thresholds used in the specification scale with this severity. The present embodiment adopts the same concept: when the reliability is very low and the error is determined to be large, the thresholds α' and β' are expanded as follows. When the thresholds according to the H.264/AVC standard are denoted α and β, an extension factor expressed in terms of the estimated error with respect to a threshold (σ = 64) is applied to obtain α' and β' according to the present embodiment.

α' = (1 + filter_extension) α(QP) (6)

β' = (1 + filter_extension) β(QP) (7)

filter_extension: extension factor expressed in terms of the estimated SI error (σ = 64) (8)
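
A small sketch of Equations (6) and (7). The exact form of filter_extension in Equation (8) is not given in closed form above, so the saturating expression used below is only a placeholder assumption.

```python
def filter_extension(si_error: float, sigma: float = 64.0) -> float:
    """Hypothetical stand-in for Equation (8): grows with the estimated SI
    error and saturates at 1. Not the patent's exact expression."""
    return min(si_error / sigma, 1.0)

def extended_thresholds(alpha_qp: float, beta_qp: float, si_error: float):
    """Equations (6) and (7): alpha' and beta' widen when the error is large."""
    ext = filter_extension(si_error)
    return (1.0 + ext) * alpha_qp, (1.0 + ext) * beta_qp

# With a large estimated SI error the thresholds roughly double,
# so more boundaries qualify for filtering.
print(extended_thresholds(alpha_qp=20.0, beta_qp=8.0, si_error=80.0))
```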

(b) Boundary strength (Bs_p)

In the deblocking filter, Bs is a very important factor in determining how strongly deblocking should be applied to each block boundary. In this embodiment, Bs must be large for a block with low reliability, because such a block is expected to have very serious blocking artifacts, and Bs must be a function of the reliability of the two blocks sharing the same block boundary. Another important factor for Bs is the quantization matrix (QM): the quality of the decoded keyframe is very low at a large QP, so motion estimation cannot be performed properly and the quality of the generated SI may also be very low. Therefore, blocking artifacts are very serious when the quantization step corresponding to a small quantization matrix index is large. As mentioned above, Bs should be a function of the reliability ratio R_P, defined separately from the perspective of the P block and of the Q block rather than jointly. Bs_p according to the present embodiment is as follows.

Bs_p: an exponential function of ll_p, scaled according to the quantization matrix index and the constant a (9)

Here ll_p is the Laplacian variance parameter that estimates the error of the P block in the SI. A small ll_p, i.e. a block with a large SI error, leads to strong filtering, while a large ll_p, i.e. a block with a small SI error, rapidly lowers Bs_p toward 1 so that the deblocking filter has little effect; in other words, Bs_p is an exponential function of ll_p. With a = 32, a strong filter is applied if Bs_p is larger than the threshold μ = 2a; otherwise the default filter is applied.

(c) Clipping parameters (Cp0, Cp1)

Cp1 is similar to C1 of the H.264/AVC standard. However, since Bs is designed differently for distributed video coding, Cp1 is calculated by applying the ratio Bs_p/a to the maximum value of C1 (i.e. C1 at Bs = 4 for the current QP). The ratio Bs_p/a plays the same role as Bs/4 in the H.264/AVC specification. Cp1 according to the present embodiment is as follows.

Cp1 = C1(Bs = 4, QP) · (Bs_p / a) (10)

Cp0 is initially set equal to Cp1 and is incremented by one when the condition |p2 - p0| < β is satisfied.

(d) Compensation value (Δpi')

To use the reliability of adjacent blocks, the compensation value depends not only on the relationship between the pixels in the currently processed line but also on the reliability relationship between the two blocks. How the inter-pixel relationship is used to calculate the compensation value is well described in the H.264/AVC specification, so this embodiment focuses on the effect of block reliability. If the P block and the Q block have different reliability, the blocks make different contributions to reducing the blocking artifacts. A block with high reliability tends to retain its original pixel values without much change by filtering, while a block with low reliability changes its pixel values more than a block with high reliability. Thus, in the block with high reliability the degree of deformation is not large, yet blocking artifacts can still be eliminated. In addition, two adjacent blocks satisfying the filtering conditions (1), (2) and (3) have a high spatial correlation, so the pixel values in the low reliability block move closer to the pixel values in the high reliability block, which tends to bring those pixels closer to the original pixel values, i.e. to increase their reliability. When the compensation value according to the H.264/AVC standard is denoted Δpi, the compensation value Δpi' according to the present embodiment may be expressed as follows.

Δpi' = Δpi · (0.5 + Rp), i = 0, 1, 2 (11)

After the compensation value is calculated, the filtering value is added to the restored value and the clipping process is performed.
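
A sketch combining Equations (10) and (11) with the clipping step described above. The value C1(Bs = 4, QP) is treated as a given table entry, and the input values in the example are arbitrary.

```python
def clipping_limits(c1_max: float, bs_p: float, a: float = 32.0,
                    p2: float = 0.0, p0: float = 0.0, beta: float = 8.0):
    """Equation (10): Cp1 = C1(Bs=4, QP) * (Bs_p / a);
    Cp0 starts at Cp1 and grows by one when |p2 - p0| < beta."""
    cp1 = c1_max * (bs_p / a)
    cp0 = cp1 + (1 if abs(p2 - p0) < beta else 0)
    return cp0, cp1

def reliability_compensation(delta_std: float, r_p: float) -> float:
    """Equation (11): delta_pi' = delta_pi * (0.5 + Rp)."""
    return delta_std * (0.5 + r_p)

def apply_compensation(pixel: float, delta: float, c0: float) -> float:
    """Add the clipped compensation to the reconstructed pixel value."""
    delta = max(-c0, min(c0, delta))
    return max(0.0, min(255.0, pixel + delta))

cp0, cp1 = clipping_limits(c1_max=4.0, bs_p=48.0, p2=101, p0=100)
print(apply_compensation(100.0, reliability_compensation(3.0, r_p=0.8), cp0))
```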

Table 1 shows the results of filtering according to the present embodiment for four sequences of Foreman, Hallmonitor, Coastguard, and Soccer.

Sequence | QM | Gain due to the DISCOVER DF filter (I): SSIM | PSNR [dB] | Gain due to the proposed DF filter (II): SSIM | PSNR [dB] | Difference between gains of (I) and (II): SSIM | PSNR [dB]
Foreman | 1 | 0.010 | 0.156 | 0.008 | 0.164 | -0.002 | 0.008
Foreman | 4 | 0.006 | 0.146 | 0.006 | 0.214 | 0.000 | 0.068
Foreman | 7 | 0.002 | 0.086 | 0.003 | 0.174 | 0.001 | 0.088
Foreman | 8 | 0.000 | 0.036 | 0.001 | 0.074 | 0.001 | 0.038
Foreman | Average | 0.005 | 0.106 | 0.005 | 0.157 | 0.000 | 0.051
Hallmonitor | 1 | 0.001 | 0.026 | 0.002 | 0.076 | 0.001 | 0.050
Hallmonitor | 4 | 0.001 | 0.050 | 0.001 | 0.084 | 0.000 | 0.034
Hallmonitor | 7 | 0.001 | 0.060 | 0.001 | 0.066 | 0.000 | 0.006
Hallmonitor | 8 | 0.000 | 0.048 | 0.000 | 0.036 | 0.000 | -0.012
Hallmonitor | Average | 0.001 | 0.046 | 0.001 | 0.066 | 0.000 | 0.020
Soccer | 1 | 0.005 | 0.144 | 0.009 | 0.254 | 0.004 | 0.110
Soccer | 4 | 0.005 | 0.216 | 0.009 | 0.344 | 0.004 | 0.128
Soccer | 7 | 0.000 | 0.132 | 0.003 | 0.236 | 0.003 | 0.104
Soccer | 8 | -0.003 | -0.052 | 0.001 | 0.112 | 0.004 | 0.164
Soccer | Average | 0.002 | 0.110 | 0.005 | 0.237 | 0.004 | 0.127

Each sequence is a QCIF file at 15 Hz with 150 frames. To evaluate objective performance, PSNR and the Structural Similarity Index (SSIM) are used with quantization matrices 1, 4, 7 and 8 and a Group of Pictures (GOP) size of 2. The table shows the PSNR and SSIM gains, relative to TDWZ (transform-domain Wyner-Ziv video coding) without any deblocking filter, of the DISCOVER DF (R. Martins, C. Brites, J. Ascenso, and F. Pereira, "Adaptive deblocking filter for transform domain Wyner-Ziv video coding", IET Image Processing, vol. 3, no. 6, pp. 315-328, December 2009) and of the filtering method according to the present embodiment.

As shown in Table 1, both the DISCOVER DF and the filtering according to the present embodiment improve on the case where no filter is used. On average, the filter according to the present embodiment is better than the DISCOVER DF by 0.051 dB in the Foreman sequence and by 0.127 dB in the Soccer sequence. For a slow-motion sequence such as Hallmonitor, the gain of the filter according to this embodiment is not large, because substantially large blocking artifacts do not occur. In terms of SSIM, the filtering method according to the present embodiment is also better than the DISCOVER DF: in the Soccer sequence the gain over TDWZ without a deblocking filter is about 0.005 higher, and the proposed filtering also gives the best results in the Foreman and Hallmonitor sequences.

FIGS. 6 and 7 illustrate subjective results for (a) the original frame, (b) the unfiltered frame, (c) the frame to which the DISCOVER DF is applied, and (d) the frame to which the filtering according to the present embodiment is applied. FIG. 6 shows frame 7 of the Foreman sequence with QM 4 and GOP 2, and FIG. 7 shows frame 9 of the Soccer sequence with QM 4 and GOP 2.

Unfiltered frames have very low quality due to blocking artifacts. The subjective quality issues were alleviated in some cases by the DISCOVER DF, but blocking artifacts remained at the cheek and chin of the face and behind the football player. When the filtering according to the present embodiment is applied, a large part of these remaining artifacts is removed. In other words, both the subjective and the objective performance are improved by applying the filtering according to the present embodiment.

In addition, when the filtering according to the present embodiment is applied, not only the blocking artifacts are reduced, but also the rate-distortion performance is increased by about 0.23 dB experimentally.

According to the present embodiment, a deblocking filter is designed as a post filter, and this deblocking filter is based on a new concept of treating each block sharing a block boundary in a different manner based on reliability.

FIG. 8 is a control flowchart explaining the deblocking filtering method according to the present embodiment. Referring to FIG. 8, the filtering method according to the present embodiment is summarized as follows.

First, reliability is estimated based on an error for each block as shown in FIG. 5 (S810).

Based on the estimated reliability, the filtering parameters, that is, BS, α, β, C0 and C1, are calculated (S820).

According to the calculated BS, the deblocking filter unit 260 selects a filter by determining whether to apply a filter, and if so, whether to apply a strong filter or a default filter (S830).

When the filter is selected, the filtering compensation value is calculated (S840), and the filtering is performed by applying the calculated compensation value to pixels adjacent to the boundary (S850).
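
Putting steps S810 to S850 together, a skeleton of the per-boundary loop is sketched below. The helper callables are placeholders for the computations described above, and the trivial wiring at the end only exercises the skeleton; it is not the patent's actual filter.

```python
def deblock_boundary(p, q, est_reliability, compute_params, select_filter,
                     compute_compensation, apply_filter):
    """Skeleton of FIG. 8: estimate reliability (S810), derive BS/alpha/beta/
    C0/C1 (S820), select a filter (S830), compute the compensation (S840),
    and apply it to the pixels adjacent to the boundary (S850)."""
    r_p, r_q = est_reliability(p), est_reliability(q)
    params = compute_params(r_p, r_q)                   # S820
    mode = select_filter(params)                        # S830: None / 'default' / 'strong'
    if mode is None:
        return p, q
    delta = compute_compensation(p, q, params, mode)    # S840
    return apply_filter(p, q, delta, params)            # S850

# Trivial wiring just to exercise the skeleton.
out = deblock_boundary(
    [100, 101, 102, 103], [110, 111, 112, 113],
    est_reliability=lambda blk: 1.0 / (1.0 + abs(blk[0] - blk[-1])),
    compute_params=lambda r_p, r_q: {"bs": 2, "c0": 2},
    select_filter=lambda prm: "default" if prm["bs"] > 0 else None,
    compute_compensation=lambda p, q, prm, mode: min(prm["c0"], (q[0] - p[0]) // 4),
    apply_filter=lambda p, q, d, prm: ([p[0] + d] + p[1:], [q[0] - d] + q[1:]),
)
print(out)
```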

FIG. 9 is a block diagram showing the configuration of an image encoding apparatus and a decoding apparatus according to another embodiment of the present invention. The image encoding apparatus and the decoding apparatus according to this embodiment are based on Distributed Compressive Video Sensing (DCVS).

Distributed compressive video sensing has very low complexity in sampling and compression in the encoder, and therefore occupies a very important position as a framework for future video coding on mobile devices. However, blocking artifacts and, especially, high frequency vibration artifacts seriously undermine perceptual quality. The image decoding apparatus according to the present embodiment aims to reduce both the blocking artifacts and the high frequency vibration. By introducing the concept of reliability, the subjective and objective quality can be improved by differentiating blocks decoded with high quality from blocks decoded with low quality.

Compressive Sensing (CS) is a technique that uses the sparsity inherent in the signal to be sampled at a rate much lower than the Nyquist rate while restoring the original signal.

In image compression, if the transform coefficients of the decoded image are very large in some areas and zero or close to zero in the others, the pixel data are called sparse in the basis ψ. The coefficients in the ψ basis can be sampled using a very small number of measurements, where the measurement matrix φ can simply be chosen as a Gaussian random matrix.

y = φx = φψs (12)

s is the sparse representation of the image signal x; that is, x may be represented as ψs.
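
Equation (12) in code: a signal that is sparse in a basis ψ is observed through far fewer random measurements than samples, with a Gaussian random matrix as φ. The signal length, the number of measurements, and the DCT basis are illustrative choices.

```python
import numpy as np
from scipy.fft import idct

n, m, k = 64, 20, 3          # signal length, number of measurements, sparsity
rng = np.random.default_rng(1)

# Sparse representation s in a DCT basis psi, so x = psi @ s  (x = psi * s).
s = np.zeros(n)
s[rng.choice(n, k, replace=False)] = rng.normal(size=k)
psi = idct(np.eye(n), axis=0, norm="ortho")   # inverse-DCT columns as the basis
x = psi @ s

# Measurement: y = phi @ x = phi @ psi @ s, with phi a Gaussian random matrix.
phi = rng.normal(size=(m, n)) / np.sqrt(m)
y = phi @ x
print(y.shape)   # (20,) -- m measurements instead of n samples
```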

In implementing an encoding device with low complexity in sampling and compression, this CS concept is well suited for distributed video coding. In fact, the combination of CS and distributed video coding can be an opportunity for the encoding device to provide a very low cost for sampling and compression of video data.

The general reconstruction algorithm according to DCVS focuses on the sparsity of the signal, not on the characteristics of the image signal itself, such as smoothness. However, considering sparsity alone is not sufficient for best image reconstruction, and high frequency vibration artifacts may occur as already discussed.

As illustrated, the image encoding apparatus and the decoding apparatus according to the present embodiment include a keyframe encoding unit 910, a measurement generation unit 920, a keyframe decoding unit 930, a side information generation unit 940, a measurement generation unit 950, a reconstruction unit 960, a channel noise modeler 970, and a deblocking filter unit 980.

Sequences input to the DCVS system are separated into keyframes and non-keyframes, then sampled and encoded. A keyframe is encoded as an intra picture by the keyframe encoding unit 910 according to the H.264/AVC standard, and a non-keyframe, i.e. a CS frame, is encoded by the measurement generation unit 920 on a CS block basis with a specific CS block size. The reconstruction unit 960 reconstructs the CS frame based on the SI measurements generated by the motion estimation and compensation of the key frames in the side information generation unit 940 and the measurement generation unit 950. Encoding and decoding of an image according to the present embodiment apply the concepts of the DVC system described in the previous embodiment, so the description of components that perform duplicate or similar functions is omitted.

Since the non-keyframes of DCVS are encoded using CS, the reconstructed frame may be severely degraded by high frequency vibration artifacts. The SI can have many artifacts because it is generated by motion estimation and compensation. Moreover, the current DCVS framework is block based, and the performance of the CS may vary from block to block because the sparsity of the samples can also vary from block to block within the frame.

Thus, different coding errors in the distributed domain (e.g., the transform domain) in the decoded block create discontinuities between block boundaries known as block artifacts.

High frequency vibration and blocking artifacts severely damage the perceptual quality of the reconstructed video. To address this, a Wiener filter may be included in the restoration process; however, because such filters are designed as in-loop filters that require very heavy computation, this approach may not eliminate the artifacts sufficiently. In this embodiment, a post-processing deblocking filter with the characteristics of a low pass filter is proposed to reduce both high frequency vibration and blocking artifacts. The quality difference between reconstructed blocks is taken into account by differentiating blocks decoded with high quality from blocks decoded with low quality.

Pixels included in a block decoded with high quality are close to the original values, i.e. they are pixels with high reliability, and a weak filter is applied so as not to change their values much. Conversely, pixels in blocks decoded with low quality have low reliability. In a gentle region, the original pixels located at the block boundary are similar on both sides, whether the block was decoded with high or low quality. If, after filtering, a pixel with low reliability changes its value toward a pixel with high reliability in the neighboring block, its reliability is improved because it approaches the original pixel value. Therefore, the filtering method according to the present embodiment applies a weak filter to a block with high reliability, whose estimated artifact is small, to avoid adverse effects of filtering, and applies a strong filter to a block with low reliability.

While blocking artifacts occur at the boundaries of the CS block, as in conventional video coding, high frequency vibration artifacts may occur throughout the block. Therefore, filtering needs to be applied over the whole CS block. FIG. 10 is a diagram illustrating the blocks on which deblocking is performed according to the present embodiment. As shown, filtering is performed regularly in the horizontal and vertical directions at intervals of a particular 'step size'. In summary, the filtering according to the present embodiment is performed both inside and at the boundaries of the CS block, as shown in FIG. 10.
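
A small helper suggesting where the filtering lines of FIG. 10 could fall, assuming filtering positions are taken at a regular step so that they cover both the inside and the boundaries of each CS block. The step and block sizes are example values.

```python
def filtering_positions(frame_size: int, cs_block: int = 16, step: int = 4):
    """Return the column (or row) indices at which vertical (or horizontal)
    filtering is applied: every `step` pixels, which also covers the CS block
    boundaries when cs_block is a multiple of step."""
    positions = list(range(step, frame_size, step))
    boundaries = [p for p in positions if p % cs_block == 0]
    return positions, boundaries

pos, bnd = filtering_positions(frame_size=64)
print(pos)   # filtering lines inside and at the boundaries of the CS blocks
print(bnd)   # the subset that coincides with CS block boundaries
```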

In the filtering, the artifacts are estimated in order to apply a different reliability factor to each block. If (x - SI) is sparse, i.e. the SI is good, a good frame is recovered. The sparsity of the measurement y to be restored in Equation (12) depends strongly not only on the sparsity of the image itself but also on how well the SI fits the original signal x: the sparsity of y = φ(x - SI) determines the recovery performance. Thus, the amount of artifacts in the reconstructed frame can be estimated from the amount of error in the SI. According to conventional channel noise modeling, the difference between the forward motion frame and the backward motion frame is used to estimate the SI error, and, as in the embodiment described above, this embodiment uses this inter-frame difference for reliability estimation. However, if the error is calculated over a block that is too small, the estimate may not be accurate; conversely, the artifact estimated over a block that is too large may not represent the pixels around the filtering line. Thus, the estimated artifact is calculated for each block of an appropriate 'step size'.

Assuming there are P blocks and Q blocks proximate the filtering line, the estimated artifact ep for the P blocks is the mean square of the difference of the portions corresponding to the P blocks of the forward motion frame and the backward motion frame.

e_P = MSE(R_P^fwd - R_P^bwd) (13)

In addition, this difference in the reliability of neighboring blocks can be expressed as their ratio as follows.

R_p: reliability factor expressed as the ratio of the estimated artifacts e_P and e_Q of the neighboring blocks (14)

This embodiment uses this reliability information to improve filtering performance. Estimation of reliability may be performed in the same or similar manner as in FIG. 5.
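
A sketch of Equations (13) and (14): the artifact of a block is estimated as the MSE between the corresponding regions of the forward and backward motion-compensated frames, and a reliability factor is derived from the ratio of the artifacts of the two blocks sharing a filtering line. The exact form of the ratio in Equation (14) is an assumption here.

```python
import numpy as np

def estimated_artifact(fwd_blk: np.ndarray, bwd_blk: np.ndarray) -> float:
    """Equation (13): e_P = MSE over the P-block region of the forward and
    backward motion-compensated frames."""
    d = fwd_blk.astype(float) - bwd_blk.astype(float)
    return float(np.mean(d * d))

def artifact_ratio(e_p: float, e_q: float, eps: float = 1e-6) -> float:
    """Assumed form of Equation (14): share of the total estimated artifact
    attributed to the P block. Larger values mean lower reliability."""
    return e_p / (e_p + e_q + eps)

rng = np.random.default_rng(2)
fwd = rng.integers(0, 256, (4, 4))
bwd = rng.integers(0, 256, (4, 4))
print(artifact_ratio(estimated_artifact(fwd, bwd), 50.0))
```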

The parameters for filtering are determined based on the estimated reliability as follows.

(a) Quantization dependent parameters

Thresholds α and β, which are used in the existing H.264/AVC specification, distinguish whether or not the boundary is located in a gentle region, and they can be adjusted adaptively through IndexA and IndexB. α'' and β'' according to the present embodiment extend the thresholds α and β of the H.264/AVC standard using an expansion factor σ = (e_P + e_Q)/(2e²) derived from the estimated artifacts, as follows.

α "= (1+ σ) α (QP) (15)

β "= (1+ σ) β (QP) (16)

(b) Boundary strength (Bs_p)

In the deblocking filter, Bs controls how strongly deblocking is applied to the boundary of each block. If the artifact is estimated to be large, a strong filter is used; if the artifact is estimated to be small, a very weak filter or no filter is applied. To this end, in the present embodiment Bs is designed as an exponential function of the estimated artifact. High frequency vibration and blocking artifacts are more severe at low subrates, so in the present embodiment, unlike the previous embodiment, the subrate r is used as a factor of Bs instead of the quantization matrix index QM. Bs_p according to the present embodiment is as follows.

Bs_p: an exponential function of the estimated artifact, with the subrate r as a factor (17)

In DCVS systems, the blocking artifact is not very serious because the quantization of transform coefficients is omitted. Therefore, according to the present embodiment, a strong filter is not applied; either the default filter is applied or filtering is omitted.

(c) Clipping values and filtering compensation (Cp0, Cp1)

The clipping values in this embodiment differ from those of the H.264/AVC standard because the boundary strength is generated in a different way. The clipping value according to the present embodiment is scaled by the ratio Bs_p/a as follows.

Cp1 = C1(Bs = 4, QP) · (Bs_p / a) (17)

For the effect of reliability, the filtering compensation can be expressed as a function of Rp, the estimated artifact ratio, i.e. the inverse relationship of the reliability factors. When the compensation value according to the H.264/AVC standard is denoted Δpi, the compensation value Δpi' according to the present embodiment may be expressed as follows.

Δpi' = Δpi · (0.5 + Rp), i = 0, 1, 2 (18)

An experiment applying the filtering method according to the present embodiment to a DCVS system is described below. In the experiments, the CS block sizes are 16 and 4, the CS algorithm is block-based compressive sensing, and the reconstruction algorithm is smoothed projected Landweber. Although Wiener filtering is applied in the restoration process, high frequency vibration and blocking artifacts are still noticeable even after Wiener filtering (see FIG. 11).

The sequences used in the experiments are the Hallmonitor sequence for slow motion (165 frames), the Foreman sequence for complex motion (149 frames), the Coastguard sequence for detailed content, and the Soccer sequence for fast motion. The difference in the motion characteristics creates various SI qualities, which will be useful for evaluating the filtering. The subrate distortion point is appropriately selected as shown in Table 2 so that the quality of the keyframe and the quality of the non-keyframe do not differ significantly.

[Table 2: subrate-distortion points selected for each test sequence]

In general, the low accuracy of motion estimation for fast sequences produces an SI of low quality. The SI is expected to aid reconstruction by making the signal to be reconstructed sparser.

However, a poorly generated SI is not helpful, and high frequency vibration artifacts strongly affect the reconstructed frame. Therefore, a high gain from the proposed filtering is expected for fast sequences such as Soccer. Conversely, the opposite holds for Hallmonitor, which has little motion: if the objects in the picture are stable, the SI is generated properly, which is very effective for reconstruction. If the new signal fed into the reconstruction is very sparse, no large artifacts are found in most reconstructed frames, so the filtering proposed in this embodiment is not very effective for this sequence.

Table 3 and Table 4 show the results of applying the filter (RF) and a low pass filter (LPF) according to the present embodiment to the first framework (I) and the second framework (II). Table 3 shows the results for a CS block size of 16, and Table 4 for a CS block size of 4.

[Table 3: results for a CS block size of 16]

[Table 4: results for a CS block size of 4]

Applying a plain low pass filter does not selectively remove the high frequency artifact components of the signal or alleviate the block-to-block differences in coding artifacts. Thus, the low pass filter seriously degrades the efficiency of the DCVS system, especially in sequences with high quality or detailed content: in both the first framework (I) and the second framework (II), losses of about 8 dB in Hallmonitor and about 6 dB in Coastguard occurred. On the other hand, the filter according to this embodiment substantially improved the reconstructed frames of Soccer, as expected. On average, a gain of about 0.28 dB was obtained for CS with a large block size of 16x16 and about 0.24 dB for CS with a small block size of 4x4. In the Hallmonitor and Foreman sequences, the artifacts were not as large as in the Soccer sequence, so the gain from the applied filter was not large.

In fact, Foreman has more complex movements and more artifacts than Hallmonitor. However, the efficiency of the filter according to this embodiment depends not only on the amount of filtered artifacts but also on the accuracy of the reliability. Due to the nonlinear movement in Foreman's face region, the artifacts estimated in Foreman are less accurate than in Hallmonitor. Thus, despite more artifacts, Foreman's gain is not large compared to Hallmonitor.

FIG. 11 shows experimental images to which the deblocking filtering according to the present embodiment is applied. FIG. 11(a1) is an original image of Foreman, frame 39, at a subrate of 0.2 and QP = 40. (a2) is the image of the second framework (II) without filtering and (a3) with filtering; (a4) is the image of the first framework (I) without filtering and (a5) with filtering. FIG. 11(b1) is an original image of Soccer, frame 1, at a subrate of 0.2 and QP = 42. (b2) is the image of the second framework (II) without filtering and (b3) with filtering; (b4) is the image of the first framework (I) without filtering and (b5) with filtering.

As can be seen in FIG. 11, the subjective quality is considerably improved in both the first framework (I) and the second framework (II). Many blocking artifacts occur at small CS block sizes, whereas high frequency vibration artifacts occur at large CS block sizes. Blocking artifacts appear on Foreman's face and on the Soccer player's feet, which involve a great deal of nonlinear motion. In FIG. 11, the results encoded at a subrate of 0.2 clearly show high frequency vibration and blocking artifacts. After filtering, the blocking artifacts on Foreman's face are significantly reduced, and many of the high frequency vibration artifacts that severely degraded the reconstructed frames of the first framework (I) are also removed. As a result, smooth reconstructed frames are obtained and the perceptual quality is significantly improved.

This embodiment provides a reliability-based filtering method to reduce high frequency vibration and blocking artifacts in the DCVS framework. Reliability indicates how close each reconstructed pixel value is to the original pixel value. Accordingly, the filtering according to the present embodiment preserves pixels with high reliability as much as possible and adjusts pixels with low reliability.

FIG. 12 is a control flowchart illustrating a deblocking filtering method according to the present embodiment.

First, the reliability of each block is estimated based on an error (S1210).

Then, the filtering parameters, that is, BS, α, β, C0, and C1, are calculated based on the estimated reliability (S1220).

According to the calculated BS, the deblocking filter unit 260 selects a filter (S1230). In the image coding according to the present embodiment, the quantization of transform coefficients is omitted, so the blocking artifacts are not very severe. Therefore, according to the present embodiment, the strong filter is not applied; the default filter may be applied or the filtering may be omitted.

When the filter is selected, the filtering compensation value is calculated (S1240), and the filtering is performed by applying the calculated compensation value to a pixel adjacent to the boundary (S1250).
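As an illustration of steps S1240 and S1250, the following Python sketch applies an H.264-style default (normal) filter at one boundary position. The thresholds alpha and beta, the clipping bound c, and the sample values are hypothetical stand-ins for the parameters α, β, and C0 derived from the estimated reliability; this is a sketch of the general technique, not the exact rule of this embodiment.

import numpy as np

def default_filter_boundary(p1, p0, q0, q1, alpha, beta, c):
    """Apply an H.264-style default (normal) filter at one boundary position.

    p1, p0 are the pixels on one side of the block boundary, q0, q1 on the other.
    alpha and beta are edge-activity thresholds and c is the clipping bound; in the
    described scheme these would be derived from the per-block reliability.
    """
    # Filter only when the step across the boundary looks like a coding artifact,
    # not a genuine image edge.
    if abs(p0 - q0) >= alpha or abs(p1 - p0) >= beta or abs(q1 - q0) >= beta:
        return p0, q0  # leave real edges untouched

    # Compensation value (offset), clipped to [-c, c] so strong true detail survives.
    delta = np.clip((4 * (q0 - p0) + (p1 - q1) + 4) >> 3, -c, c)
    return int(np.clip(p0 + delta, 0, 255)), int(np.clip(q0 - delta, 0, 255))

# Example: a small step of 12 across the boundary is smoothed,
# while a large step of 60 is treated as a real edge and kept.
print(default_filter_boundary(100, 102, 114, 116, alpha=20, beta=8, c=4))
print(default_filter_boundary(100, 102, 162, 164, alpha=20, beta=8, c=4))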

FIG. 13 is a control block diagram illustrating the configuration of an image encoding apparatus and an image decoding apparatus according to another embodiment of the present invention. The image encoding apparatus according to the present embodiment may include a prediction unit 1310, a transform and quantization unit 1320, an inverse transform and inverse quantization unit 1330, an entropy encoding unit 1340, and a deblocking filter unit 1350.

The prediction unit 1310 generates a prediction block, in which each pixel has a predicted pixel value, by predicting the current block using intra prediction or inter prediction.

In the case of intra prediction, the prediction unit 1310 generates an intra prediction block of the current block by using the available neighboring pixel values spatially located around the current block. In this case, an error value between the current block and the intra prediction block is calculated for each of the available intra prediction modes, and the intra prediction block is generated by applying the intra prediction mode having the minimum error value. In addition, by encoding an intra prediction mode having a minimum error value, information about the intra prediction mode is provided to the entropy encoding unit 1340.

In the case of inter prediction, the prediction unit 1310 calculates, for each of the available reference pictures located in the temporal vicinity of the current picture, an error value between the current block and a candidate inter prediction block, and generates the candidate block having the minimum error value as the inter prediction block for the current block. In this case, the motion vector is estimated based on the position of the inter prediction block having the minimum error value with respect to the current block. In addition, index information about the estimated motion vector and the reference picture is provided to the entropy encoding unit 1340.
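For illustration of the minimum-error search described above, the following Python sketch performs an exhaustive block-matching search using a SAD criterion; the search range, boundary handling, and SAD measure are assumptions made for the example and are not mandated by this embodiment.

import numpy as np

def full_search(current_block, ref_frame, top, left, search_range=8):
    """Find the motion vector minimizing the SAD between the current block and a
    candidate block in the reference frame (exhaustive search, illustration only)."""
    n = current_block.shape[0]
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref_frame.shape[0] or x + n > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + n, x:x + n]
            sad = np.abs(current_block.astype(int) - cand.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    dy, dx = best_mv
    pred = ref_frame[top + dy:top + dy + n, left + dx:left + dx + n]
    return best_mv, pred

# Usage: a block copied from the reference is found at displacement (2, 2).
ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cur = ref[20:36, 24:40]
mv, pred = full_search(cur, ref, top=18, left=22)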

The prediction block generated by intra or inter prediction is subtracted from the current block to generate a residual block. That is, a residual block is generated by calculating the difference between the original pixel value of each pixel of the current block and the predicted pixel value of the corresponding pixel of the prediction block, and the residual block is provided to the transform and quantization unit 1320.

The transform and quantization unit 1320 transforms and quantizes the residual block and generates an encoded residual block. In this case, various methods of transforming a spatial domain signal into a frequency domain signal, such as the Hadamard transform, the discrete cosine transform, or the discrete sine transform, may be used.

The inverse transform and inverse quantization unit 1330 inversely quantizes and inversely transforms the residual block transformed and quantized by the transform and quantization unit 1320 to restore the residual block. The inverse quantization and inverse transformation reverse the transform and quantization processes performed by the transform and quantization unit 1320 and may be implemented in various ways. For example, the transform and quantization unit 1320 and the inverse transform and inverse quantization unit 1330 may share a predefined transform/inverse transform and quantization/inverse quantization process, or the inverse transform and inverse quantization unit 1330 may perform the inverse quantization and inverse transformation by reversing the transform and quantization processes of the transform and quantization unit 1320 using information about those processes (for example, information on the transform size, transform shape, and quantization type) generated and transmitted by the transform and quantization unit 1320.
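For illustration, the following Python sketch shows a transform/quantization pair and the matching inverse that share one orthonormal DCT matrix, in the spirit of the shared process described above; the 8x8 transform size and the uniform quantization step are hypothetical choices, since the actual transform size, shape, and quantization type are determined and signalled by the transform and quantization unit 1320.

import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix, shared by the forward and inverse paths."""
    k = np.arange(n)
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def transform_quantize(residual, step):
    d = dct_matrix(residual.shape[0])
    coeff = d @ residual @ d.T                  # 2-D separable transform
    return np.round(coeff / step).astype(int)   # uniform quantization (illustrative)

def inverse_quantize_transform(levels, step):
    d = dct_matrix(levels.shape[0])
    coeff = levels.astype(float) * step         # inverse quantization
    return d.T @ coeff @ d                      # inverse transform

residual = np.random.randint(-20, 20, (8, 8))
levels = transform_quantize(residual, step=10)
recon = inverse_quantize_transform(levels, step=10)  # approximates the residual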

The residual block output from the inverse transform and inverse quantization unit 1330 is added to the prediction block generated by the prediction unit 1310 to produce a reconstructed block.

The entropy encoding unit 1340 entropy-encodes the residual block output from the transform and quantization unit 1320 and outputs the result. Although not shown in this embodiment of the present invention, the entropy encoding unit 1340 may encode not only the residual block but also various pieces of information necessary to decode the encoded bit stream, such as information about the block type, information about the intra prediction mode when the prediction mode is intra prediction, information about the motion vector when the prediction mode is inter prediction, and information about the transform and quantization types.

The entropy encoding unit 1340 may use various entropy encoding methods such as Context Adaptive Variable Length Coding (CAVLC) and Context Adaptive Binary Arithmetic Coding (CABAC).

The deblocking filter unit 1350 filters the reconstructed current block in order to reduce the blocking effect caused by block-based prediction and quantization. According to an embodiment of the present invention, the deblocking filter unit 1350 may perform deblocking filtering using block-level prediction information transmitted together with the reconstructed current block (for example, the intra prediction mode and intra prediction block size in the case of intra coding, or the reference picture index and motion vector in the case of inter coding) or information on the transform and quantization (for example, information on the size and shape of the transform block and information on the quantization parameter).
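For illustration only, the following Python sketch derives a boundary strength from such block-level coding information in the manner of the H.264 deblocking filter (simplified); the specific conditions and values come from that standard and are not the reliability-based rule of the present embodiment.

def boundary_strength(p_blk, q_blk):
    """Illustrative H.264-style boundary-strength decision from block-level
    coding information (intra mode, non-zero coefficients, motion data)."""
    if p_blk["intra"] or q_blk["intra"]:
        return 3                                  # strong filtering near intra blocks
    if p_blk["nonzero_coeffs"] or q_blk["nonzero_coeffs"]:
        return 2                                  # residual present on either side
    if (p_blk["ref_idx"] != q_blk["ref_idx"] or
            abs(p_blk["mv"][0] - q_blk["mv"][0]) >= 4 or      # quarter-pel units
            abs(p_blk["mv"][1] - q_blk["mv"][1]) >= 4):
        return 1                                  # motion discontinuity
    return 0                                      # no filtering

bs = boundary_strength(
    {"intra": False, "nonzero_coeffs": True, "ref_idx": 0, "mv": (4, 0)},
    {"intra": False, "nonzero_coeffs": False, "ref_idx": 0, "mv": (0, 0)},
)
print(bs)  # 2: the P block carries non-zero residual coefficients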

The entropy decoding unit 1360 may perform entropy decoding corresponding to the entropy coding scheme used in the encoding apparatus. For example, when CABAC is used in the encoding apparatus, the entropy decoding unit 1360 may also perform entropy decoding using CABAC. Among the information decoded by the entropy decoding unit 1360, the information for generating the prediction block is provided to the prediction unit 1380, and the residual values on which entropy decoding has been performed, that is, the quantized transform coefficients, may be input to the inverse quantization and inverse transform unit 1370.

The inverse quantization and inverse transform unit 1370 generates transform coefficients by performing inverse quantization based on the quantization parameter provided by the encoding apparatus and the coefficient values of the rearranged block, and may generate a residual block by performing, on the transform coefficients, an inverse transform corresponding to the transform performed by the transform unit of the encoding apparatus. For example, the inverse quantization and inverse transform unit 1370 may perform the inverse DCT and/or the inverse DST corresponding to the discrete cosine transform (DCT) and the discrete sine transform (DST) performed by the encoding apparatus.

The prediction unit 1380 may generate a prediction block for the current block based on the prediction block generation related information transmitted from the entropy decoding unit 1360 and previously decoded block and / or picture information.

The deblocking filter unit 1390 may apply deblocking filtering, a sample adaptive offset (SAO), and/or an adaptive loop filter (ALF) to the reconstructed block and/or picture.

The deblocking filter unit 1390 according to the present embodiment determines the reliability of the reconstructed block and controls the filtering strength accordingly. Reliability indicates how well a pixel value of the reconstructed image has been decoded: a well-decoded pixel has a value close to that of the original image and can be regarded as a pixel with high reliability, whereas a pixel whose value differs greatly from the original pixel value due to poor decoding has low reliability.

To estimate the reliability, the deblocking filter unit 1390 uses the number of non-zero coefficients remaining after inverse quantization and inverse transformation. FIG. 14 is a control flowchart illustrating a method of estimating the reliability of each block in the image encoding apparatus and the decoding apparatus of FIG. 13.

First, a block n for which the reliability is to be estimated is set (S1410), and the number of non-zero coefficients after inverse quantization and inverse transformation (n_nonzero^(T-1&Q-1)) is counted for that block (S1420).

The deblocking filter unit 1390 according to the present embodiment sets this count as the reliability of the corresponding block (S1430).

When the reliability estimation for one block is completed, the reliability of the next block is estimated (S1440), and this process is repeated until all blocks of the frame have been processed (S1450).
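A minimal Python sketch of steps S1410 to S1450 is shown below; the block indexing scheme and the 4x4 block size are assumptions made for the example. Each block's reliability is simply the count of non-zero coefficients of its inverse quantized and inverse transformed residual block, as stated above.

import numpy as np

def block_reliabilities(residual_blocks):
    """For each inverse quantized / inverse transformed residual block, count the
    non-zero coefficients and set that count as the block's reliability value."""
    return {idx: int(np.count_nonzero(blk)) for idx, blk in residual_blocks.items()}

# Hypothetical 4x4 residual blocks indexed by their (row, col) position in the frame.
blocks = {
    (0, 0): np.zeros((4, 4), dtype=int),               # all-zero residual block
    (0, 1): np.array([[3, 0, 0, 0]] + [[0] * 4] * 3),  # one non-zero coefficient
}
print(block_reliabilities(blocks))  # {(0, 0): 0, (0, 1): 1}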

Based on the estimated reliability, the filtering parameters, i.e., BS, α, β, C0, and C1, are calculated, and according to the calculated BS it is determined whether to apply a filter and, if so, whether to apply the strong filter or the default filter. When the filter is selected, the filtering compensation value is then calculated and the filtering is performed.
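The following Python sketch illustrates one possible mapping from a block's estimated reliability to the filtering parameters, assuming, as stated elsewhere in this embodiment, that a higher reliability corresponds to weaker filtering; the linear scaling, the base values of α, β, and C0, and the normalization by rel_max are all hypothetical choices for illustration.

def filtering_parameters(reliability, rel_max, base_alpha=20, base_beta=8, base_c0=4):
    """Hypothetical mapping from a block's reliability to the filtering
    parameters alpha, beta, and C0: the higher the (normalized) reliability,
    the weaker the filtering applied to that block's boundaries."""
    r = min(max(reliability / rel_max, 0.0), 1.0) if rel_max > 0 else 0.0
    scale = 1.0 - r                       # high reliability -> small thresholds/clip
    alpha = max(1, round(base_alpha * scale))
    beta = max(1, round(base_beta * scale))
    c0 = round(base_c0 * scale)           # c0 == 0 effectively disables the offset
    return alpha, beta, c0

# The least reliable block receives the full default parameters,
# while the most reliable block is left (almost) untouched.
print(filtering_parameters(0, rel_max=16))    # (20, 8, 4)
print(filtering_parameters(16, rel_max=16))   # (1, 1, 0)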

In the above-described exemplary system, the methods are described on the basis of a flowchart as a series of steps or blocks, but the present invention is not limited to the order of the steps, and some steps may occur in a different order or simultaneously. In addition, the above-described embodiments include examples of various aspects; for example, combinations of the embodiments are also to be understood as embodiments of the present invention.

210: key frame encoding unit    221: conversion unit
222: quantization unit    223: LDPC encoding unit
224: buffer unit    231: key frame decoding unit
240: auxiliary information generator    242: channel noise modeler
260: deblocking filter unit

Claims (25)

And a deblocking filter unit to determine a reliability of the reconstructed block reconstructed from the decoded image block and to determine the filtering strength of the reconstructed block based on the reliability.
The method of claim 1,
And the deblocking filter unit determines the filtering strength to be lower as the reliability is higher.
The method of claim 1,
The deblocking filter unit downscales the filtering parameter of the reconstruction block as the reliability is higher.
The method of claim 1,
The deblocking filter unit sets a filtering parameter for each of the reconstructed blocks based on the reliability.
The method of claim 1,
The deblocking filter unit determines the reliability of the reconstructed block to be higher as the difference between the reconstructed block and the original image block is smaller.
The method of claim 1,
The deblocking filter unit estimates an error between two adjacent blocks, and sets the reliability of each block according to the estimated error ratio.
The method of claim 1,
A key frame restoration unit for restoring the input key frame;
A side information (SI) generation unit generating auxiliary information by using the key frame reconstructed by the key frame reconstruction unit;
Further comprising a channel noise modeler for recognizing an SI error present in the SI,
And the deblocking filter sets the filtering strength higher as the SI error increases.
8. The method of claim 7,
When two adjacent blocks sharing a boundary are a P block and a Q block,
the deblocking filter unit sets the SI error of the P block (Figure pat00018) and the reliability (Figure pat00019) as follows:
Figure pat00020

Figure pat00021

Figure pat00022: a P block of the forward motion compensated frame
Figure pat00023: a P block of the backward motion compensated frame
MSE: mean squared error
The method of claim 1,
A prediction unit generating a prediction block for the current block by using intra prediction or inter prediction;
A transform / quantization unit generating a residual block by using the current block and the prediction block, and converting and quantizing the residual block;
And an inverse transform / inverse quantization unit for generating the inverse transform / inverse quantized residual block by inverse quantization and inverse transformation of the transformed and quantized residual block,
And the deblocking filter sets the number of non-zero coefficients of the inverse transformed / dequantized residual block as the reliability.
Determining a reliability of the reconstructed block reconstructed from the decoded image block;
And determining the filtering strength of the reconstructed block based on the reliability.
11. The method of claim 10,
And the higher the reliability, the lower the filtering strength.
11. The method of claim 10,
And upscaling the filtering parameters of the reconstruction block as the reliability is higher.
11. The method of claim 10,
And setting a filtering parameter for each of the reconstruction blocks based on the reliability.
11. The method of claim 10,
The determining of the reliability may include determining the reliability of the restored block to be higher as the difference between the restored block and the original image block is smaller.
11. The method of claim 10,
Estimating an error between two adjacent blocks,
The determining of the reliability comprises: setting reliability for each block according to the estimated ratio of errors.
11. The method of claim 10,
Estimating an error between two adjacent blocks,
The determining of the reliability comprises: setting reliability for each block according to the estimated ratio of errors.
17. The method of claim 16,
When two adjacent blocks sharing a boundary are a P block and a Q block,
the SI error of the P block (Figure pat00024) and the reliability (Figure pat00025) are set as follows:
Figure pat00026

Figure pat00027

Figure pat00028: a P block of the forward motion compensated frame
Figure pat00029: a P block of the backward motion compensated frame
MSE: mean squared error
11. The method of claim 10,
Generating a prediction block for the current block using intra prediction or inter prediction;
Generating a residual block using the current block and the prediction block, and transforming and quantizing the residual block;
Generating the inverse transform / inverse quantized residual block by inverse quantizing and inverse transforming the transformed and quantized residual block;
And setting the number of non-zero coefficients of the inverse transformed / dequantized residual block as the reliability.
And a deblocking filter unit to determine a reliability of the reconstructed block reconstructed from the decoded image block and to determine the filtering strength of the reconstructed block based on the reliability.
20. The method of claim 19,
And the deblocking filter unit determines the filtering strength to be lower as the reliability is higher.
20. The method of claim 19,
The deblocking filter unit is configured to upscale the filtering parameters of the reconstruction block as the reliability is higher.
20. The method of claim 19,
The deblocking filter unit sets a filtering parameter for each of the reconstructed blocks based on the reliability.
20. The method of claim 19,
And the deblocking filter unit determines the reliability of the reconstructed block to be higher as the difference between the reconstructed block and the original image block is smaller.
20. The method of claim 19,
The deblocking filter unit estimates an error between two adjacent blocks, and sets a reliability of each block according to the estimated error ratio.
20. The method of claim 19,
A prediction unit generating a prediction block for the current block by using intra prediction or inter prediction;
A transform / quantization unit generating a residual block by using the current block and the prediction block, and converting and quantizing the residual block;
And an inverse transform / inverse quantization unit for generating the inverse transform / inverse quantized residual block by inverse quantization and inverse transformation of the transformed and quantized residual block,
And the deblocking filter sets the number of non-zero coefficients of the inverse transformed / dequantized residual block as the reliability.
KR1020130024493A 2012-09-25 2013-03-07 Method and apparatus for encoding and decoding using deblocking filtering KR20140040605A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130024493A KR20140040605A (en) 2012-09-25 2013-03-07 Method and apparatus for encoding and decoding using deblocking filtering

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US61/705,601 2012-09-25
KR1020130024493A KR20140040605A (en) 2012-09-25 2013-03-07 Method and apparatus for encoding and decoding using deblocking filtering

Publications (1)

Publication Number Publication Date
KR20140040605A true KR20140040605A (en) 2014-04-03

Family

ID=50650771

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130024493A KR20140040605A (en) 2012-09-25 2013-03-07 Method and apparatus for encoding and decoding using deblocking filtering

Country Status (1)

Country Link
KR (1) KR20140040605A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190089426A (en) * 2018-01-22 2019-07-31 삼성전자주식회사 Method and apparatus for image encoding using artifact reduction filter, method and apparatus for image decoding using artifact reduction filter



Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E601 Decision to refuse application