KR102007050B1 - Filtering of blocks coded in the pulse code modulation mode


Info

Publication number
KR102007050B1
Authority
KR
South Korea
Prior art keywords
block
filter
samples
indicator
deblocking
Prior art date
Application number
KR1020147000323A
Other languages
Korean (ko)
Other versions
KR20140094496A (en)
Inventor
Matthias Narroschke
Thomas Wedi
Semih Esenlik
Anand Kotra
Original Assignee
Sun Patent Trust
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Patent Trust
Publication of KR20140094496A
Application granted granted Critical
Publication of KR102007050B1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H04N 19/46 Embedding additional information in the video signal during the compression process
    • H04N 19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to deblocking filtering applicable to smoothing block boundaries in image or video encoding and decoding. In particular, the present invention relates to the filtering of pulse code modulation (PCM) coded blocks of samples. A separate indicator for enabling or disabling deblocking filtering of the PCM coded block and a separate indicator for enabling or disabling a second kind of filtering, such as adaptive loop filtering or adaptive sample offset, are inserted into the encoded bitstream, so that the deblocking filtering can be switched on or off individually from the other kinds of filtering.


Description

FILTERING OF BLOCKS CODED IN THE PULSE CODE MODULATION MODE

The present invention relates to filtering of an image. In particular, the present invention relates to deblocking filtering and its application to PCM-encoded samples.

Currently, the majority of standardized video coding algorithms are based on hybrid video coding. In general, hybrid video coding methods combine several different lossless and lossy compression schemes in order to achieve the desired compression gain. Hybrid video coding is the basis for ITU-T standards (H.26x standards such as H.261 and H.263) as well as ISO/IEC standards (MPEG-X standards such as MPEG-1, MPEG-2, and MPEG-4). The most recent and advanced video coding standard is currently H.264/MPEG-4 AVC (advanced video coding), which is the result of standardization efforts by the Joint Video Team (JVT), a joint team of the ITU-T and the ISO/IEC MPEG groups. This codec is being further developed under the name High-Efficiency Video Coding (HEVC) by the Joint Collaborative Team on Video Coding (JCT-VC), with the aim of improving the efficiency of high-resolution video coding.

The video signal input to the encoder is a sequence of images called frames, each frame being a two-dimensional matrix of pixels. All the above-mentioned standards based on hybrid video coding involve subdividing each individual video frame into smaller blocks consisting of a plurality of pixels. The size of the blocks may vary, for instance, in accordance with the content of the image, and the way of encoding may typically be varied on a per block basis. The largest possible size for such a block, for instance in HEVC, is 64 x 64 pixels; it is then called the largest coding unit (LCU). In H.264/MPEG-4 AVC, a macroblock (usually denoting a block of 16 x 16 pixels) is the basic image element on which the encoding is performed, with the possibility of further dividing it into smaller subblocks to which some of the encoding/decoding steps are applied.

In general, the encoding steps of hybrid video coding include a spatial and/or a temporal prediction. Accordingly, each block to be encoded is first predicted using either the blocks in its spatial neighborhood or blocks from its temporal neighborhood, i.e. from previously encoded video frames. A block of differences between the block to be encoded and its prediction, also called a block of prediction residuals, is then calculated. Another encoding step is the transformation of the block of residuals from the spatial (pixel) domain into the frequency domain. The transformation aims at reducing the correlation of the input block. A further encoding step is the quantization of the transform coefficients. In this step, the actual lossy (irreversible) compression takes place. Usually, the compressed transform coefficient values are further compacted by means of entropy coding (lossless compression). In addition, side information necessary for the reconstruction of the encoded video signal is encoded and provided together with the encoded video signal. This is, for example, information about the spatial and/or temporal prediction, the amount of quantization, and the like.

FIG. 1 shows an example of a typical H.264/MPEG-4 AVC and/or HEVC video encoder 100. A subtractor 105 first determines the difference e between the current block to be encoded of the input video image (input signal s) and a corresponding prediction block ŝ, which is used as a prediction of the current block to be encoded. The prediction signal may be obtained by a temporal or a spatial prediction 180. The type of prediction may be varied on a per frame basis or on a per block basis. Blocks and/or frames predicted using temporal prediction are called "inter"-encoded, and blocks and/or frames predicted using spatial prediction are called "intra"-encoded. A prediction signal using temporal prediction is derived from previously encoded images stored in a memory. A prediction signal using spatial prediction is derived from the values of the boundary pixels of neighboring blocks, which have been previously encoded, decoded, and stored in the memory. The difference e between the input signal and the prediction signal, denoted prediction error or residual, is transformed (110), resulting in coefficients which are quantized (120). An entropy encoder 190 is then applied to the quantized coefficients in order to further reduce the amount of data to be stored and/or transmitted in a lossless way. This is mainly achieved by applying a code with code words of variable length, wherein the length of a code word is chosen based on the probability of its occurrence.

A decoding unit is incorporated into the video encoder 100 for obtaining a decoded (reconstructed) video signal s'. In compliance with the encoding steps, the decoding steps include inverse quantization and inverse transformation (130). The prediction error signal e' thus obtained differs from the original prediction error signal due to the quantization error, also referred to as quantization noise. The reconstructed image signal s' is then obtained by adding (140) the decoded prediction error signal e' to the prediction signal ŝ. In order to maintain the compatibility between the encoder side and the decoder side, the prediction signal ŝ is obtained based on the encoded and subsequently decoded video signal, which is known at both the encoder and the decoder side.

Due to the quantization, quantization noise is superimposed onto the reconstructed video signal. Due to the block-wise coding, the superimposed noise often has blocking characteristics, which results, in particular for strong quantization, in visible block boundaries in the decoded image. Such blocking artifacts have a negative effect on the human visual perception. In order to reduce these artifacts, a deblocking filter 150 is applied to every reconstructed image block. The deblocking filter is applied to the reconstructed signal s'. For instance, the deblocking filter of H.264/MPEG-4 AVC has the capability of local adaptation. In the case of a high degree of blocking noise, a strong (narrow-band) low pass filter is applied, whereas for a low degree of blocking noise, a weaker (broad-band) low pass filter is applied. The strength of the low pass filter is determined by the prediction signal ŝ and by the quantized prediction error signal e'. The deblocking filter generally smooths the block edges, leading to an improved subjective quality of the decoded images. Moreover, since the filtered part of an image is used for the motion compensated prediction of further images, the filtering also reduces the prediction errors and thus improves the coding efficiency.

Following the deblocking filter, a sample adaptive offset 155 and/or an adaptive loop filter 160 may be applied to the image including the already deblocked signal s''. While the deblocking filter improves the subjective quality, the sample adaptive offset (SAO) and the adaptive loop filter (ALF) aim at improving the pixel-wise fidelity ("objective" quality). In particular, SAO adds an offset to a pixel in accordance with its immediate neighborhood. The adaptive loop filter (ALF) is used to compensate for the image distortion caused by the compression. Typically, the adaptive loop filter is a Wiener filter with filter coefficients determined such that the mean squared error (MSE) between the reconstructed image s' and the source image s is minimized. The coefficients of the ALF may be calculated and transmitted on a frame basis. The ALF can be applied to an entire frame (image of the video sequence) or to local areas (blocks). Additional side information indicating which areas are to be filtered may be transmitted (block-based, frame-based, or quadtree-based).

For decoding, inter-coded blocks require that the previously encoded and subsequently decoded portions of the image are stored in the reference frame buffer 170. An inter-coded block is predicted (180) using motion compensated prediction. First, a best-matching block is found for the current block within the previously encoded and decoded video frames by a motion estimator. The best-matching block then becomes the prediction signal, and the relative displacement (motion) between the current block and its best match is signaled as motion data in the form of a three-dimensional motion vector within the side information provided together with the encoded video data. The three dimensions consist of two spatial dimensions and one temporal dimension. In order to optimize the prediction accuracy, motion vectors may be determined with a spatial sub-pixel resolution, e.g. half-pixel or quarter-pixel resolution. A motion vector with spatial sub-pixel resolution may point to a spatial position within an already decoded frame where no actual pixel value is available, i.e. a sub-pixel position. Hence, spatial interpolation of such pixel values is needed in order to perform motion compensated prediction. This may be achieved by an interpolation filter (integrated within the prediction block 180 in FIG. 1).

For both the intra and inter encoding modes, the difference e between the current input signal and the prediction signal is transformed (110) and quantized (120), resulting in quantized coefficients. Generally, an orthogonal transformation such as a two-dimensional discrete cosine transformation (DCT) or an integer version thereof is employed, since it reduces the correlation of natural video images efficiently. After the transformation, lower frequency components are usually more important for the image quality than high frequency components, so that more bits can be spent for coding the low frequency components than for the high frequency components. In the entropy coder, the two-dimensional matrix of quantized coefficients is converted into a one-dimensional array. Typically, this conversion is performed by a so-called zig-zag scanning, which starts with the DC coefficient in the upper left corner of the two-dimensional array and scans the two-dimensional array in a predetermined sequence ending with an AC coefficient in the lower right corner. As the energy is typically concentrated in the upper left part of the two-dimensional matrix of coefficients, corresponding to the lower frequencies, the zig-zag scanning usually results in an array in which the last values are zero. This allows for efficient encoding using run-length codes as a part of, or before, the actual entropy coding.
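For illustration, the conversion of the coefficient matrix into a one-dimensional array can be sketched in C as follows (a minimal sketch; the traversal directions and the row-major layout are illustrative assumptions, not the scan order prescribed by any particular standard):

/* Flatten an n x n matrix of quantized coefficients into a one-dimensional
 * array in zig-zag order: start with the DC coefficient at (0,0), traverse
 * the anti-diagonals with alternating direction, and end at (n-1,n-1). */
void zigzag_scan(const int *coeff, int *out, int n)
{
    int idx = 0;
    for (int d = 0; d <= 2 * (n - 1); d++) {   /* anti-diagonal index */
        if (d % 2 == 0) {                      /* traverse up-right   */
            for (int r = (d < n ? d : n - 1); r >= 0 && d - r < n; r--)
                out[idx++] = coeff[r * n + (d - r)];
        } else {                               /* traverse down-left  */
            for (int c = (d < n ? d : n - 1); c >= 0 && d - c < n; c--)
                out[idx++] = coeff[(d - c) * n + c];
        }
    }
}

Since the non-zero energy is concentrated near position (0,0), the tail of out[] typically consists of zeros, which is what makes the subsequent run-length coding effective.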

Similarly to HEVC, H.264/MPEG-4 AVC includes two functional layers, a video coding layer (VCL) and a network abstraction layer (NAL). The VCL provides the encoding functionality as briefly described above. The NAL encapsulates the information elements into standardized units called NAL units according to their further application, such as transmission over a channel or storage in a storage medium. The information elements are, for instance, the encoded prediction error signal or other information necessary for the decoding of the video signal, such as the type of prediction, the quantization parameter, motion vectors, and the like. There are VCL NAL units containing the compressed video data and the related information, as well as non-VCL units encapsulating additional data such as parameter sets relating to an entire video sequence, or Supplemental Enhancement Information (SEI) providing additional information that can be used to improve the decoding performance.

FIG. 2 shows an example of a decoder 200 compliant with the H.264/MPEG-4 AVC or HEVC video coding standard. The encoded video signal (input signal to the decoder) is first passed to the entropy decoder 290, which decodes the information elements necessary for decoding, such as the quantized coefficients, motion data, prediction modes, and the like. The quantized coefficients are inversely scanned in order to obtain a two-dimensional matrix, which is then fed to the inverse quantization and inverse transformation 230. After the inverse quantization and inverse transformation 230, a decoded (quantized) prediction error signal e' is obtained, which corresponds to the difference obtained by subtracting the prediction signal from the signal input to the encoder in the case that no quantization noise was introduced and no error occurred.

The prediction signal is obtained from either a temporal or a spatial prediction 280. The decoded information elements typically further include the information required for the prediction, such as the prediction type in the case of intra-prediction and the motion data in the case of motion compensated prediction. The quantized prediction error signal in the spatial domain is then added by the adder 240 to the prediction signal obtained from the motion compensated prediction or intra-frame prediction 280. The reconstructed image s' may be passed through the deblocking filter 250, the adaptive sample offset processing 255, and the adaptive loop filter 260, and the resulting decoded signal is stored in the memory 270 to be applied for the temporal or spatial prediction of the following blocks/images.

When compressing and decompressing an image, the blocking artifacts are typically the most annoying artifacts for the user. Deblocking filtering helps to improve the perceptual experience of the user by smoothing the edges between the blocks in the reconstructed image. One of the difficulties in deblocking filtering is to correctly distinguish between an edge caused by blocking due to the application of a quantizer and an edge which is part of the coded signal itself. The application of the deblocking filter is only desirable if the edge on the block boundary is a compression artifact. In other cases, by applying the deblocking filter, the reconstructed signal may be distorted or destroyed. Another difficulty is the selection of an appropriate filter for the deblocking filtering. Typically, the decision is made between several low pass filters with different frequency responses, resulting in strong or weak low pass filtering. In order to decide whether deblocking filtering is to be applied and to select an appropriate filter, image data in the vicinity of the boundary of the two blocks is considered.

For instance, H.264/MPEG-4 AVC evaluates the absolute values of the first derivative in each of the two neighboring blocks the boundary of which is to be deblocked. In addition, the absolute value of the first derivative across the edge between the two blocks is evaluated, as described, for example, in the H.264/MPEG-4 AVC standard, section 8.7.2.2. HEVC employs a similar mechanism; however, a second derivative is also used.

A deblocking filter decides, for each block boundary, whether it is to be filtered and which filter or filter type is to be used. If it is decided that a filter is to be applied, a low pass filter is applied to smooth the block boundary. The aim of the decision on whether to filter or not is to filter only those samples for which the large signal change at the block boundary results from the quantization applied in the block-wise processing, as described in the background art section above. The result of the deblocking filtering is a smoothed signal at the block boundary. The smoothed signal is less annoying to the viewer than a blocking artifact. Those samples for which the large signal change at the block boundary belongs to the original signal to be coded should not be filtered, in order to keep the high frequencies and thus maintain the visual sharpness. In the case of a wrong decision, the image is either unnecessarily smoothed or remains blocky. The deblocking filtering is performed across the vertical edges of a block (horizontal filtering) and across the horizontal edges of a block (vertical filtering).

FIG. 4A shows the decision for a vertical boundary (whether or not to filter it with a horizontal deblocking filter), and FIG. 4B shows the decision for a horizontal boundary (whether or not to filter it with a vertical deblocking filter). In particular, FIG. 4A shows the current block 440 to be decoded and the already decoded neighboring blocks 410, 420, and 430. For the pixels 460 in a line, the decision is performed. Similarly, FIG. 4B shows the same current block 440, and the decision is performed for the pixels 470 of a column.

The decision on whether to apply the deblocking filter may be performed as follows. Consider a line of six pixels 460, of which the first three pixels p2, p1, p0 belong to the left neighboring block A 430 and the following three pixels q0, q1, and q2 belong to the current block B 440, as shown in FIG. 4. Line 1410 in FIG. 14 illustrates the boundary between blocks A and B. Pixels p0 and q0 are the pixels of the left neighbor A and of the current block B, respectively, located directly adjacent to the boundary. Pixels p0 and q0 are filtered by the deblocking filter, for example, when the following conditions are satisfied:

|p0 - q0| < α(QP),

|p1 - p0| < β(QP), and

|q1 - q0| < β(QP)

where, in general, β(QP) < α(QP). These conditions serve to detect whether the difference between p0 and q0 stems from a blocking artifact. They correspond to an evaluation of the first derivative within each of the blocks A and B and of the first derivative between them. In addition to the above three conditions, pixel p1 is filtered if the following condition is also satisfied:

|p2 - p0| < β(QP)

For example, pixel q1 is filtered if the following condition is fulfilled in addition to the first three conditions:

|q2 - q0| < β(QP).

These conditions correspond to the first derivative within the first block and the first derivative within the second block, respectively. In the above conditions, QP denotes the quantization parameter indicating the amount of quantization applied, and α and β are scalar constants. In particular, β(QP) is a function of a quantization parameter QP, which is derived based on the quantization parameters QP_A and QP_B applied to the respective first and second blocks A and B as

QP = (QP_A + QP_B + 1) >> 1,

where ">> 1" denotes a right shift by one bit.

The decision may be performed only for a selected line or selected lines of a block, while the filtering of the pixels is then performed accordingly for all lines 460. An example 1420 of lines 1430 involved in the decision in accordance with HEVC is shown in FIG. 14. Based on the lines 1430, it is decided whether or not the entire block is to be filtered.

Another example of deblocking filtering in HEVC can be found in section 8.6.1 of the JCTVC-E603 document by the JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, freely available at http://phenix.int-evry.fr/jct/index.php/.

Two lines 1430 are used to decide whether and how the deblocking filtering is to be applied. The example 1420 assumes the evaluation of the third line (with index 2) and the sixth line (with index 5) for the purpose of horizontal deblocking filtering. In particular, the second derivative within each of the two blocks is evaluated, resulting in the measures d_2 and d_5 obtained as follows:

d_2 = |p2_2 - 2·p1_2 + p0_2| + |q2_2 - 2·q1_2 + q0_2|

d_5 = |p2_5 - 2·p1_5 + p0_5| + |q2_5 - 2·q1_5 + q0_5|

The pixels p belong to block A and the pixels q belong to block B. The first number after p or q denotes the column index, and the following subscript denotes the row number within the block. The deblocking for all eight lines illustrated in the example 1420 is enabled when the following condition is fulfilled:

d = d_2 + d_5 < β(QP)

If the above condition is not fulfilled, no deblocking is applied. If the deblocking is enabled, the filter to be used for the deblocking is determined. This determination is based on the evaluation of the first derivative between blocks A and B. In particular, for each line i, where i is an integer between 0 and 7, it is decided whether a strong or a weak low pass filter is to be applied. A strong filter is chosen if the following condition is fulfilled:

|p3_i - p0_i| + |q3_i - q0_i| < (β >> 3), d < (β >> 2), and |p0_i - q0_i| < (5·t_c + 1) >> 1
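The block-level enable decision and the per-line strong/weak selection may be combined as in the following sketch (based on the JCTVC-E603 description; beta and tc are assumed to be derived from the quantization parameter via lookup tables):

#include <stdlib.h>

/* p[i][j]: sample j positions to the left of the boundary in line i;
 * q[i][j]: sample j positions to the right of the boundary in line i. */
int hevc_deblock_decision(int p[8][4], int q[8][4], int beta, int tc,
                          int use_strong[8])
{
    /* Second derivatives, measured on lines 2 and 5 only. */
    int d2 = abs(p[2][2] - 2 * p[2][1] + p[2][0]) +
             abs(q[2][2] - 2 * q[2][1] + q[2][0]);
    int d5 = abs(p[5][2] - 2 * p[5][1] + p[5][0]) +
             abs(q[5][2] - 2 * q[5][1] + q[5][0]);
    int d = d2 + d5;

    if (d >= beta)
        return 0;                 /* deblocking disabled for all 8 lines */

    for (int i = 0; i < 8; i++) { /* strong/weak selection per line */
        use_strong[i] =
            d < (beta >> 2) &&
            abs(p[i][3] - p[i][0]) + abs(q[i][0] - q[i][3]) < (beta >> 3) &&
            abs(p[i][0] - q[i][0]) < ((5 * tc + 1) >> 1);
    }
    return 1;                     /* deblocking enabled */
}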

According to the HEVC model, the "strong filter" filters the samples p2_i, p1_i, p0_i, q0_i, q1_i, and q2_i using the samples p3_i, p2_i, p1_i, p0_i, q0_i, q1_i, q2_i, and q3_i, whereas the "weak filter" filters the samples p1_i, p0_i, q0_i, and q1_i using the samples p2_i, p1_i, p0_i, q0_i, q1_i, and q2_i. In the conditions above, β and t_c are both functions of the quantization parameter QP_Frame, which may be set for a slice of the image or the like. The values of β and t_c are typically derived based on QP_Frame using lookup tables.

Note that strong filtering is only beneficial for very smooth signals. If not, weaker lowpass filtering is beneficial.

The pixels involved in strong low pass filtering in accordance with conventional hybrid coding are shown in FIG. 15A. In particular, FIG. 15A shows the samples used for the filtering. These samples correspond to the four pixels adjacent to the boundary between blocks A and B on its left and on its right, respectively. These samples are used for the filtering, meaning that their values enter the filtering process. FIG. 15A also shows the samples modified by the filter. These are the three pixels closest to the boundary between blocks A and B on its right and on its left, respectively. Their values are modified, i.e. smoothed, by the filter. In particular, the values p0'_i, p1'_i, p2'_i and q0'_i, q1'_i, q2'_i of the modified samples of the line with index i are listed below:

p0'_i = Clip((p2_i + 2·p1_i + 2·p0_i + 2·q0_i + q1_i + 4) >> 3)

p1'_i = Clip((p2_i + p1_i + p0_i + q0_i + 2) >> 2)

p2'_i = Clip((2·p3_i + 3·p2_i + p1_i + p0_i + q0_i + 4) >> 3)

q0'_i = Clip((q2_i + 2·q1_i + 2·q0_i + 2·p0_i + p1_i + 4) >> 3)

q1'_i = Clip((q2_i + q1_i + q0_i + p0_i + 2) >> 2)

q2'_i = Clip((2·q3_i + 3·q2_i + q1_i + q0_i + p0_i + 4) >> 3)

The function Clip(x) is defined as follows:

Clip(x) = min(max(0, x), max_allowed_value)

Here, max_allowed_value is the maximum value that x can have. For PCM coding with k-bit samples, the maximum value is max_allowed_value = 2^k - 1. For example, for PCM coding with 8-bit samples, the maximum value is max_allowed_value = 255, and for PCM coding with 10-bit samples, the maximum value is max_allowed_value = 1023.

The above equations describe the process of the applied strong filtering. As can be seen from the equations, the pixels p3_i and q3_i of line i are used in the equations, i.e. in the filtering, but are not modified, i.e. not filtered themselves.
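A sketch of the strong filtering in C follows (directly transcribing the equations above; for k-bit PCM samples, max_val = 2^k - 1, i.e. 255 for 8-bit and 1023 for 10-bit samples):

/* Clip(x): restrict x to the valid sample range [0, max_val]. */
int clip_sample(int x, int max_val)
{
    if (x < 0) return 0;
    if (x > max_val) return max_val;
    return x;
}

/* Strong deblocking of one line: p[0..3] and q[0..3] are the four samples
 * on each side of the boundary, with p[0] and q[0] adjacent to it. Three
 * samples per side are modified; p[3] and q[3] are used but unchanged. */
void hevc_strong_filter(int p[4], int q[4], int max_val)
{
    int p0 = p[0], p1 = p[1], p2 = p[2], p3 = p[3];
    int q0 = q[0], q1 = q[1], q2 = q[2], q3 = q[3];

    p[0] = clip_sample((p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3, max_val);
    p[1] = clip_sample((p2 + p1 + p0 + q0 + 2) >> 2, max_val);
    p[2] = clip_sample((2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3, max_val);
    q[0] = clip_sample((q2 + 2 * q1 + 2 * q0 + 2 * p0 + p1 + 4) >> 3, max_val);
    q[1] = clip_sample((q2 + q1 + q0 + p0 + 2) >> 2, max_val);
    q[2] = clip_sample((2 * q3 + 3 * q2 + q1 + q0 + p0 + 4) >> 3, max_val);
}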

FIG. 15B illustrates the application of a weak deblocking filter. In particular, the samples used for the filtering are shown on the left, and the samples modified by the filtering are shown on the right. For the operation of the weak filter, the two pixels adjacent to the boundary between blocks A and B are filtered on each side, while the three pixels adjacent to the boundary in each of blocks A and B are used. Two decisions are made for the weak filtering. The first decision is whether the weak filter is to be applied at all for a particular line. This decision is based on the value Δ calculated as

Δ = (9·(q0_i - p0_i) - 3·(q1_i - p1_i) + 8) >> 4

Based on the calculated Δ, the filtering is only applied if |Δ| < 10·t_c. Otherwise, no filtering is applied to the two pixels p0_i and q0_i lying at the boundary of the respective blocks A and B. If the filtering is applied, it is performed as follows:

p0'_i = Clip(p0_i + Δ1)

q0'_i = Clip(q0_i - Δ1)

where

Δ1 = Clip3(-t_c, t_c, Δ)

The function Clip(x) is defined as above. The function Clip3(a, b, x) is defined as

Clip3(a, b, x) = min(max(a, x), b),

that is, x clipped to the interval [a, b].

If the filtering is applied and it has been determined that p0_i and q0_i are to be filtered, it is further determined whether the pixels p1_i and q1_i are also to be filtered. Pixel p1_i is only filtered if a measure d_p of the signal variation on the side of block A is below a threshold derived from β, and pixel q1_i is only filtered if the corresponding measure d_q on the side of block B is below that threshold. The filtering of these pixels is performed as follows:

p1'_i = Clip(p1_i + Δ2p)

q1'_i = Clip(q1_i + Δ2q)

where

Δ2p = Clip3(-(t_c >> 1), t_c >> 1, (((p2_i + p0_i + 1) >> 1) - p1_i + Δ1) >> 1)

Δ2q = Clip3(-(t_c >> 1), t_c >> 1, (((q2_i + q0_i + 1) >> 1) - q1_i - Δ1) >> 1)
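The weak filtering can be sketched in the same style (clip_sample() is the helper from the strong-filter sketch; filter_p1 and filter_q1 carry the per-side decisions discussed above, and delta, d1, d2p, d2q correspond to Δ, Δ1, Δ2p, Δ2q):

#include <stdlib.h>

int clip_sample(int x, int max_val);    /* defined in the sketch above */

int clip3(int a, int b, int x)          /* Clip3(a, b, x) as defined above */
{
    if (x < a) return a;
    if (x > b) return b;
    return x;
}

/* Weak deblocking of one line: p[0..2] and q[0..2] are the three samples
 * on each side of the boundary, with p[0] and q[0] adjacent to it. */
void hevc_weak_filter(int p[3], int q[3], int tc, int max_val,
                      int filter_p1, int filter_q1)
{
    int delta = (9 * (q[0] - p[0]) - 3 * (q[1] - p[1]) + 8) >> 4;
    if (abs(delta) >= 10 * tc)
        return;                                /* no filtering applied */

    int d1 = clip3(-tc, tc, delta);
    int p0 = p[0], q0 = q[0];
    p[0] = clip_sample(p0 + d1, max_val);
    q[0] = clip_sample(q0 - d1, max_val);

    if (filter_p1) {                           /* modify p1 */
        int d2p = clip3(-(tc >> 1), tc >> 1,
                        (((p[2] + p0 + 1) >> 1) - p[1] + d1) >> 1);
        p[1] = clip_sample(p[1] + d2p, max_val);
    }
    if (filter_q1) {                           /* modify q1 */
        int d2q = clip3(-(tc >> 1), tc >> 1,
                        (((q[2] + q0 + 1) >> 1) - q[1] - d1) >> 1);
        q[1] = clip_sample(q[1] + d2q, max_val);
    }
}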

In addition to predictive coding, blocks may also be coded without applying any prediction. The corresponding coding mode is called the "pulse code modulation (PCM) mode". Samples coded in the PCM mode may, but need not, contain quantization noise. According to the JCTVC-E192 contribution "Suggested Improved PCM Coding in HEVC", a switch is employed at the HEVC encoder and decoder to enable or disable the filtering of the PCM coded samples. With this switch, all filters in the loop are switched on or off together. If the PCM coded samples do not contain quantization noise, the switching mechanism is beneficial, since it allows switching the filtering off; in such a case, filtering could worsen the quality of the noise-free image. On the other hand, if the PCM coded samples contain quantization noise, it is beneficial to enable the filtering.

If a PCM coded region without noise is adjacent to a region which is not PCM coded but predicted by a spatial or temporal prediction, blocking artifacts may degrade the perceptual quality of the image. On the other hand, the filtering performed in such a case may degrade the quality of the noise-free PCM coded region.

Given these problems of the current technology, it would be advantageous to provide an efficient deblocking filtering method applicable to PCM samples, in particular when the PCM samples are surrounded by samples encoded by predictive coding.

According to the method of the present invention, a deblocking filter or another filter can be separately enabled or disabled for a PCM coded block.

According to an aspect of the present invention, there is provided a method for encoding a block of samples of an image of a video signal into a bitstream by pulse code modulation (PCM), the method comprising: determining whether a deblocking filter is to be applied to the block of samples; determining whether a second filter, different from the deblocking filter, is to be applied to the block of samples; including in the bitstream a deblocking filter indicator indicating the result of the determination of whether the deblocking filter is to be applied; and including in the bitstream a second filter indicator, separate from the deblocking filter indicator, indicating the result of the determination of whether the second filter is to be applied.

According to another aspect of the invention, there is provided a method for decoding, from a bitstream, a block of samples of an image of a video signal encoded by pulse code modulation (PCM), the method comprising: extracting from the bitstream a deblocking filter indicator indicating whether a deblocking filter is to be applied to the block of samples; extracting from the bitstream a second filter indicator, separate from the deblocking filter indicator, indicating whether a second filter is to be applied to the block of samples; applying or not applying the deblocking filter to the block of samples in accordance with the extracted deblocking filter indicator; and applying or not applying the second filter to the block of samples in accordance with the extracted second filter indicator.

According to another aspect of the invention, there is provided an apparatus for encoding a block of samples of an image of a video signal into a bitstream by pulse code modulation (PCM), the apparatus comprising: a deblocking judging unit for judging whether a deblocking filter is to be applied to the block of samples; a second judging unit for judging whether a second filter, different from the deblocking filter, is to be applied to the block of samples; and an inserting unit for including in the bitstream a deblocking filter indicator indicating the result of the judgment of whether the deblocking filter is to be applied, and a second filter indicator, separate from the deblocking filter indicator, indicating the result of the judgment of whether the second filter is to be applied.

According to still another aspect of the invention, there is provided an apparatus for decoding, from a bitstream, a block of samples of an image of a video signal encoded by pulse code modulation (PCM), the apparatus comprising: an extracting unit for extracting from the bitstream a deblocking filter indicator indicating whether a deblocking filter is to be applied to the block of samples, and for extracting from the bitstream a second filter indicator, separate from the deblocking filter indicator, indicating whether a second filter is to be applied to the block of samples; a deblocking filtering unit for applying or not applying the deblocking filter to the block of samples in accordance with the extracted deblocking filter indicator; and a second filtering unit for applying or not applying the second filter to the block of samples in accordance with the extracted second filter indicator.

The accompanying drawings are incorporated into and form a part of the specification to illustrate several embodiments of the present invention. Together with the detailed description, these drawings serve to explain the principles of the invention. The drawings are only for the purpose of illustrating preferred examples of how the invention can be made and used, and are not to be construed as limiting the invention to the illustrated and described embodiments. Further features and advantages of the present invention will become apparent from the following more detailed description of the various embodiments, as illustrated in the accompanying drawings, in which like reference numerals denote like elements.
FIG. 1 is a block diagram of an example video encoder.
FIG. 2 is a block diagram of an example video decoder.
FIG. 3 is a block diagram of an example video encoder in which separate vertical and horizontal filtering is performed.
FIG. 4A is a schematic diagram illustrating the application of horizontal deblocking filtering.
FIG. 4B is a schematic diagram illustrating the application of vertical deblocking filtering.
FIG. 5 is a block diagram illustrating an encoder including a PCM encoding mode in which the bit depth is reduced.
FIG. 6 is a block diagram illustrating an encoder including a PCM encoding mode in which the bit depth is not reduced.
FIG. 7 is a block diagram illustrating an encoder including a PCM encoding mode with a switch for switching the filtering on and off.
FIG. 8 is a schematic diagram illustrating an example of enabling/disabling of the filtering for PCM coded and non-PCM coded blocks.
FIG. 9 is a block diagram illustrating an encoder for individually switching different filters on and off according to an embodiment of the present invention.
FIG. 10A is a schematic diagram showing an area that can be modified by the deblocking filter.
FIG. 10B is a schematic diagram illustrating switching on/off the application of an adaptive sample offset for an area that may or may not be modified by a deblocking filter.
FIG. 10C is a schematic diagram illustrating switching on/off the application of an adaptive loop filter for an area that may or may not be modified by a deblocking filter.
FIG. 11 is a schematic diagram illustrating an example of PCM coded and non-PCM coded adjacent blocks and examples of their characteristic quantization parameters.
FIG. 12 is a flowchart illustrating an example of an encoding method using deblocking filtering according to an embodiment of the present invention.
FIG. 13 is a flowchart illustrating an example of a decoding method using deblocking filtering according to an embodiment of the present invention.
FIG. 14 is a schematic diagram illustrating deblocking filtering in accordance with the prior art.
FIG. 15A is a schematic diagram showing the samples used by the deblocking filtering and the samples modified by the deblocking filtering.
FIG. 15B is a schematic diagram showing the samples used by the deblocking filtering and the samples modified by the deblocking filtering.
FIG. 16A is a flowchart illustrating an example of a decoding method for decoding and filtering a PCM coded sample.
FIG. 16B is a flowchart illustrating an example of an encoding method for PCM encoding and filtering a block of samples.
FIG. 17 shows an overall configuration of a content providing system for implementing content distribution services.
FIG. 18 shows an overall configuration of a digital broadcasting system.
FIG. 19 is a block diagram illustrating an example configuration of a television.
FIG. 20 is a block diagram illustrating an example configuration of an information reproducing/recording unit that reads information from and records information on a recording medium that is an optical disk.
FIG. 21 shows an example of a configuration of a recording medium that is an optical disk.
FIG. 22A shows an example of a mobile phone.
FIG. 22B is a block diagram illustrating an example configuration of a mobile phone.
FIG. 23 shows a structure of multiplexed data.
FIG. 24 schematically shows how each stream is multiplexed in multiplexed data.
FIG. 25 shows in more detail how a video stream is stored in a stream of PES packets.
FIG. 26 shows a structure of TS packets and source packets in multiplexed data.
FIG. 27 shows a data structure of a PMT.
FIG. 28 shows an internal structure of multiplexed data information.
FIG. 29 shows an internal structure of stream attribute information.
FIG. 30 shows steps for identifying video data.
FIG. 31 shows an example of a configuration of an integrated circuit for implementing the video encoding method and the video decoding method described in each of the embodiments.
FIG. 32 shows a configuration for switching between driving frequencies.
FIG. 33 shows steps for identifying video data and switching between driving frequencies.
FIG. 34 shows an example of a lookup table in which video data standards are associated with driving frequencies.
FIG. 35A is a diagram showing an example of a configuration for sharing a module of a signal processing unit.
FIG. 35B is a diagram showing another example of a configuration for sharing a module of a signal processing unit.

The present invention is based on the observation that switching all filters in the loop on or off together degrades the subjective picture quality in some cases. In particular, the application of a deblocking filter may be beneficial in scenarios other than those in which an adaptive loop filter and/or an adaptive sample offset is beneficial.

In particular, when a PCM coded block is adjacent to a block coded by another method, the deblocking filter may be beneficial. In such a case, the deblocking filter smooths the signal in the boundary region of the two blocks and thereby improves the subjective quality, even when noise is thus introduced into the PCM coded block. However, in the regions of the PCM coded block to which the deblocking filter is not applied but the adaptive loop filter and/or the adaptive sample offset is applied, the quality of the PCM coded block may be reduced, since adaptive loop filters and adaptive sample offsets may introduce additional noise and/or artifacts.

According to the present invention, the application of the deblocking filter to the PCM coded samples is controlled separately from controlling the application of other filters. Thus, the present invention further reduces unwanted quantization noise to further improve image quality.

FIG. 5 shows an encoder 500 that essentially corresponds to the encoders shown in FIGS. 1 and 3. In addition, a PCM coding mode is introduced, for instance, as described in the JCTVC-D0044 contribution "Pulse code modulation mode for HEVC". For example, the bit depth of the original video signal may be increased (510) to allow encoding operations of higher accuracy. If the video signal is encoded in the PCM encoding mode, the bit depth is reduced again (550) and the signal is output directly to the multiplexer 595 to be included in the bitstream. The switch 570 switches between the input from the PCM encoding mode and that from the prediction/transform coding mode. The PCM coded samples are thus passed through a deblocking filter ("deblocking" in FIG. 5) and a further loop filter ("ALF" in FIG. 5) and buffered (530) to be used as reference samples.

FIG. 6 shows another implementation of the PCM encoding mode according to JCTVC-E057 "Pulse Code Modulation Mode for HEVC". In particular, after the bit depth has been increased (610), the PCM samples are fed directly to the multiplexer 695 and included in the encoded bitstream. The switch 670 switches between PCM coding mode samples and prediction/transform coding mode samples. The PCM coded samples or the prediction/transform coded samples are deblocking filtered, adaptive loop filtered, and also adaptive sample offset filtered. The filtered samples are then stored in the buffer 630 and are likewise used as reference samples.

If the PCM sample does not contain quantization error, applying additional filtering such as deblocking filtering, adaptive loop filtering, or adaptive sample offset may introduce additional noise into the image signal. Therefore, the JCTVC-E192 contribution suggests switching-on or switching-off of filtering.

FIG. 7 shows a possible implementation of an encoder in accordance with HEVC supporting the PCM coding mode and switching the filtering on/off. In particular, the original video signal is input to a unit 710 for increasing the bit depth. If the encoding mode is the PCM encoding mode, the PCM samples are reduced to a predetermined bit depth in the bit depth reduction unit 750. At the same time, this predetermined bit depth is signaled in the bitstream. The parameter PCM_sample_bit_depth_xxx_minus8, where "xxx" stands for the luminance or the chrominance ("luma", "chroma") of the samples, controls the bit depth of the resulting PCM signal and is supplied to the multiplexer 795 to be inserted into the bitstream. The bit depth of the PCM coded samples is increased again (760) for the purpose of buffering the samples as reference samples for use in the predictive coding. Via the switch 770, the PCM samples are input to the deblocking filtering unit 740 and the second filtering unit 745 (in this case, the adaptive loop filter unit) and are finally stored in the buffer 730. According to the JCTVC-E192 contribution, a switch is provided for switching on or off the loop filtering, i.e. the deblocking filtering 740 and the other kinds of loop filtering 745. To enable the same operation at the decoder and the encoder, the switch value (on/off) is included in the bitstream and signaled to the decoder. The switching (controlled by an indicator such as a flag) is shown in FIG. 7. In particular, the parameter PCM_sample_loop_filter_disable_flag is signaled in the SPS NAL unit to enable or disable both the adaptive loop filtering 745 and the deblocking filtering 740.
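The corresponding signaling may be sketched as follows (purely illustrative; write_bits() stands for a hypothetical bitstream writer, and the field widths shown here are assumptions rather than actual syntax):

/* Hypothetical bitstream writer: append the num_bits least significant
 * bits of value to the bitstream. */
void write_bits(unsigned value, int num_bits);

/* Sketch of writing the PCM-related sequence parameter fields. */
void write_sps_pcm_fields(int pcm_bit_depth_luma, int pcm_bit_depth_chroma,
                          int pcm_loop_filter_disable)
{
    write_bits((unsigned)(pcm_bit_depth_luma - 8), 4);   /* PCM_sample_bit_depth_luma_minus8   */
    write_bits((unsigned)(pcm_bit_depth_chroma - 8), 4); /* PCM_sample_bit_depth_chroma_minus8 */
    write_bits((unsigned)pcm_loop_filter_disable, 1);    /* PCM_sample_loop_filter_disable_flag */
}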

The purpose of the in-loop filters, i.e. the deblocking filter and the adaptive loop filter, is to correct the distortion associated with the lossy compression. However, when original, uncompressed samples are encoded using the PCM encoding mode, the samples carry no distortion. Therefore, the in-loop filtering process should conceptually be disabled, as shown in FIG. 7. On the other hand, some encoder implementations may encode non-original (reconstructed) samples as PCM samples. In this case, the PCM samples may well contain quantization noise. It is then useful to perform the in-loop filtering in order to improve the quality of the resulting image. Therefore, JCTVC-E192 allows switching the in-loop filtering on or off. However, such a switch is not sufficient to prevent the quality degradation caused by applying or not applying the filtering.

FIG. 8 shows two blocks 801 and 802. Block 801 is encoded in the PCM encoding mode and contains original video samples, denoted I_PCM. Block 802 contains predicted samples encoded using intra or inter prediction and denoted non-I_PCM samples. The samples p_2, p_1, p_0, q_0, q_1, and q_2 are the pixels located at the boundary between the blocks 801 and 802 in one of the lines indicated by the 8 x 6 grid in FIG. 8. When the proposal of JCTVC-E192 is applied to this scenario, both the adaptive loop filtering and the deblocking filtering are disabled for block 801. This means that the pixels p_0, p_1, and p_2 are not filtered. At the same time, the adaptive loop filtering and the deblocking filtering are both enabled for block 802. In this case, the block boundary may be visible after decoding and may exhibit blocking artifacts, since it is not smoothed on the side of block 801. In the above example, deblocking filtering and adaptive loop filtering are described. However, the same applies to the adaptive sample offset, which may be performed before or after the adaptive loop filtering, or without any adaptive loop filtering.

In order to overcome this quality degradation, according to the present invention, the deblocking filter is enabled or disabled individually, i.e. separately from enabling or disabling other filters such as the adaptive loop filter or the adaptive sample offset, and this is signaled accordingly.

According to a preferred embodiment of the present invention, the application of each noise reducing filter is controlled individually for a PCM coded block. This may be achieved by one flag per individual noise reducing filter, which is encoded and transmitted and which indicates whether that filter is to be applied to the PCM coded block. As shown in FIGS. 1 and 3, in the case of three filters, namely a deblocking filter, a sample adaptive offset, and an adaptive loop filter, three separate flags may be used.

However, the present invention is not limited thereto; one flag may be used to control the deblocking filtering, and another flag may be used to commonly control the remaining filters, such as the adaptive loop filter and the adaptive sample offset.

FIG. 9 illustrates an example of a video encoder 900 including respective switches 981, 982, and 983 for individually switching on or off the adaptive loop filter, the adaptive sample offset, and the deblocking filter, respectively. The switch positions may be encoded as respective flags transmitted within the bitstream in order to enable the decoder to perform the filtering in the same way as the encoder.

It is even more advantageous to control the application of each filter to the PCM coded blocks separately for the samples that can be modified by the deblocking filter and for the samples that cannot be modified by the deblocking filter. This control can be achieved by providing additional indicators (flags) encoded and transmitted within the bitstream together with the encoded image data. By controlling the samples modified and not modified by the deblocking filter separately, a finer adaptation of the filtering is achieved. In particular, if the deblocking filtering is not applied to the PCM coded samples, additional noise may be introduced into these samples by the adaptive loop filter or the adaptive sample offset. In general, a deblocking filter may be applied to the image signal in order to improve the subjective quality at the block boundaries. Once the deblocking filter has been applied, the adaptive sample offset and the adaptive loop filtering may be advantageous for further improving the objective image quality.

Moreover, for the PCM coded block, the adaptive loop filtering as well as the sample adaptive offset may be switched on or off based on the result of the deblocking filter decision. In particular, if noise has been introduced by the deblocking, this noise may be reduced by the adaptive loop filtering and/or the adaptive sample offset applied afterwards. If it has been decided that the deblocking filter is not applied, no noise has been introduced by it, and additional noise could be introduced by the adaptive loop filtering and/or the adaptive sample offset; in such a case, it may be beneficial to switch off that filtering as well.

FIGS. 10A, 10B, and 10C show an embodiment of the present invention. In particular, it is determined individually whether a deblocking filter is to be applied to the current block 1010 and whether another filter is to be applied to this block. It is further determined individually whether the subsequent filtering is to be applied to the samples modified by the deblocking filter and to the remaining samples (which are not modified by the deblocking filter). Separate flags are then used to indicate the application of the filtering to the modified block samples and to the unmodified block samples.

FIG. 10A shows a current block 1010 and a block 1020 adjacent to the left of the current block. In this example, it is assumed that on each side the three pixels closest to the boundary between the two blocks can be modified by the deblocking filter. The dashed rectangles in FIG. 10A mark the samples modifiable by the deblocking filter and the samples not modifiable by the deblocking filter in the adjacent blocks 1010 and 1020. It is assumed that the current block 1010 is a PCM coded block without quantization noise, whereas block 1020 is assumed to be a non-PCM coded block with quantization noise. In this example, the blocks are coding units with a size of 16 x 16 samples. An individual flag ("flag 1") indicates whether the deblocking filter is applied to the current block 1010. When the deblocking filter is applied, only the samples near the block boundary are modified (modifiable samples). Note that only the region of horizontal deblocking filtering is shown in FIG. 10A. However, the present invention is not limited thereto, and a vertical filtering across a horizontal block boundary may be applied in the same way.

FIG. 10B shows the same blocks 1010 and 1020, where an adaptive sample offset is applied to the adjacent block 1020, for example to all of its 8 x 8 samples. Whether or not the adaptive sample offset is applied to the current block 1010 is, in this embodiment, determined separately for the samples modified by the deblocking filter and for the samples not modified by the deblocking filter. Accordingly, two flags ("flag 2a" and "flag 2b" in FIG. 10B) may be included in the bitstream to specify whether the adaptive sample offset is applied to the region modifiable by the deblocking filter and whether it is applied to the region not modifiable by the deblocking filter.

FIG. 10C illustrates the application of the adaptive loop filter to the same blocks 1010 and 1020, distinguishing between the samples modified by the deblocking filter and the samples not modified by the deblocking filter. Similarly to the case of the SAO application, two flags ("flag 3a" and "flag 3b" in FIG. 10C) may be included in the bitstream to individually indicate whether the adaptive loop filter is applied to the region of the current block modified by the deblocking filter and to the region of the current block not modified by it.

According to the present invention, the filtering of a PCM coded block can thus be controlled separately for the different filter types such as the deblocking filter, the adaptive sample offset, and the adaptive loop filter. The order in which these filters are applied is not essential for the present invention and may be chosen arbitrarily. An indicator may take the value 0 or 1 and may be a binary flag indicating whether or not the filtering is applied. These flags may be inserted into the bitstream at different positions. Preferably, the flags are inserted into the slice header and apply to all blocks contained in that slice. However, the present invention is not limited thereto, and the indicators may also be conveyed within a parameter set such as the SPS, the PPS, or the APS. In particular, for a fine adjustment of the filtering, the flags may be signaled in units of coding units. The indicator (flag) then indicates the position of the switch that switches the filtering on or off for the particular block sub-region. For example, the position of the switch may be encoded immediately after the information indicating that the block is encoded in the PCM mode.
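On the decoder side, reading and applying the separate indicators may be sketched as follows (read_flag() and the filter routines are hypothetical placeholders, not part of any normative syntax):

/* Hypothetical bitstream reader: return the next flag (one bit). */
int read_flag(void);

/* Placeholder filter routines operating on the reconstructed block. */
void apply_deblocking_filter(int *block);
void apply_second_filter(int *block);   /* e.g. SAO or adaptive loop filter */

void decode_pcm_block_filtering(int *block)
{
    int deblock_flag = read_flag();   /* deblocking filter indicator      */
    int second_flag  = read_flag();   /* separate second filter indicator */

    if (deblock_flag)
        apply_deblocking_filter(block);
    if (second_flag)
        apply_second_filter(block);
}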

To improve the quality of the reconstructed video signal, the boundary between a PCM coded block and an adjacent block can be deblocked if the quantization error of the adjacent block is considered to be high. This can be tested using a threshold. For example, the quantization error may be considered high when the quantization parameter value of the neighboring block exceeds a predetermined threshold. This threshold may be, for instance, a fixed value or an adaptive value coded within the bitstream together with the coded data.

Instead of or in addition to an indicator for enabling or disabling the deblocking filtering, a quantization parameter value QP_PCM may be included in the coded bitstream. This PCM quantization parameter represents the amount of quantization assumed for the PCM coded block and is used for adjusting the deblocking filter for the PCM coded block. The PCM quantization parameter may be determined based on the characteristics of the original video signal input to the PCM encoding. For example, the PCM quantization parameter may depend on the bit depth of the PCM samples.

To enable an adaptive selection of the deblocking or other filtering, the QP_PCM may be determined at the encoder (since the encoder knows the original video signal) by an optimization. For example, different values of QP_PCM may be tested and the resulting subjective video quality evaluated. The value resulting in the highest subjective quality is then taken as QP_PCM. Note that the subjective quality may be assessed by calculating a subjective video quality metric. There are a number of different metrics defined for the assessment of subjective video quality; in general, the present invention is not limited to any particular metric.

Alternatively, QP_PCM may be estimated. At the encoder, the mean squared error is measured for the PCM coded block. Then, the equivalent quantization parameter is determined, i.e. the quantization parameter which would result in the same mean squared error if transform coding were used. The equivalent QP_PCM obtained in this way is then used.
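One possible way to estimate such an equivalent quantization parameter is sketched below. It relies on two assumptions that are not part of the description above: the approximate relation Qstep = 2^((QP - 4) / 6) between the quantization parameter and the quantizer step size, and the uniform-quantizer distortion model MSE = Qstep^2 / 12.

#include <math.h>

/* Mean squared error between the PCM coded block and the original block. */
double block_mse(const int *pcm, const int *orig, int num_samples)
{
    double sum = 0.0;
    for (int i = 0; i < num_samples; i++) {
        double d = (double)pcm[i] - (double)orig[i];
        sum += d * d;
    }
    return sum / num_samples;
}

/* Equivalent QP whose quantizer would produce roughly the same MSE. */
int equivalent_qp_pcm(double mse)
{
    if (mse <= 0.0)
        return 0;                        /* noise-free PCM block */
    double qstep = sqrt(12.0 * mse);     /* invert MSE = Qstep^2 / 12 */
    int qp = (int)lround(6.0 * log2(qstep) + 4.0);
    if (qp < 0) qp = 0;
    if (qp > 51) qp = 51;                /* usual QP range assumed */
    return qp;
}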

However, the above examples of obtaining the parameter QP_PCM are not exhaustive; any parameter indicating the presence or the absence of noise in the PCM coded block may be used for the purposes of the present invention.

FIG. 11 shows two adjacent blocks, one of which is encoded in the PCM encoding mode, while the other is encoded in a non-PCM encoding mode such as the intra or inter prediction modes described in the background section. For non-PCM coded blocks, a quantization parameter is generally signaled, the quantization being applied to the transformed prediction error and resulting in lossy compression.

PCM coded blocks are not quantized. Rather, each sample is assigned a certain number of bits, called the bit depth. However, the PCM coded signal may stem from a video sequence that was quantized beforehand and may therefore contain quantization noise or other noise or defects. According to an aspect of the present invention, the application of the deblocking filtering to the PCM coded samples is controlled as a function of the "PCM quantization parameter" QP_PCM and of the quantization parameters of the non-PCM blocks. For example, an average quantization parameter QP_AVE may be calculated as follows:

QP_AVE = (QP_PCM + QP + 1) >> 1,

where QP denotes the quantization parameter of the adjacent non-PCM coded block.

Here, the operation ">> 1" denotes a shift to the right by one bit, which corresponds to an integer division by two. Based on the calculated average quantization parameter QP_AVE, the adjustment of the deblocking filter may be performed. For example, it may be decided whether the deblocking filter is applied to the PCM coded block. This can be done by comparing the average quantization parameter with a predetermined threshold. The threshold may be a fixed threshold or an adaptive threshold signaled within the video sequence. It may be obtained by applying an optimization based on test images or based on previously coded image signals.
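A minimal sketch of this decision follows (the threshold is assumed to be provided externally, e.g. fixed or signaled in the bitstream):

/* Decide whether the boundary between a PCM coded block and an adjacent
 * non-PCM coded block is to be deblocked, based on the average QP. */
int deblock_pcm_boundary(int qp_pcm, int qp_neighbor, int threshold)
{
    int qp_ave = (qp_pcm + qp_neighbor + 1) >> 1;  /* ">> 1": divide by 2 */
    return qp_ave > threshold;                     /* 1: apply deblocking */
}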

Alternatively or additionally, the deblocking filter may be selected based on a function of the PCM quantization parameter and the quantization parameter of the adjacent block. In particular, the strength (frequency response and filter coefficients) of the applied deblocking filter can be selected. As an optional or additional selection criterion, it may be determined which samples (how many samples) are filtered at the boundaries of the current block.

The PCM quantization parameter QP_PCM may be encoded using predictive coding. The prediction may be, for example, the slice quantization parameter or the quantization parameter of a previously coded block. QP_PCM may be determined per PCM encoded block or per group of blocks and, like the bit depth, encoded and transmitted (inserted into the bitstream). The encoded PCM quantization parameter may be inserted into the header of a slice, a picture, a picture sequence, or the like; alternatively, it may be inserted into an adaptation parameter set (APS) or another parameter set such as the PPS or SPS.
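The sketch below illustrates such predictive coding against the slice quantization parameter; only the difference would be entropy-coded into the chosen header or parameter set. The function names are illustrative.

```python
def encode_qp_pcm_delta(qp_pcm: int, slice_qp: int) -> int:
    """Predictively encode QP_PCM against the slice QP: only the
    difference is written to the bitstream (e.g. into the slice header,
    the APS, PPS, or SPS)."""
    return qp_pcm - slice_qp

def decode_qp_pcm_delta(delta: int, slice_qp: int) -> int:
    """Reconstruct QP_PCM at the decoder from the transmitted delta."""
    return slice_qp + delta

slice_qp = 30
delta = encode_qp_pcm_delta(qp_pcm=26, slice_qp=slice_qp)  # -4, cheap to entropy-code
assert decode_qp_pcm_delta(delta, slice_qp) == 26
```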

Alternatively, the PCM quantization parameter QP_PCM may be derived from the bit depth of the PCM coded block.

FIG. 12 illustrates an example of a method of encoding a block of samples of an image of a video signal into a bitstream according to an embodiment of the present invention. In particular, the input block is encoded (1210) using PCM, so that each sample is represented by a PCM symbol. PCM symbols are, for example, binary symbols with a fixed length of 8 bits per 8-bit sample; however, any other length may be used, such as 6, 7, 9, or 10 bits per sample. PCM encoding may involve increasing or decreasing the bit depth of the original input image signal; if the input image signal already has the desired bit depth, no further action is required. The PCM coded block of samples is then passed on to the determination of whether or not filtering should be applied.

In particular, it is determined (1220) whether a deblocking filter should be applied to the PCM coded block of samples. If it is determined at 1220 that the deblocking filter should be applied ("YES" in step 1230), the PCM block is deblocked (1240). If it is determined at 1220 that no deblocking filtering is to be applied ("NO" in step 1230), the block is not filtered by the deblocking filter. Depending on the characteristics of the vertically and horizontally adjacent blocks, separate decisions may be made for vertical and horizontal filtering. After the determination and possible application of the deblocking filtering, it is determined at 1250 whether a second filter other than the deblocking filter should be applied to the current block of samples. If it is determined that the second filter should be applied ("YES" in step 1260), the second filter is applied to the current block (1270). If it is determined that the second filter should not be applied ("NO" in step 1260), the second type of filter is not applied. In accordance with the determinations of 1220 and 1250, two indicators are inserted into the bitstream. First, a deblocking filter indicator indicating the result of the determination of whether or not the deblocking filter should be applied is included in the bitstream (1280). Then, a second filter indicator indicating the result of the determination of whether or not the second filter should be applied is included in the bitstream (1290).
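The following sketch summarizes this flow. The helpers pcm_encode, should_deblock, should_apply_second_filter, deblock, second_filter, and the bitstream writer are hypothetical stand-ins for the corresponding steps of FIG. 12, not part of any standard API.

```python
def encode_block_with_filter_indicators(block, bitstream, neighbors):
    """Sketch of the FIG. 12 flow: PCM-encode a block, decide the two
    filters independently, and write one indicator per decision."""
    pcm_block = pcm_encode(block)                      # step 1210 (hypothetical helper)

    apply_db = should_deblock(pcm_block, neighbors)    # steps 1220/1230
    if apply_db:
        pcm_block = deblock(pcm_block)                 # step 1240

    apply_2nd = should_apply_second_filter(pcm_block)  # steps 1250/1260
    if apply_2nd:
        pcm_block = second_filter(pcm_block)           # step 1270

    bitstream.write_flag(apply_db)                     # step 1280: deblocking indicator
    bitstream.write_flag(apply_2nd)                    # step 1290: second-filter indicator
    return pcm_block
```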

The determination of whether the deblocking filter should be applied to the PCM coded block of samples may further include determining whether a block adjacent to the block of samples is coded using pulse code modulation or by predictive/transform coding. When the adjacent block is encoded by predictive coding, it is determined that the deblocking filter is applied to the block of samples; otherwise, the deblocking filter is not applied. The assumption here is that when predictive coding is applied, quantization is performed, which reduces the quality of the neighboring block, that is, brings quantization noise into it. The boundary of the original PCM coded block towards this adjacent block is therefore likely to be more visible, resulting in blocking artifacts, so that deblocking is desirable.

Alternatively or additionally, the determination of whether the deblocking filter is applied to the block of samples is made based on comparing the quantization error of the block adjacent to the block of samples with a predetermined threshold.

The determination of whether the second filter is applied to the current block of samples may include determining the amount of quantization noise in the PCM coded current block, and determining, based on the determined amount of quantization noise, whether the second filter is to be applied to the current block. The quantization noise in the PCM coded block may be estimated based on, for example, the bit depth used for PCM coding of the samples. Alternatively or additionally, this quantization noise can be estimated based on prior knowledge of the input image signal. For example, the input image signal may be a signal reconstructed after a previous quantization; if the amount of that previous quantization is known, it can be used to determine whether the second filter is applied. In general, the amount of quantization noise can be estimated using any available estimation or optimization method.

The second filter can be either an adaptive loop filter (ALF) or a sample adaptive offset (SAO); however, other types of noise suppression filters are possible. If the second filter is an adaptive loop filter, a corresponding second indicator may be inserted into the bitstream. In addition, a third filter, which may be a sample adaptive offset, may be applied after its own determination, and a corresponding third indicator may be included in the bitstream independently of the indicators regarding the application of the deblocking filter and the adaptive loop filter.

Alternatively, the second filter indicator may be a binary indicator that jointly indicates whether both ALF and SAO apply to the block of samples. If this indicator has a value of 0, neither ALF nor SAO is applied; if it has a value of 1, both ALF and SAO are applied.

According to an embodiment of the present invention, determining whether the second filter is applied to the block of samples further includes determining whether the second filter is applied to the samples of the block that are changeable by deblocking filtering, and determining whether the second filter is applied to the samples of the block that are not changeable by deblocking filtering. Corresponding indicators are included in the bitstream: a changeable sample indicator indicating the result of determining whether the second filter is applied to the changeable samples, and a non-changeable sample indicator indicating the result of determining whether the second filter is applied to the non-changeable samples. This embodiment provides the advantage of spatially separating the samples changed by the deblocking filter from the samples not changed by it. In particular, the deblocking filter aims at improving subjective image quality; however, it may worsen the objective image quality, i.e., the pixel-wise difference between the original signal and the filtered signal. In order to improve the objective quality of the deblocked samples, the determination of whether or not to apply further filtering to them is made separately from the determination for the samples not changed by the deblocking filter.

The terms "modifiable" and / or "modified" by a deblocking filter refer to samples adjacent to the boundary between blocks where the deblocking filter is applicable. In general, deblocking filtering is only applicable to one, two, or three samples closest to the boundary. Irrespective of whether or not deblocking filtering is actually applied to a "modifiable sample", it is possible to form an area in which determination of whether to enable or disable another filtering can be made separately from the above description. More details of the determination and selection of the deblocking filter known from the prior art are described below. The invention is not limited to separating the mutable and unmodifiable samples of the block. Alternatively, the sample that has actually changed and the sample that has not changed can form separate areas where the determination is made separately.

The determination of whether the second filter is applied to the block of samples may be made based on the determination of whether the deblocking filter is applied to the block of samples. Similarly, the indicator indicating whether the second filter is applied to the block of samples may be encoded predictively, using the indicator indicating whether the deblocking filter is applied to the block of samples as a prediction.

Figure 13 illustrates a method of decoding a block of PCM encoded samples of an image of a video signal from a bitstream in accordance with the inventive arrangements. The method includes extracting from the bitstream a deblocking filter indicator indicating whether a deblocking filter is to be applied to the block of samples (1310), and extracting from the bitstream a second filter indicator, separate from the deblocking filter indicator, indicating whether a second filter is to be applied to the block of samples (1320). When the deblocking filter indicator indicates that deblocking filtering is to be applied ("YES" in step 1330), deblocking is applied (1340). Next, when the second filter indicator indicates that the second filtering is to be applied ("YES" in step 1350), the second filter is applied (1360). If an indicator indicates that the respective filter is not to be applied, that filter is not applied.

Corresponding to the encoder described above, the second filter indicator may indicate whether both the adaptive loop filter and the sample adaptive offset apply to the block of samples, or there may be two different indicators extracted from the bitstream, one indicating whether the adaptive loop filter is applied to the block of samples and the other indicating whether the sample adaptive offset is applied to the block of samples. Alternatively or additionally, a changeable sample indicator may be extracted and used to indicate whether the second filter is applied to samples of the block that could have been changed by deblocking filtering, and/or a non-changeable sample indicator may be decoded, indicating whether the second filter is applied to samples of the block that could not have been changed by deblocking filtering.
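A minimal decoder-side sketch of the FIG. 13 flow follows; read_flag, deblock, and second_filter are hypothetical helpers mirroring the encoder sketch given earlier.

```python
def decode_block_with_filter_indicators(bitstream, pcm_block):
    """Sketch of the FIG. 13 flow: read the two indicators first, then
    apply only the signaled filters."""
    apply_db = bitstream.read_flag()   # step 1310: deblocking filter indicator
    apply_2nd = bitstream.read_flag()  # step 1320: separate second-filter indicator

    if apply_db:                       # steps 1330/1340
        pcm_block = deblock(pcm_block)
    if apply_2nd:                      # steps 1350/1360
        pcm_block = second_filter(pcm_block)
    return pcm_block
```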

According to another arrangement of the present invention, there is provided a method for decoding, from a bitstream, a block of samples of an image of a video signal, the block of samples being encoded by pulse-code modulation (PCM). The method comprises extracting from the bitstream a PCM quantization parameter representing the amount of noise in the block of samples (1610), selecting a deblocking filter to be applied to the block based on the extracted PCM quantization parameter (1620), and applying the selected deblocking filter to the block of samples (1640).

An example of the decoding method is shown in FIG. 16A. In particular, a "PCM quantization parameter" is extracted (1610) from the bitstream. Thereafter, the selection (1620) of the deblocking filter is made according to the extracted PCM quantization parameter QP_PCM, which includes a determination of whether or not the deblocking filter is to be applied. When deblocking filtering is to be applied ("YES" in step 1630), the block is filtered (1640) by the selected deblocking filter; otherwise, the block is not deblocked.

Similarly, a method is provided for encoding a block of samples of an image of a video signal into a bitstream by pulse-code modulation (PCM), the method comprising determining a PCM quantization parameter indicative of the amount of noise in the block of samples (1650), selecting a deblocking filter to be applied to the block based on the determined PCM quantization parameter (1660), applying the selected deblocking filter to the block of samples (1670), and including the PCM quantization parameter in the bitstream (1680). This method is shown in the flowchart of FIG. 16B.

Selecting the deblocking filter may be based on comparing, with a predetermined threshold, a function of the PCM quantization parameter and the amount of quantization (or quantization step size) applied to a block adjacent to the block of samples. In particular, it may be based on comparing a function of the PCM quantization parameter and the quantization parameter associated with the adjacent block with such a threshold.

This function can be an average. However, other functions are possible, such as minimum, maximum, weighted average, and the like.
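For illustration, the following snippet lists some such combining functions; the 3:1 weighting in the last entry is an arbitrary example.

```python
# Illustrative combining functions for the two block quantization
# parameters; any of these could serve as the "function" mentioned above.
combine = {
    "average":  lambda a, b: (a + b + 1) >> 1,     # rounded mean
    "minimum":  lambda a, b: min(a, b),
    "maximum":  lambda a, b: max(a, b),
    "weighted": lambda a, b: (3 * a + b + 2) >> 2, # example 3:1 weighting
}

qp_a, qp_b = 26, 38
for name, f in combine.items():
    print(name, f(qp_a, qp_b))
```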

Selecting the deblocking filter may include determining whether or not to apply the deblocking filter to the block of samples; this determination, like the filter selection, may be performed in the same manner at both the encoder and the decoder.

Optionally or additionally, selecting the deblocking filter may include selecting the filter passband (frequency response) and/or selecting the samples in the block to which the filter is applied.

The PCM quantization parameter may be based on the bit depth of the PCM encoding of the block samples. The PCM quantization parameter may be encoded using prediction, derived from the bit depth of the PCM encoded samples, and/or encoded by an entropy code. It may be inserted into a picture header, an image slice header, information related to the block of samples, or additional information related to a plurality of video pictures in the bitstream.

According to another configuration of the invention, there is provided an apparatus for decoding, from a bitstream, a block of samples of an image of a video signal, the block of samples being encoded by pulse-code modulation (PCM). The apparatus comprises an extraction unit for extracting from the bitstream a PCM quantization parameter representing the amount of noise in the block of samples, a filter selection unit for selecting a deblocking filter to be applied to the block based on the extracted PCM quantization parameter, and a filtering unit for applying the selected deblocking filter to the block of samples.

According to another configuration of the present invention, there is provided an apparatus for encoding a block of samples of an image of a video signal into a bitstream by pulse-code modulation (PCM). The apparatus comprises a parameter determination unit for determining a PCM quantization parameter representing the amount of noise in the block of samples, a filter selection unit for selecting a deblocking filter to be applied to the block based on the determined PCM quantization parameter, a filtering unit for applying the selected deblocking filter to the block of samples, and an embedding unit for including the PCM quantization parameter in the bitstream.

The filter selection unit may be configured to select the deblocking filter based on a comparison of the function of the PCM quantization parameter and the quantization parameter associated with the adjacent block. The filter selection unit may be configured to determine whether to apply the deblocking filter to the block of samples. Alternatively or additionally, the filter selection unit may be configured to select the strength and / or number of samples in the block to which the filter is applied.

The apparatus of the present invention may be implemented by adapting the respective filtering units (deblocking filtering unit 150 in the encoder and deblocking filtering unit 250 in the decoder) and/or the ALF and/or SAO filtering units so as to allow the determination, selection, and filtering described above.

The parameter determination unit may be configured to estimate the PCM quantization parameter either as the value that maximizes subjective quality after deblocking filtering, or as the quantization parameter that would generate the same amount of noise if transform coding were applied.

As described above, the determination and/or selection of the deblocking filter may be done according to the PCM quantization parameter. According to another embodiment of the present invention, the selection and determination of the deblocking filter may be performed in the same way whether or not a PCM coded block is involved. In particular, with reference to FIG. 15, there are basically three possibilities for adjacent blocks A and B for which deblocking filtering may be advantageous.

Block A is a non-PCM coded block, and block B is also a non-PCM coded block. In this case, block A is characterized by the quantization parameter applied to the encoding of block A (QP_A = QP(A)). The same applies to block B (QP_B = QP(B)).

Block A is a non-PCM coded block and block B is a PCM coded block (or vice versa). In this case, block A is characterized by the quantization parameter QP_A = QP(A), and block B is characterized by an estimated "PCM quantization parameter" (QP_B = QP_PCM(B)) that represents the amount of noise in block B.

Block A and block B are both PCM coded blocks. In this case, both block A and block B are characterized by estimated "PCM quantization parameters" QP_A = QP_PCM(A) and QP_B = QP_PCM(B), which represent the respective amounts of noise in block A and block B.

A function of the quantization parameters of blocks A and B can then be used for the determination and/or selection of the deblocking filter. For example, one of the following functions of QP_A and QP_B may be used:

QP = (QP_A + QP_B + 1) >> 1

or

QP = max(QP_A, QP_B)

However, these are only examples, and other functions may be applied.

Similar to FIG. 14, a decision may be made first regarding whether to apply filtering to the entire block A and / or B. For example,

if

d = |p2,0 − 2·p1,0 + p0,0| + |q2,0 − 2·q1,0 + q0,0| + |p2,3 − 2·p1,3 + p0,3| + |q2,3 − 2·q1,3 + q0,3| < β(QP),

deblocking is enabled, where the threshold β is derived from the quantization parameter QP obtained by the function above.

If deblocking filtering is enabled for blocks A and / or B, it is determined line by line whether to apply deblocking filtering to a particular line (row or column) of the block.

If

2·(|p2,i − 2·p1,i + p0,i| + |q2,i − 2·q1,i + q0,i|) < (β >> 2)

and

|p3,i − p0,i| + |q0,i − q3,i| < (β >> 3) and |p0,i − q0,i| < ((5·t_c + 1) >> 1),

it may be decided to apply strong filtering.

Otherwise, weak filtering or no filtering is applied for the particular line i.

Strong filtering can be done as shown in the background art section above, i.e., by the strong filter operations described there. For the weak filtering, a delta value is calculated for each line i as

Δ = (9·(q0,i − p0,i) − 3·(q1,i − p1,i) + 8) >> 4

and it is determined that the filtering is performed only if

|Δ| < 10·t_c

Otherwise, deblocking filtering is not done. When filtering is done (weak deblocking filtering), the following value (Δ1) is calculated:

Δ1 = Clip3(−t_c, t_c, Δ)

The boundary pixels closest to both blocks A and B are filtered as follows:

p0,i′ = Clip(p0,i + Δ1) and q0,i′ = Clip(q0,i − Δ1), where Clip() clips the result to the valid sample value range.

It is also determined whether the second pixel closest to the boundary is filtered.

If d_p = |p2,i − 2·p1,i + p0,i| < ((β + (β >> 1)) >> 3), pixel p1,i is filtered; otherwise, it is not filtered by the deblocking filter. Similarly, if d_q = |q2,i − 2·q1,i + q0,i| < ((β + (β >> 1)) >> 3), pixel q1,i is filtered; otherwise, it is not filtered by the deblocking filter. The filtering is done as follows:

Δ2p = Clip3(−(t_c >> 1), t_c >> 1, (((p2,i + p0,i + 1) >> 1) − p1,i + Δ1) >> 1),  p1,i′ = Clip(p1,i + Δ2p)

Δ2q = Clip3(−(t_c >> 1), t_c >> 1, (((q2,i + q0,i + 1) >> 1) − q1,i − Δ1) >> 1),  q1,i′ = Clip(q1,i + Δ2q)
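The following sketch gathers the weak-filtering operations reconstructed above into one routine for a single line. It assumes 8-bit samples and that the side decisions use the unfiltered sample values; the equations themselves are reconstructions of the HEVC-style prior-art filter the text refers to, not a definitive implementation.

```python
def clip3(lo, hi, v):
    return max(lo, min(hi, v))

def weak_filter_line(p, q, tc, beta, max_val=255):
    """Weak deblocking of one line across the boundary between blocks A
    and B. p = [p0, p1, p2] and q = [q0, q1, q2] hold the samples on
    either side of the boundary, ordered outward from it."""
    delta = (9 * (q[0] - p[0]) - 3 * (q[1] - p[1]) + 8) >> 4
    if abs(delta) >= 10 * tc:
        return p, q                          # no deblocking for this line

    # Side decisions are taken on the unfiltered samples.
    dp = abs(p[2] - 2 * p[1] + p[0])
    dq = abs(q[2] - 2 * q[1] + q[0])
    side_thr = (beta + (beta >> 1)) >> 3
    p0_orig, q0_orig = p[0], q[0]

    d1 = clip3(-tc, tc, delta)
    p[0] = clip3(0, max_val, p[0] + d1)      # boundary pixel of block A
    q[0] = clip3(0, max_val, q[0] - d1)      # boundary pixel of block B

    if dp < side_thr:                        # second pixel on the p side
        d2p = clip3(-(tc >> 1), tc >> 1,
                    (((p[2] + p0_orig + 1) >> 1) - p[1] + d1) >> 1)
        p[1] = clip3(0, max_val, p[1] + d2p)
    if dq < side_thr:                        # second pixel on the q side
        d2q = clip3(-(tc >> 1), tc >> 1,
                    (((q[2] + q0_orig + 1) >> 1) - q[1] - d1) >> 1)
        q[1] = clip3(0, max_val, q[1] + d2q)
    return p, q
```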

This method has the advantage that the filtering of PCM coded blocks and non-PCM coded blocks is performed in the same manner: the PCM quantization parameter represents the noise characteristic of the PCM coded block and can thus be used in the same way as the quantization parameter of a non-PCM block.

In general, since the PCM quantization parameter can be used to determine whether to enable or disable the deblocking filter for a block of samples, it may itself be considered an indicator of whether or not deblocking filtering is to be applied to the block of samples.

The above embodiments all relate to a block or a coding unit. However, as will be apparent to those skilled in the art, the present invention can also be applied to image regions of a shape or size different from the blocks or coding units used in HEVC.

The processing described in each of the embodiments can be simply implemented in an independent computer system by recording, on a recording medium, a program for implementing the configurations of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each embodiment. The recording medium may be any recording medium on which a program can be recorded, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, or a semiconductor memory.

The following describes applications of the moving picture coding method (image coding method) and the moving picture decoding method (image decoding method) described in each embodiment, and systems using them. The systems are characterized by including an image coding and decoding apparatus, in which the image encoding apparatus uses the image encoding method and the image decoding apparatus uses the image decoding method. Other configurations of the systems may be changed as appropriate.

(Example A)

FIG. 17 shows the overall configuration of a content providing system ex100 for implementing content distribution services. The area for providing communication services is divided into cells of a desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are placed in each of the cells.

In the content providing system ex100, devices such as a computer ex111, a personal digital assistant (PDA) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider (ISP) ex102, a telephone network ex104, and the base stations ex106 to ex110, respectively.

However, the configuration of the content providing system ex100 is not limited to the configuration illustrated in FIG. 17, and any combination of arbitrary elements may be connected. In addition, each device may be directly connected to the telephone network ex104 rather than through the base stations ex106 to ex110 which are fixed wireless stations. In addition, the devices may be connected to each other via short range wireless communication or the like.

A camera ex113, such as a digital video camera, can capture video. A camera ex116, such as a digital camera, can capture both still images and video. The mobile phone ex114 may conform to any of standards such as Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband-Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and High Speed Packet Access (HSPA). Alternatively, the mobile phone ex114 may be a Personal Handyphone System (PHS).

In the content providing system ex100, the streaming server ex103 is connected to the camera ex113 and the like via the telephone network ex104 and the base station ex109, which enables distribution of live broadcast images and the like. In such a distribution, the content (e.g., video of a live music show) captured by the user using the camera ex113 is encoded as described above in each embodiment (i.e., the camera functions as the image encoding device according to the configuration of the present invention), and the encoded content is transmitted to the streaming server ex103. On the other hand, upon receiving a request, the streaming server ex103 distributes the transmitted content data as a stream to the clients. The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, which are capable of decoding the above-mentioned encoded data. Each device that receives the distributed data decodes and reproduces the coded data (i.e., functions as the image decoding device according to the configuration of the present invention).

The captured data may be encoded by the camera ex113 or by the streaming server ex103 that transmits the data, or the encoding process may be shared between the camera ex113 and the streaming server ex103. Similarly, the distributed data may be decoded by the clients or by the streaming server ex103, or the decoding process may be shared between the clients and the streaming server ex103. Furthermore, data of the still images and video captured not only by the camera ex113 but also by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111. The encoding process may be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them.

Furthermore, the encoding and decoding processes may be performed by an LSI ex500 generally included in each of the computer ex111 and the devices. The LSI ex500 may be configured as a single chip or a plurality of chips. Software for encoding and decoding video may be integrated into some type of recording medium (such as a CD-ROM, a flexible disk, or a hard disk) that is readable by the computer ex111 and the like, and the encoding and decoding processes may be performed using the software. Furthermore, when the mobile phone ex114 is equipped with a camera, the video data obtained by the camera may be transmitted; that video data is data encoded by the LSI ex500 included in the mobile phone ex114.

In addition, the streaming server ex103 may be configured as a server and a computer, and may distribute data, process the distributed data, and record or distribute the data.

As described above, the clients can receive and reproduce the encoded data in the content providing system ex100. In other words, the clients can receive and decode information transmitted by the user, and reproduce the decoded data in real time in the content providing system ex100, so that even a user who has no particular rights or equipment can implement personal broadcasting.

Aside from the example of the content providing system ex100, at least one of the moving picture coding apparatus (image coding apparatus) and the moving picture decoding apparatus (image decoding apparatus) described in each embodiment may be implemented in a digital broadcasting system ex200 shown in FIG. 18. More specifically, a broadcast station ex201 communicates or transmits, via radio waves to a broadcast satellite ex202, multiplexed data obtained by multiplexing audio data and the like onto video data. The video data is data encoded by the moving picture coding method described in each embodiment (i.e., data encoded by the image encoding device according to the configuration of the present invention). Upon receipt of the multiplexed data, the broadcast satellite ex202 transmits radio waves for broadcasting. Then, a home-use antenna ex204 with a satellite broadcast reception function receives the radio waves. Next, a device such as a television (receiver) ex300 or a set top box (STB) ex217 decodes the received multiplexed data and reproduces the decoded data (i.e., functions as the image decoding apparatus according to the configuration of the present invention).

Furthermore, a reader/recorder ex218 reads and decodes the multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or encodes a video signal onto the recording medium ex215 and, in some cases, writes data obtained by multiplexing an audio signal onto the encoded data. The reader/recorder ex218 may include the moving picture decoding apparatus or the moving picture encoding apparatus as shown in each embodiment. In this case, the reproduced video signal is displayed on a monitor ex219, and can be reproduced by another apparatus or system using the recording medium ex215 on which the multiplexed data is recorded. A moving picture decoding apparatus may also be implemented in the set top box ex217 connected to the cable ex203 for cable television or to the antenna ex204 for satellite and/or terrestrial broadcasting, so as to display the video signal on the monitor ex219 of the television ex300. The moving picture decoding apparatus may be implemented not in the set top box but in the television ex300 itself.

Fig. 19 shows a television (receiver) ex300 that uses the moving picture coding method and the moving picture decoding method described in each embodiment. The television ex300 includes: a tuner ex301 that obtains or provides multiplexed data, obtained by multiplexing audio data onto video data, through an antenna ex204 or a cable ex203 that receives a broadcast; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates data into multiplexed data to be supplied to the outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes the video data and audio data encoded by a signal processing unit ex306 into data.

The television ex300 further includes: a signal processing unit ex306 including an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data and encode audio data and video data, respectively (and which function as the image encoding apparatus and the image decoding apparatus according to the configuration of the present invention); and an output unit ex309 including a speaker ex307 that provides the decoded audio signal and a display unit ex308 that displays the decoded video signal. Furthermore, the television ex300 includes an interface unit ex317 having an operation input unit ex312 that receives inputs of user operations. The television ex300 also includes a control unit ex310 that controls each constituent element of the television ex300 as a whole, and a power supply circuit unit ex311 that supplies power to each of the elements. In addition to the operation input unit ex312, the interface unit ex317 may include: a bridge ex313 connected to an external device such as the reader/recorder ex218; a slot unit ex314 for enabling attachment of a recording medium ex216 such as an SD card; a driver ex315 to be connected to an external recording medium such as a hard disk; and a modem ex316 to be connected to a telephone network. Here, the recording medium ex216 can electrically record information using a nonvolatile/volatile semiconductor memory element for storage. The constituent elements of the television ex300 are connected to each other through a synchronous bus.

First, the configuration in which the television ex300 decodes the multiplexed data obtained from the outside through the antenna ex204 and the like, and reproduces the decoded data, will be described. In the television ex300, upon a user operation through a remote controller ex220 or the like, the multiplexing/demultiplexing unit ex303 demultiplexes the multiplexed data demodulated by the modulation/demodulation unit ex302, under the control of the control unit ex310 including a CPU. Furthermore, in the television ex300, the audio signal processing unit ex304 decodes the demultiplexed audio data, and the video signal processing unit ex305 decodes the demultiplexed video data, using the decoding method described in each embodiment. The output unit ex309 provides the decoded video signal and audio signal to the outside. When the output unit ex309 provides the video signal and the audio signal, the signals are temporarily stored in buffers ex318 and ex319 and the like so that they are reproduced in synchronization with each other. Furthermore, the television ex300 may read the multiplexed data not only through a broadcast and the like but also from the recording media ex215 and ex216, such as a magnetic disk, an optical disk, and an SD card. Next, the configuration in which the television ex300 encodes an audio signal and a video signal, and transmits the data to the outside or writes the data on a recording medium, will be described. In the television ex300, upon a user operation through the remote controller ex220 or the like, the audio signal processing unit ex304 encodes an audio signal, and the video signal processing unit ex305 encodes a video signal, under the control of the control unit ex310 using the encoding method described in each embodiment. The multiplexing/demultiplexing unit ex303 multiplexes the encoded video signal and audio signal, and provides the resulting signal to the outside. When the multiplexing/demultiplexing unit ex303 multiplexes the video signal and the audio signal, the signals are temporarily stored in buffers ex320 and ex321 and the like so that they are multiplexed in synchronization with each other. Here, the buffers ex318, ex319, ex320, and ex321 may be plural as illustrated, or at least one buffer may be shared in the television ex300. Furthermore, data may be stored in a buffer so that system overflow and underflow between, for example, the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303 can be avoided.

Furthermore, the television ex300 may include a configuration for receiving an AV input from a microphone or a camera, in addition to the configuration for obtaining audio and video data from a broadcast or a recording medium, and may encode the obtained data. Although the television ex300 is described above as being able to encode, multiplex, and provide data to the outside, it may instead be capable only of receiving, decoding, and providing data to the outside, without the encoding, multiplexing, and provision.

In addition, when the reader / recorder ex218 reads the multiplexed data from the recording medium or records on the recording medium, one of the television ex300 and the reader / recorder ex218 may decode or encode the multiplexed data. The television ex300 and the reader / recorder ex218 may share decoding or encoding.

As an example, FIG. 20 shows the configuration of an information reproducing/recording unit ex400 used when data is read from or written to an optical disk. The information reproducing/recording unit ex400 includes constituent elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407, which are described below. An optical head ex401 irradiates a laser spot onto the recording surface of the recording medium ex215, which is an optical disk, to write information, and detects the light reflected from the recording surface of the recording medium ex215 to read the information.

A modulation recording unit ex402 electrically drives a semiconductor laser included in the optical head ex401, and modulates the laser light according to the recorded data. A reproduction demodulation unit ex403 amplifies the reproduction signal obtained by electrically detecting, with a photodetector included in the optical head ex401, the light reflected from the recording surface, separates the signal components recorded on the recording medium ex215, and demodulates the reproduced signal to reproduce the necessary information. A buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215. A disc motor ex405 rotates the recording medium ex215. A servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disc motor ex405 so as to follow the laser spot. A system control unit ex407 controls the information reproducing/recording unit ex400 as a whole. The reading and writing processes can be implemented by the system control unit ex407 using various information stored in the buffer ex404, generating and adding new information as necessary, and by the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406, which record and reproduce information through the optical head ex401 while operating in a coordinated manner. The system control unit ex407 includes, for example, a microprocessor, and executes the processing by causing a computer to execute a program for reading and writing.

In the above description, the optical head ex401 irradiates a laser spot, but high density recording can be performed using near field light.

FIG. 21 shows the recording medium ex215, which is an optical disk. On the recording surface of the recording medium ex215, guide grooves are formed spirally, and an information track ex230 records, in advance, address information indicating the absolute position on the disc according to changes in the shape of the guide grooves. The address information includes information for determining the positions of recording blocks ex231, which are units for recording data. An apparatus that records and reproduces data can determine the position of a recording block by reproducing the information track ex230 and reading the address information. Furthermore, the recording medium ex215 includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The data recording area ex233 is the area used for recording user data. The inner circumference area ex232 and the outer circumference area ex234, which are inside and outside the data recording area ex233 respectively, are for specific uses other than the recording of user data. The information reproducing/recording unit ex400 reads and writes encoded audio, encoded video data, or multiplexed data obtained by multiplexing the encoded audio and video data, from and into the data recording area ex233 of the recording medium ex215.

In the above description, an optical disc having one layer, such as DVD and BD, has been described as an example, but the optical disc is not limited to this, and may be an optical disc having a multi-layer structure and which can be recorded on a part other than the surface. Further, the optical disc may have a structure for multidimensional recording / reproducing, such as recording information using color light having different wavelengths on the same portion of the optical disc, and a structure for recording information having different layers from various angles.

Furthermore, a vehicle ex210 having an antenna ex205 can receive data from the satellite ex202 and the like in the digital broadcasting system ex200, and reproduce video on a display device such as a car navigation system ex211 installed in the vehicle ex210. Here, the configuration of the car navigation system ex211 is, for example, the configuration shown in FIG. 19 with the addition of a GPS receiving unit. The same applies to the configurations of the computer ex111, the mobile phone ex114, and the like.

FIG. 22A shows a mobile phone ex114 that uses the moving picture coding method and the moving picture decoding method described in the embodiments. The mobile phone ex114 includes: an antenna ex350 for transmitting and receiving radio waves through the base station ex110; a camera unit ex365 capable of capturing moving and still images; and a display unit ex358, such as a liquid crystal display, for displaying data such as decoded video captured by the camera unit ex365 or received by the antenna ex350. The mobile phone ex114 further includes: a main body unit including an operation key unit ex366; an audio output unit ex357 such as a speaker for outputting audio; an audio input unit ex356 such as a microphone for inputting audio; a storage unit ex367 for storing captured video or still pictures, recorded audio, encoded or decoded data of received video, still pictures, e-mails, or the like; and a slot unit ex364, which is an interface unit for a recording medium that stores data in the same manner as the storage unit ex367.

Next, an example configuration of the mobile phone ex114 will be described with reference to FIG. 22B. In the mobile phone ex114, a main control unit ex360, designed to control each unit of the main body including the display unit ex358 and the operation key unit ex366, is connected, via a synchronous bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, a liquid crystal display (LCD) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the storage unit ex367.

When a call-end key or a power key is turned ON by a user's operation, the power supply circuit unit ex361 supplies the respective units with power from a battery pack so as to activate the mobile phone ex114.

In the mobile phone ex114, the audio signal processing unit ex354 converts the audio signal collected by the audio input unit ex356 in voice conversation mode into a digital audio signal, under the control of the main control unit ex360 including a CPU, a ROM, and a RAM. The modulation/demodulation unit ex352 performs spread spectrum processing on the digital audio signal, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data, so as to transmit the resulting data via the antenna ex350. Also, in the mobile phone ex114, the transmitting and receiving unit ex351 amplifies the data received by the antenna ex350 in voice conversation mode and performs frequency conversion and analog-to-digital conversion on the data. Then, the modulation/demodulation unit ex352 performs inverse spread spectrum processing on the data, and the audio signal processing unit ex354 converts it into an analog audio signal, which is output via the audio output unit ex357.

Furthermore, when an e-mail is transmitted in data communication mode, the text data of the e-mail, input by operating the operation key unit ex366 and the like of the main body, is sent to the main control unit ex360 via the operation input control unit ex362. The main control unit ex360 causes the modulation/demodulation unit ex352 to perform spread spectrum processing on the text data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the resulting data and transmits it to the base station ex110 via the antenna ex350. When an e-mail is received, processing that is approximately the reverse of the processing for transmitting an e-mail is performed on the received data, and the resulting data is provided to the display unit ex358.

When video, still images, or video and audio are transmitted in data communication mode, the video signal processing unit ex355 compresses and encodes the video signal supplied from the camera unit ex365 using the moving picture coding method described in each embodiment (i.e., functions as the image encoding device according to the configuration of the present invention), and transmits the encoded video data to the multiplexing/demultiplexing unit ex353. In contrast, while the camera unit ex365 captures video, still images, and the like, the audio signal processing unit ex354 encodes the audio signal collected by the audio input unit ex356, and transmits the encoded audio data to the multiplexing/demultiplexing unit ex353.

The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354, using a predetermined method. Then, the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 performs spread spectrum processing on the multiplexed data, and the transmitting and receiving unit ex351 performs digital-to-analog conversion and frequency conversion on the data so as to transmit the resulting data via the antenna ex350.

When receiving data of a video file linked to a web page or the like in data communication mode, or when receiving an e-mail with video and/or audio attached, in order to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bitstream and an audio data bitstream, and supplies the encoded video data to the video signal processing unit ex355 and the encoded audio data to the audio signal processing unit ex354, through the synchronous bus ex370. The video signal processing unit ex355 decodes the video signal using a moving picture decoding method corresponding to the moving picture coding method described in each embodiment (i.e., functions as the image decoding device according to the configuration of the present invention), and the display unit ex358 then displays, for instance, the video and still images included in the video file linked to the web page via the LCD control unit ex359. Furthermore, the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 provides the audio.

Furthermore, similarly to the television ex300, a terminal such as the mobile phone ex114 probably has three types of implementation configurations: (i) a transmitting and receiving terminal including both an encoding apparatus and a decoding apparatus; (ii) a transmitting terminal including only an encoding apparatus; and (iii) a receiving terminal including only a decoding apparatus. Although the digital broadcasting system ex200 is described above as transmitting and receiving multiplexed data obtained by multiplexing audio data onto video data, the multiplexed data may be data obtained by multiplexing not audio data but character data related to the video onto the video data, and may be not multiplexed data but the video data itself.

As such, the video encoding method and the video decoding method of each embodiment can be used in one of the described apparatuses and systems. Thus, the advantages described in each embodiment can be obtained.

In addition, the present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the scope of the present invention.

(Example B)

Video data may be generated by switching, as necessary, between (i) the moving picture coding method or the moving picture coding apparatus described in each embodiment and (ii) a moving picture coding method or a moving picture coding apparatus conforming to a different standard, such as MPEG-2, MPEG-4 AVC, or VC-1.

Here, when a plurality of video data items conforming to different standards are decoded, decoding methods need to be selected that conform to the respective standards. However, since it cannot be detected which standard each of the plurality of video data items to be decoded conforms to, there is a problem that an appropriate decoding method cannot be selected.

In order to solve the above problem, the multiplexed data obtained by multiplexing audio data and the like into video data has a structure including identification information indicating which standard the video data conforms to. A specific structure of the multiplexed data including video data generated by the video encoding method and the video encoding apparatus described in each embodiment will be described below. The multiplexed data is a digital stream in MPEG-2 transport stream format.

FIG. 23 shows the structure of the multiplexed data. As shown in FIG. 23, the multiplexed data is obtained by multiplexing at least one of a video stream, an audio stream, a presentation graphics (PG) stream, and an interactive graphics (IG) stream. The video stream represents the primary video and secondary video of a movie, the audio stream represents the primary audio part and the secondary audio part to be mixed with the primary audio part, and the presentation graphics stream represents the subtitles of the movie. Here, the primary video is the normal video displayed on a screen, and the secondary video is video displayed in a smaller window within the primary video. The interactive graphics stream represents an interactive screen created by arranging GUI components on a screen. The video stream is encoded by the moving picture coding method or the moving picture coding apparatus shown in each embodiment, or by a moving picture coding method or a moving picture coding apparatus conforming to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1. The audio stream is encoded according to a standard such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.

Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the video of the movie, 0x1100 to 0x111F is assigned to the audio stream, 0x1200 to 0x121F is assigned to the presentation graphics stream, and 0x1400 to 0x141F is assigned to the interactive graphics stream. 0x1B00 to 0x1B1F are assigned to the video stream used for the secondary video of the movie, and 0x1A00 to 0x1A1F are assigned to the audio stream used for the secondary audio mixed to the primary audio.

FIG. 24 schematically illustrates how data is multiplexed. First, a video stream ex235 composed of video frames and an audio stream ex238 composed of audio frames are converted into a stream of PES packets ex236 and a stream of PES packets ex239, and further into TS packets ex237 and TS packets ex240, respectively. Similarly, the data of a presentation graphics stream ex241 and the data of an interactive graphics stream ex244 are converted into a stream of PES packets ex242 and a stream of PES packets ex245, and further into TS packets ex243 and TS packets ex246, respectively. These TS packets are multiplexed into a single stream to obtain the multiplexed data ex247.

FIG. 25 illustrates in more detail how a video stream is stored in a stream of PES packets. The first row of FIG. 25 shows a video frame stream in the video stream, and the second row shows the stream of PES packets. As indicated by the arrows yy1, yy2, yy3, and yy4 in FIG. 25, the video stream is divided into pictures such as I pictures, B pictures, and P pictures, each of which is a video presentation unit, and the pictures are stored in the payloads of the respective PES packets. Each PES packet has a PES header, and the PES header stores a Presentation Time-Stamp (PTS) indicating the display time of the picture and a Decoding Time-Stamp (DTS) indicating the decoding time of the picture.

Fig. 26 shows the format of the TS packets finally written into the multiplexed data. Each TS packet is a 188-byte fixed-length packet including a 4-byte TS header carrying information such as a PID for identifying the stream, and a 184-byte TS payload for storing data. The PES packets are divided and stored in the TS payloads. When a BD-ROM is used, each TS packet is given a 4-byte TP_Extra_Header, resulting in 192-byte source packets, which are written into the multiplexed data. The TP_Extra_Header stores information such as an Arrival_Time_Stamp (ATS). The ATS indicates the transfer start time at which the TS packet is to be transferred to a PID filter. The source packets are arranged in the multiplexed data as shown at the bottom of FIG. 26. The numbers incrementing from the head of the multiplexed data are called source packet numbers (SPNs).
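As a small illustration of this packet layout, the sketch below extracts the PID from a TS packet header and splits a 192-byte source packet into its ATS and TS packet parts; it assumes the standard MPEG-2 TS header layout and the 30-bit arrival time stamp of the BD-ROM TP_Extra_Header.

```python
def parse_ts_pid(packet: bytes) -> int:
    """Return the 13-bit PID from the 4-byte header of a 188-byte
    MPEG-2 TS packet."""
    assert len(packet) == 188 and packet[0] == 0x47   # 0x47 is the sync byte
    return ((packet[1] & 0x1F) << 8) | packet[2]      # 13-bit PID field

def split_source_packet(sp: bytes):
    """Split a 192-byte BD-ROM source packet into its arrival time stamp
    and the embedded 188-byte TS packet."""
    assert len(sp) == 192
    ats = int.from_bytes(sp[:4], "big") & 0x3FFFFFFF  # 30-bit ATS in TP_Extra_Header
    return ats, sp[4:]
```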

Each of the TS packets included in the multiplexed data includes not only streams of audio, video, and subtitles but also a Program Association Table (PAT), a Program Map Table (PMT), and a Program Clock Reference (PCR). The PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as zero. The PMT stores the PIDs of the streams of video, audio, subtitles, and the like included in the multiplexed data, together with attribute information of the streams corresponding to those PIDs. The PMT also has various descriptors relating to the multiplexed data; the descriptors carry information such as copy control information indicating whether copying of the multiplexed data is permitted or not. The PCR stores STC time information corresponding to the ATS at which the PCR packet is transferred to the decoder, in order to achieve synchronization between the Arrival Time Clock (ATC), which is the time axis of the ATSs, and the System Time Clock (STC), which is the time axis of the PTSs and DTSs.

FIG. 27 shows the data structure of the PMT in detail. A PMT header is placed at the top of the PMT, describing the length of the data included in the PMT and the like. A plurality of descriptors relating to the multiplexed data is placed after the PMT header; information such as the copy control information is described in the descriptors. After the descriptors, a plurality of pieces of stream information relating to the streams included in the multiplexed data is placed. Each piece of stream information includes stream descriptors describing information such as a stream type for identifying the compression codec of the stream, a stream PID, and stream attribute information (such as a frame rate or an aspect ratio). The number of stream descriptors equals the number of streams in the multiplexed data.

When the multiplexed data is recorded on the recording medium or the like, the multiplexed data information file is also recorded.

Each multiplexed data information file is management information of the multiplexed data, as shown in FIG. 28. The multiplexed data information files are in one-to-one correspondence with the multiplexed data, and each file includes the multiplexed data information, stream attribute information, and an entry map.

As shown in Fig. 28, the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time. The system rate indicates the maximum transfer rate at which a system target decoder, described below, transfers the multiplexed data to a PID filter. The intervals of the ATSs included in the multiplexed data are set to be no higher than the system rate. The reproduction start time is the PTS of the video frame at the head of the multiplexed data. An interval of one frame is added to the PTS of the video frame at the end of the multiplexed data, and the resulting PTS is set as the reproduction end time.

As shown in Fig. 29, a piece of attribute information is registered in the stream attribute information for each PID of each stream included in the multiplexed data. Each piece of attribute information carries different information depending on whether the corresponding stream is a video stream, an audio stream, a presentation graphics stream, or an interactive graphics stream. Each piece of video stream attribute information carries information such as the type of compression codec used to compress the video stream, and the resolution, aspect ratio, and frame rate of the picture data included in the video stream. Each piece of audio stream attribute information carries information such as the type of compression codec used to compress the audio stream, the number of channels included in the audio stream, the language supported by the audio stream, and how high the sampling frequency is. The video stream attribute information and the audio stream attribute information are used to initialize a decoder before the player plays back the information.

In the present embodiment, the stream type included in the PMT is used among the pieces of information included in the multiplexed data. Furthermore, when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. More specifically, the moving picture coding method or the moving picture coding apparatus described in each embodiment includes a step or a unit for assigning, to the stream type included in the PMT or to the video stream attribute information, unique information indicating that the video data was generated by the moving picture coding method or the moving picture coding apparatus in each embodiment. With this configuration, the video data generated by the moving picture coding method or the moving picture coding apparatus described in each embodiment can be distinguished from video data conforming to other standards.

FIG. 30 shows the steps of the moving picture decoding method according to the present embodiment. In step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is obtained from the multiplexed data. Next, in step exS101, it is determined whether the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture coding method or the moving picture coding apparatus of each embodiment. When it is determined that the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture coding method or the moving picture coding apparatus of each embodiment, decoding is performed by the moving picture decoding method of each embodiment in step exS102. Furthermore, when the stream type or the video stream attribute information indicates conformance with a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1, decoding is performed by a moving picture decoding method conforming to that conventional standard in step exS103.
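The dispatch performed by steps exS100 to exS103 can be sketched as follows; indicates_embodiment, decode_with_embodiment_method, and decode_with_conventional_method are hypothetical helpers standing in for the identification check and the two decoders.

```python
def decode_multiplexed(video_stream, stream_type, attr_info):
    """Sketch of steps exS100 to exS103: pick the decoder based on the
    stream type / video stream attribute information."""
    # exS100: stream_type / attr_info were obtained from the PMT or the
    # multiplexed data information beforehand.
    if indicates_embodiment(stream_type, attr_info):         # exS101
        return decode_with_embodiment_method(video_stream)   # exS102
    return decode_with_conventional_method(video_stream)     # exS103: MPEG-2 / AVC / VC-1
```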

In this way, by assigning a new unique value to the stream type or the video stream attribute information, it is possible to determine whether or not the moving picture decoding method or the moving picture decoding apparatus described in each embodiment can perform decoding. Even when multiplexed data conforming to different standards is input, an appropriate decoding method or apparatus can be selected. Therefore, the information can be decoded without error. In addition, the video encoding method or apparatus, or the video decoding method or apparatus of this embodiment can be used in the above-described apparatus and system.

(Example C)

In each embodiment, the video encoding method, the video encoding apparatus, the video decoding method, and the video decoding apparatus are typically achieved in the form of an integrated circuit or a large scale integrated (LSI) circuit. As an example of the LSI, FIG. 31 shows the configuration of an LSI ex500 made into one chip. The LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509, described below, and the elements are connected to each other through a bus ex510. When the power is turned on, the power supply circuit unit ex505 is activated and supplies power to each of the elements.

For example, when encoding is performed, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, or the like through the AV IO ex509 under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, and the driving frequency control unit ex512. The received AV signal is temporarily stored in an external memory ex511, such as an SDRAM. Under the control of the control unit ex501, the stored data is segmented into data portions according to the throughput and the processing speed, and is transmitted to the signal processing unit ex507. The signal processing unit ex507 then encodes the audio signal and/or the video signal. Here, the encoding of the video signal is the encoding described in each embodiment. Furthermore, the signal processing unit ex507 sometimes multiplexes the encoded audio data and the encoded video data, and the stream IO ex506 provides the multiplexed data to the outside. The provided multiplexed data is transmitted to the base station ex107 or recorded on the recording medium ex215. When data sets are multiplexed, the data should be temporarily stored in the buffer ex508 so that the data sets are synchronized with each other.
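As a rough illustration of this control flow only (not of any real LSI firmware), the encoding path might be sketched as below; every type and function is a hypothetical stand-in for a hardware block of the LSI ex500.

```cpp
#include <vector>

// Hypothetical segment of the AV signal held in the external memory
// (SDRAM ex511), already divided according to throughput and speed.
struct Segment { std::vector<int> audio, video; };

std::vector<int> encodeAudio(const std::vector<int>& pcm) {
    return pcm;       // placeholder for ex507 audio coding
}
std::vector<int> encodeVideo(const std::vector<int>& pixels) {
    return pixels;    // placeholder for the embodiments' video coding
}

void encodeOnLsi(const std::vector<Segment>& externalMemory) {
    std::vector<int> buffer;                    // stands in for buffer ex508
    for (const Segment& s : externalMemory) {
        auto a = encodeAudio(s.audio);          // signal processing unit ex507
        auto v = encodeVideo(s.video);
        // Multiplex audio and video; buffering keeps the sets synchronized.
        buffer.insert(buffer.end(), a.begin(), a.end());
        buffer.insert(buffer.end(), v.begin(), v.end());
    }
    // Stream IO ex506 would now provide `buffer` to the outside
    // (base station ex107 or recording medium ex215).
}
```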

Although the memory ex511 is described as a device external to the LSI ex500, it may be included in the LSI ex500. The buffer ex508 is not limited to one buffer and may be composed of a plurality of buffers. Furthermore, the LSI ex500 may be made into one chip or a plurality of chips.

Furthermore, although the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, and the driving frequency control unit ex512, the configuration of the control unit ex501 is not limited to this. For example, the signal processing unit ex507 may further include a CPU; including another CPU in the signal processing unit ex507 can improve the processing speed. As another example, the CPU ex502 may serve as, or be a part of, the signal processing unit ex507, and may include, for example, an audio signal processing unit. In such a case, the control unit ex501 includes the signal processing unit ex507, or the CPU ex502 including a part of the signal processing unit ex507.

The name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.

Moreover, the way of achieving integration is not limited to the LSI, and a dedicated circuit or a general purpose processor can also achieve the integration. A field programmable gate array (FPGA) that can be programmed after manufacturing the LSI, or a reconfigurable processor that allows reconfiguration of the connections and the configuration of the LSI, can be used for the same purpose.

In the future, with advances in semiconductor technology, a brand-new technology may replace LSI. The functional blocks could then be integrated using such a technology. One possibility is that the present invention is applied to biotechnology.

(Example D)

When video data generated by the video encoding method or the video encoding apparatus described in each embodiment is decoded, the throughput will probably be larger than when video data conforming to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1 is decoded. Therefore, the LSI ex500 needs to be set to a driving frequency higher than that of the CPU ex502 used when video data conforming to the conventional standard is decoded. However, setting the driving frequency higher raises the problem of increased power consumption.

To solve this problem, moving picture decoding apparatuses such as the television ex300 and the LSI ex500 are configured to determine which standard the video data conforms to and to switch between the driving frequencies according to the determined standard. FIG. 32 shows the configuration ex800 of this embodiment. The driving frequency switching unit ex803 sets the driving frequency higher when the video data was generated by the moving picture coding method or the moving picture coding apparatus described in each embodiment. The driving frequency switching unit ex803 then instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each embodiment, to decode the video data. When the video data conforms to a conventional standard, the driving frequency switching unit ex803 sets the driving frequency lower than when the video data was generated by the moving picture coding method or apparatus described in each embodiment, and instructs the decoding processing unit ex802, which conforms to the conventional standard, to decode the video data.

More specifically, the driving frequency switching unit ex803 includes the CPU ex502 and the driving frequency control unit ex512 shown in FIG. 31. The decoding processing unit ex801 that executes the moving picture decoding method described in each embodiment and the decoding processing unit ex802 that conforms to the conventional standard each correspond to the signal processing unit ex507 in FIG. 31. The CPU ex502 determines which standard the video data conforms to. The driving frequency control unit ex512 determines the driving frequency based on a signal from the CPU ex502, and the signal processing unit ex507 decodes the video data based on a signal from the CPU ex502. For example, the identification information described in Example B can be used to identify the video data. The identification information is not limited to that described in Example B; it may be any information, as long as the information indicates which standard the video data conforms to. For example, when which standard the video data conforms to can be determined based on an external signal indicating whether the video data is used for a television, a disc, or the like, the determination may be made based on such an external signal. Furthermore, the CPU ex502 selects the driving frequency based on, for example, a lookup table in which the standards of the video data are associated with the driving frequencies, as shown in FIG. 34. The lookup table may be stored in the buffer ex508 or in an internal memory of the LSI, and the driving frequency may be selected by the CPU ex502 referring to the lookup table.
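A minimal sketch of such a lookup follows; the table contents are placeholders invented for illustration and do not reproduce the frequencies of FIG. 34.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Hypothetical lookup table mapping the standard of the video data
// to a driving frequency in MHz (values are placeholders only).
const std::unordered_map<std::string, std::uint32_t> kDriveFreqMHz = {
    {"embodiment-codec", 500},  // higher frequency for the new method
    {"MPEG-2",           350},
    {"MPEG-4 AVC",       350},
    {"VC-1",             350},
};

// CPU ex502 side: pick the frequency for the identified standard,
// falling back to the highest entry when the standard is unknown.
std::uint32_t selectDriveFrequency(const std::string& standard) {
    auto it = kDriveFreqMHz.find(standard);
    return it != kDriveFreqMHz.end() ? it->second : 500;
}
```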

FIG. 33 shows the steps for executing the method of the present embodiment. First, in step exS200, the signal processing unit ex507 obtains the identification information from the multiplexed data. Next, in step exS201, the CPU ex502 determines, based on the identification information, whether the video data was generated by the encoding method and the encoding apparatus described in each embodiment. When the video data was generated by the moving picture coding method and the moving picture coding apparatus described in each embodiment, the CPU ex502 transmits a signal for setting the driving frequency higher to the driving frequency control unit ex512 in step exS202, and the driving frequency control unit ex512 sets the higher driving frequency. On the other hand, when the identification information indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1, the CPU ex502 transmits a signal for setting the driving frequency lower to the driving frequency control unit ex512 in step exS203, and the driving frequency control unit ex512 sets the driving frequency lower than when the video data was generated by the moving picture coding method and apparatus described in each embodiment.

Furthermore, in conjunction with the switching of the driving frequencies, the power conservation effect can be improved by changing the voltage applied to the LSI ex500 or to a device including the LSI ex500. For example, when the driving frequency is set lower, the voltage applied to the LSI ex500 or to the device including the LSI ex500 is probably set lower than when the driving frequency is set higher.

Furthermore, the driving frequency may be set by any method in which the driving frequency is set higher when the throughput for decoding is larger and lower when the throughput for decoding is smaller; the setting method is not limited to the one described above. For example, when the throughput for decoding video data conforming to MPEG-4 AVC is larger than the throughput for decoding video data generated by the moving picture coding method and apparatus described in each embodiment, the driving frequency is probably set in the reverse order to the setting described above.

Furthermore, setting the driving frequency lower is not the only method for reducing power consumption. For example, when the identification information indicates that the video data was generated by the moving picture coding method and apparatus described in each embodiment, the voltage applied to the LSI ex500 or to the device including the LSI ex500 is probably set higher. When the identification information indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1, the voltage applied to the LSI ex500 or to the device including the LSI ex500 is probably set lower. As another example, when the identification information indicates that the video data was generated by the moving picture coding method and apparatus described in each embodiment, the driving of the CPU ex502 probably does not have to be suspended. When the identification information indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1, the driving of the CPU ex502 is probably suspended at a given time, because the CPU ex502 has extra processing capacity. Even when the identification information indicates that the video data was generated by the moving picture coding method and apparatus described in each embodiment, the driving of the CPU ex502 can probably be suspended at a given time when the CPU ex502 has extra processing capacity. In such a case, the suspension time is probably set shorter than when the identification information indicates conformance with a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1.
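The voltage and CPU-suspension alternatives just described can be combined into a single policy, sketched below with invented names and placeholder numbers.

```cpp
#include <string>

// Hypothetical power plan derived from the identification information
// (all values are placeholders, not taken from the embodiments).
struct PowerPlan {
    bool     raiseVoltage;   // drive the LSI ex500 at a higher voltage
    bool     allowCpuPause;  // the CPU ex502 may be suspended
    unsigned pauseMillis;    // illustrative suspension length
};

PowerPlan planFor(const std::string& standard, bool cpuHasSlack) {
    if (standard == "embodiment-codec") {
        // New method: higher voltage; suspend only if capacity remains,
        // and then for a shorter time.
        return {true, cpuHasSlack, cpuHasSlack ? 5u : 0u};
    }
    // Conventional standards: lower voltage, longer suspension allowed.
    return {false, true, 20u};
}
```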

Accordingly, the power conservation effect can be improved by switching between the driving frequencies in accordance with the standard the video data conforms to. Furthermore, when the LSI ex500 or a device including the LSI ex500 is driven using a battery, the battery life can be extended with the power conservation effect.

(Example E)

There are cases where a plurality of video data conforming to different standards is provided to devices and systems such as televisions and mobile phones. In order to enable decoding of the plurality of video data conforming to the different standards, the signal processing unit ex507 of the LSI ex500 needs to conform to the different standards. However, the individual use of signal processing units ex507 that each conform to one of the standards raises the problems of an increase in the circuit scale of the LSI ex500 and an increase in cost.

In order to solve this problem, a configuration is conceived in which the decoding processing unit for executing the moving picture decoding method described in each embodiment and the decoding processing unit conforming to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1 are partially shared. Ex900 in FIG. 35A shows an example of this configuration. For example, the moving picture decoding method described in each embodiment and the moving picture decoding method conforming to MPEG-4 AVC have processing details such as entropy coding, inverse quantization, deblocking filtering, and motion compensated prediction partly in common. For the processing details that are shared, a decoding processing unit ex902 conforming to MPEG-4 AVC may be used, while a dedicated decoding processing unit ex901 is used for the other processing unique to the configuration of the present invention. Since the configuration of the present invention is characterized by inverse quantization in particular, for example, the dedicated decoding processing unit ex901 is used for inverse quantization. Otherwise, the decoding processing unit is probably shared for one of entropy decoding, deblocking filtering, and motion compensation, or for all of the processing. The decoding processing unit for executing the moving picture decoding method described in each embodiment may be shared for the processing to be shared, and a dedicated decoding processing unit may be used for the processing unique to MPEG-4 AVC.
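The sharing idea of ex900 can be pictured as composition: a shared unit serves both decoding paths, and a dedicated unit is used only where the present configuration differs. The class names in the following sketch are illustrative only.

```cpp
// Shared processing (entropy decoding, deblocking filtering, motion
// compensation) reused by both decoders, as in unit ex902.
class SharedDecodingUnit {
public:
    void entropyDecode()    { /* common entropy decoding */ }
    void deblock()          { /* common deblocking filtering */ }
    void motionCompensate() { /* common motion compensation */ }
};

// Dedicated unit for the processing unique to the present
// configuration, here inverse quantization, as in unit ex901.
class DedicatedInverseQuantUnit {
public:
    void inverseQuantize() { /* method-specific inverse quantization */ }
};

// Decoder for the embodiments: shares the ex902-style processing and
// adds only the dedicated part.
class EmbodimentDecoder {
    SharedDecodingUnit        shared_;    // reused circuitry
    DedicatedInverseQuantUnit dedicated_; // unique circuitry
public:
    void decodeBlock() {
        shared_.entropyDecode();
        dedicated_.inverseQuantize();     // the configuration-specific step
        shared_.motionCompensate();
        shared_.deblock();
    }
};
```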

Furthermore, ex1000 in FIG. 35B shows another example in which the processing is partially shared. This example uses a configuration including a dedicated decoding processing unit ex1001 that supports the processing unique to the configuration of the present invention, a dedicated decoding processing unit ex1002 that supports the processing unique to another conventional standard, and a decoding processing unit ex1003 that supports the processing shared between the moving picture decoding method according to the configuration of the present invention and the conventional moving picture decoding method. Here, the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing of the configuration of the present invention and the processing of the conventional standard, respectively, and may be units capable of executing general processing. Furthermore, the configuration of this embodiment can be implemented by the LSI ex500.

As such, the circuit scale of the LSI can be reduced and the cost can be lowered by sharing the decoding processing unit for the processing shared between the moving picture decoding method according to the configuration of the present invention and the moving picture decoding method conforming to the conventional standard.

In summary, the present invention relates to deblocking filtering, which can smooth block boundaries in image or video encoding and decoding. In particular, the present invention relates to filtering pulse code modulation (PCM) coded blocks of samples. Accordingly, a separate indicator that enables or disables deblocking filtering of the PCM coded block, and a separate indicator that enables or disables a second filtering, are inserted into the coded bitstream, so that deblocking filtering and another type of filtering, such as adaptive loop filtering or adaptive sample offset, can be switched on or off individually.
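As a minimal sketch of the two separate indicators, the following toy example writes and reads two independently coded flags; a real codec carries these as entropy-coded syntax elements at the positions described above (e.g., sequence parameter set, slice header, or block level), which this sketch does not reproduce.

```cpp
#include <cstddef>
#include <vector>

// Toy one-bit-per-flag bitstream writer/reader; real codecs use
// entropy-coded syntax elements, so this is illustrative only.
struct Bitstream {
    std::vector<bool> bits;
    std::size_t pos = 0;
    void writeFlag(bool b) { bits.push_back(b); }
    bool readFlag()        { return bits.at(pos++); }
};

// Encoder side: signal, for a PCM coded block, whether deblocking
// and a second filter (e.g. ALF or SAO) apply -- independently.
void writePcmFilterIndicators(Bitstream& bs,
                              bool applyDeblocking,
                              bool applySecondFilter) {
    bs.writeFlag(applyDeblocking);    // deblocking filter indicator
    bs.writeFlag(applySecondFilter);  // separate second-filter indicator
}

// Decoder side: extract both indicators and filter accordingly.
void readPcmFilterIndicators(Bitstream& bs,
                             bool& applyDeblocking,
                             bool& applySecondFilter) {
    applyDeblocking   = bs.readFlag();
    applySecondFilter = bs.readFlag();
}
```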

Claims (18)

A method of encoding a block of samples of an image of a video signal into a bitstream by pulse code modulation (PCM), the method comprising:
Determining whether a deblocking filter is applied to the block of samples;
Determining whether a second filter different from the deblocking filter is applied to the block of samples;
Including a deblocking filter indicator in the bitstream indicating a result of determining whether a deblocking filter is applied; And
Including a second filter indicator different from the deblocking filter indicator in the bitstream indicating a result of determining whether a second filter is applied,
Wherein the determining of whether the second filter is applied comprises:
Determining an amount of quantization noise in a PCM coded block of samples; And
Based on the determined amount of quantization noise, determining whether a second filter is applied.
The method according to claim 1,
Wherein the second filter is an adaptive loop filter or an adaptive sample offset (SAO).
The method according to claim 1,
Wherein the deblocking filter indicator is included in the bitstream within a sequence parameter set.
The method according to claim 1,
Wherein the second filter indicator is included in the bitstream on a block-by-block basis.
delete
The method according to claim 1, further comprising:
Determining whether an adaptive sample offset (SAO) is applied to the block of samples; And
Including a SAO indicator in the bitstream indicating a result of determining whether SAO is applied.
The method according to claim 1,
Wherein the determining of whether the second filter is applied to the block of samples comprises:
Determining whether the second filter is applied to a sample of the block that may be changed by deblocking filtering;
Including a change sample indicator in the bitstream indicating a result of determining whether the second filter is applied to a changed sample;
Determining whether the second filter is applied to a sample of the block that cannot be changed by the deblocking filtering; And
And including an unchanged sample indicator in the bitstream indicating a result of determining whether the second filter is to be applied to an unchanged sample.
The method according to claim 1,
Wherein the determining of whether the second filter is to be applied to the block of samples is performed based on a result of the determining of whether the deblocking filter is to be applied to the block of samples.
The method according to claim 1,
Wherein the determining of whether the deblocking filter is applied to the PCM coded block of samples comprises:
Determining whether a block adjacent to the block of samples is encoded using the pulse code modulation or by predictive coding; And
When the adjacent block is encoded by predictive coding, determining that a deblocking filter is applied to the block of samples.
The method according to claim 1,
Wherein the determining of whether the deblocking filter is applied to the block of samples is performed based on comparing a quantization error of a block adjacent to the block of samples with a predetermined threshold.
A method of decoding, from a bitstream, a block of samples of an image of a video signal encoded by pulse code modulation (PCM), the method comprising:
Extracting a deblocking filter indicator from the bitstream indicating whether a deblocking filter is applied to the block of samples;
Extracting from the bitstream a second filter indicator separate from the deblocking filter indicator, indicating whether a second filter is applied to the block of samples;
Applying or not applying the deblocking filter to the block of samples in accordance with the extracted deblocking filter indicator; And
Applying or not applying the second filter to the block of samples in accordance with the extracted second filter indicator,
Wherein the second filter indicator is generated by:
Determining an amount of quantization noise in the PCM coded block of samples; And
Determining, based on the determined amount of quantization noise, whether the second filter is applied.
The method according to claim 11,
Wherein either the second filter indicator indicates whether both an adaptive loop filter and an adaptive sample offset (SAO) apply to the block of samples, or
Two separate indicators are extracted from the bitstream, one indicating whether an adaptive loop filter is applied to the block of samples and the other indicating whether the SAO is applied to the block of samples, and
Adaptive loop filtering and the SAO are applied or not applied to the block of samples in accordance with the extracted indicator(s).
The method according to claim 11,
Wherein the second filter indicator comprises:
A changed sample indicator indicating whether the second filter is applied to samples of the block that can be changed by the deblocking filtering; and/or
An unchanged sample indicator indicating whether the second filter is applied to samples of the block that cannot be changed by the deblocking filtering,
And the second filter is applied or not applied to the changeable and unchangeable samples of the block, respectively, in accordance with the extracted changed sample indicator and unchanged sample indicator.
The method according to claim 1,
Wherein the deblocking filter indicator and/or the second filter indicator is inserted in an image slice header or in block information.
A computer readable storage medium having computer readable program code embodied thereon, the program code being configured to perform the method according to any one of claims 1 to 4 and 6 to 14.
An apparatus for encoding a block of samples of an image of a video signal into a bitstream by pulse code modulation (PCM), the apparatus comprising:
A deblocking determining unit that determines whether a deblocking filter is applied to the block of samples;
A second judging unit for judging whether a second filter different from the deblocking filter is applied to the block of samples; And
An insertion unit that includes, in the bitstream, a deblocking filter indicator indicating a result of the determination of whether the deblocking filter is applied, and a second filter indicator, different from the deblocking filter indicator, indicating a result of the determination of whether the second filter is applied,
Wherein the second determination unit determines an amount of quantization noise in the PCM coded block of samples and determines, based on the determined amount of quantization noise, whether the second filter is applied.
An apparatus for decoding, from a bitstream, a block of samples of an image of a video signal encoded by pulse code modulation (PCM), the apparatus comprising:
An extraction unit that extracts, from the bitstream, a deblocking filter indicator indicating whether a deblocking filter is applied to the block of samples, and a second filter indicator, separate from the deblocking filter indicator, indicating whether a second filter is applied to the block of samples;
A deblocking filtering unit configured to apply or not apply the deblocking filter to the block of samples in accordance with the extracted deblocking filter indicator; And
A second filtering unit configured to apply or not apply the second filter to the block of samples in accordance with the extracted second filter indicator,
Wherein the second filter indicator is generated by determining an amount of quantization noise in the PCM coded block of samples and determining, based on the determined amount of quantization noise, whether the second filter is to be applied.
An integrated circuit implementing the apparatus of claim 16 or 17, further comprising a memory, wherein the memory is a vertical and/or horizontal line memory that stores pixels to be filtered.
KR1020147000323A 2011-11-03 2012-11-02 Filtering of blocks coded in the pulse code modulation mode KR102007050B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201161555193P 2011-11-03 2011-11-03
US61/555,193 2011-11-03
PCT/EP2012/071756 WO2013064661A1 (en) 2011-11-03 2012-11-02 Filtering of blocks coded in the pulse code modulation mode

Publications (2)

Publication Number Publication Date
KR20140094496A (en) 2014-07-30
KR102007050B1 (en) 2019-10-01

Family

ID=47178630

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020147000323A KR102007050B1 (en) 2011-11-03 2012-11-02 Filtering of blocks coded in the pulse code modulation mode

Country Status (3)

Country Link
KR (1) KR102007050B1 (en)
TW (1) TWI577191B (en)
WO (1) WO2013064661A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201119206D0 (en) 2011-11-07 2011-12-21 Canon Kk Method and device for providing compensation offsets for a set of reconstructed samples of an image
CN104303505A (en) * 2012-06-26 2015-01-21 日本电气株式会社 Video encoding device, video decoding device, video encoding method, video decoding method, and program
US10382754B2 (en) 2014-04-29 2019-08-13 Microsoft Technology Licensing, Llc Encoder-side decisions for sample adaptive offset filtering
US9747673B2 (en) 2014-11-05 2017-08-29 Dolby Laboratories Licensing Corporation Systems and methods for rectifying image artifacts
WO2016145240A1 (en) * 2015-03-10 2016-09-15 Apple Inc. Video encoding optimization of extended spaces including last stage processes
WO2016204531A1 (en) * 2015-06-16 2016-12-22 엘지전자(주) Method and device for performing adaptive filtering according to block boundary
GB2582029A (en) * 2019-03-08 2020-09-09 Canon Kk An adaptive loop filter
WO2021006651A1 (en) 2019-07-09 2021-01-14 엘지전자 주식회사 Method for coding image on basis of deblocking filtering, and apparatus therefor
CN113411584A (en) * 2020-03-17 2021-09-17 北京三星通信技术研究有限公司 Video coding and decoding method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3284932B2 (en) * 1997-08-05 2002-05-27 松下電器産業株式会社 Image processing device
KR100399932B1 (en) * 2001-05-07 2003-09-29 주식회사 하이닉스반도체 Video frame compression/decompression hardware system for reducing amount of memory
DE102004059993B4 (en) * 2004-10-15 2006-08-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a coded video sequence using interlayer motion data prediction, and computer program and computer readable medium
US20080219582A1 (en) * 2005-08-29 2008-09-11 Koninklijke Philips Electronics, N.V. Apparatus for Filtering an Image Obtained by Block Based Image Decompression
EP2141927A1 (en) * 2008-07-03 2010-01-06 Panasonic Corporation Filters for video coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008057308A2 (en) * 2006-11-08 2008-05-15 Thomson Licensing Methods and apparatus for in-loop de-artifact filtering
US20080267297A1 (en) 2007-04-26 2008-10-30 Polycom, Inc. De-blocking filter arrangements
EP2375747A1 (en) 2010-04-12 2011-10-12 Panasonic Corporation Filter Positioning and Selection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Osman G. Sezer et al., "Subjective Tests on ALF and SAO", JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 6th Meeting: Torino, IT, 14-22 July, 2011, JCTVC-F320*

Also Published As

Publication number Publication date
TW201325242A (en) 2013-06-16
WO2013064661A1 (en) 2013-05-10
KR20140094496A (en) 2014-07-30
TWI577191B (en) 2017-04-01

Similar Documents

Publication Publication Date Title
JP7246008B2 (en) decoder and encoder
JP2023166602A (en) Decoding method and decoder
EP2774362B1 (en) Quantization parameter for blocks coded in the pcm mode
RU2598799C2 (en) Image encoding method, image decoding method, image encoder, image decoder and apparatus for encoding/decoding images
KR102007050B1 (en) Filtering of blocks coded in the pulse code modulation mode
WO2012169184A1 (en) Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
KR101863397B1 (en) Efficient decisions for deblocking
KR20140098740A (en) Deblocking filtering with modified image block boundary strength derivation
EP2559247A2 (en) Filter positioning and selection
KR20130051950A (en) Filtering mode for intra prediction inferred from statistics of surrounding blocks
WO2012175196A1 (en) Deblocking control by individual quantization parameters
WO2011134642A1 (en) Predictive coding with block shapes derived from a prediction error
WO2012013327A1 (en) Quantized prediction error signal for three-input-wiener-filter

Legal Events

Date Code Title Description
N231 Notification of change of applicant
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant