KR20130084053A - Sample adaptive offset(sao) edge offset prediction simplification

Sample adaptive offset(sao) edge offset prediction simplification

Info

Publication number
KR20130084053A
KR20130084053A KR1020120004756A
Authority
KR
South Korea
Prior art keywords
offset
category
prediction
current
region
Prior art date
Application number
KR1020120004756A
Other languages
Korean (ko)
Inventor
이배근
권재철
Original Assignee
주식회사 케이티
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 케이티
Priority to KR1020120004756A
Publication of KR20130084053A


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In Sample Adaptive Offset (SAO), an offset value must be encoded for each saoTypeIdx. To reduce this cost, methods that exploit neighboring values have been proposed. In JCTVC-G222, when the left region and the current region have the same Edge Offset (EO) depth, categories 1 and 2 use the edge offset of the left region as the prediction of the edge offset of the current region, while categories 3 and 4 use the sign-inverted offset values of categories 2 and 1, respectively. The present invention proposes a method of predicting from the left region regardless of the category.


Description

Edge offset prediction simplification method in Sample Adaptive Offset (SAO)

The present invention relates to an edge offset prediction simplification method in a sample adaptive offset (SAO).

In Sample Adaptive Offset (SAO), an offset value must be encoded for each saoTypeIdx. To reduce this cost, methods that exploit neighboring values have been proposed. In JCTVC-G222, when the left region and the current region have the same Edge Offset (EO) depth, categories 1 and 2 use the edge offset of the left region as the prediction of the edge offset of the current region, while categories 3 and 4 use the sign-inverted offset values of categories 2 and 1, respectively. The present invention proposes a method of predicting from the left region regardless of the category.

The present invention provides a method for edge offset prediction simplification in Sample Adaptive Offset (SAO).

According to an embodiment of the present invention, an edge offset prediction simplification method in a sample adaptive offset (SAO) is provided.

According to the present invention, edge offset prediction in Sample Adaptive Offset (SAO) can be simplified.

Figure 1 shows the edge offset categories of HM5.

I. SAO edge offset prediction [JCTVC-G222]

Edge offsets must be encoded for each edge offset region, which incurs a large bit overhead. When the depth of the current edge offset region equals the depth of the left region, the two regions are assumed to have similar statistics, and the SAO edge offset value is predicted from the left region.

The process is as follows.

If the edge offset depths of the left region and the current region are the same, calculate the following and encode Δ (a Python sketch follows the list):

1. Category 1 & Category 2: Offset_curr = Offset_left + Δ

2. Category 3: Offset_curr = -Offset_category2 + Δ

3. Category 4: Offset_curr = -Offset_category1 + Δ
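
As a minimal illustration (not part of the patent text; the names, data layout, and example numbers are assumed), the following Python sketch computes the deltas this rule would encode. Taking the category 2 and category 1 predictors from the current region's already-coded offsets, rather than from the left region, is an assumption based on the coding order:

    # Hypothetical sketch of the JCTVC-G222 prediction rule above.
    # Offsets are dicts mapping EO category (1..4) to an integer offset value.

    def g222_deltas(left, curr):
        """Deltas to encode when the left and current regions share an EO depth."""
        return {
            1: curr[1] - left[1],   # category 1: predicted from the left region
            2: curr[2] - left[2],   # category 2: predicted from the left region
            3: curr[3] + curr[2],   # category 3: predictor is -Offset_category2
            4: curr[4] + curr[1],   # category 4: predictor is -Offset_category1
        }

    left = {1: 3, 2: 1, 3: -1, 4: -3}
    curr = {1: 2, 2: 1, 3: -1, 4: -2}
    print(g222_deltas(left, curr))   # {1: -1, 2: 0, 3: 0, 4: 0}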

II. Proposed method

1. SAO edge offset prediction simplification

In the existing proposal (JCTVC-G680), category 3 and category 4 use the sign-inverted offset values of category 2 and category 1, respectively, as predictions. The present invention instead predicts from a correlated neighboring region regardless of the category.

Method 1.

1. If the edge offset depths of the left region and the current region are the same, calculate the following and encode Δ:

Offset_curr = Offset_left + Δ

where Offset_curr is the edge offset of the current region and Offset_left is the edge offset of the left region.

Method 2.

1. If the edge offset depths of the left region and the current region are the same, calculate the following and encode Δ:

Offset_curr = Offset_left + Δ

where Offset_curr is the edge offset of the current region and Offset_left is the edge offset of the left region.

2. If the edge offset depths of the left region and the current region are not the same, but the upper region and the current region have the same edge offset depth, calculate the following and encode Δ (a sketch of both methods follows this list):

Offset_curr = Offset_upper + Δ

where Offset_curr is the edge offset of the current region and Offset_upper is the edge offset of the upper region.
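
A minimal sketch of the proposed prediction for one EO category (Method 2; Method 1 is the first branch alone). The names are illustrative, and the fallback when neither neighbor's depth matches is an assumption, since the document does not specify it. Unlike the rule of section I, the same prediction applies to every category:

    def proposed_delta(curr_offset, curr_depth,
                       left_offset, left_depth,
                       upper_offset, upper_depth):
        """Return (delta, predictor) for one EO category of the current region."""
        if left_depth == curr_depth:       # step 1: predict from the left region
            return curr_offset - left_offset, "left"
        if upper_depth == curr_depth:      # step 2: predict from the upper region
            return curr_offset - upper_offset, "upper"
        return curr_offset, None           # assumed: encode the offset directly

    print(proposed_delta(4, 2, 3, 2, 5, 1))   # (1, 'left')
    print(proposed_delta(4, 2, 3, 1, 5, 2))   # (-1, 'upper')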

III. Encoding Process

1. A new coding unit (CU) of the current frame is input.

A. One Inter CU consists of several Inter PUs and supports two prediction modes: MODE_SKIP and MODE_INTER. In the case of MODE_SKIP, the motion information of a PU whose partition mode (PartMode) is PART_2Nx2N is allocated.

B. In the case of a MODE_INTER CU, four types of PU partition may exist; PredMode == MODE_INTER and PartMode (PART_2Nx2N, PART_2NxN, PART_Nx2N, or PART_NxN) are signaled in the CU-level syntax, as sketched below.
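
As an illustration of this signaling (a sketch, not the actual CU syntax), the allowed partition modes per prediction mode can be expressed as follows; the identifiers mirror those named above:

    from enum import Enum

    class PredMode(Enum):
        MODE_SKIP = 0
        MODE_INTER = 1

    class PartMode(Enum):
        PART_2Nx2N = 0
        PART_2NxN = 1
        PART_Nx2N = 2
        PART_NxN = 3

    def allowed_part_modes(pred_mode):
        """MODE_SKIP implies a single 2Nx2N PU; MODE_INTER may signal four types."""
        if pred_mode is PredMode.MODE_SKIP:
            return [PartMode.PART_2Nx2N]
        return list(PartMode)

    print([m.name for m in allowed_part_modes(PredMode.MODE_SKIP)])  # ['PART_2Nx2N']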

2. Perform motion prediction on the current Inter PU.

A. If a CU is partitioned into multiple PUs, a PU to be currently encoded is input.

B. Perform motion prediction for the current PU using the previous frame, or both the preceding and following frames. Through motion prediction, the motion information {motion vector, reference picture index, prediction direction index} of the current PU is obtained.

3. Obtain the motion vector prediction value (MVP) of the current Inter PU.

A. The motion information of the current PU is not sent as-is; instead, to increase compression efficiency, the difference from prediction values obtained from temporally and spatially neighboring blocks is sent. There are two prediction modes: Merge mode and AMVP mode.

B. Create a Merge candidate list and an AMVP candidate list to find the motion prediction value.

C. Merge mode obtains merge candidates from the motion information of blocks temporally and spatially adjacent to the current PU. If any candidate has the same motion information as the current PU, a flag indicating that Merge mode is used and the index of that candidate are transmitted (see the sketch after step iii).

i. The available temporal motion vector prediction value is obtained using the calculated reference picture index (refIdxLX).

ii. Create a merge candidate list (MergeCandList).

iii. If there is a candidate having the same motion information as the current PU, Merge_Flag = 1 is set, and the index (Merge_Idx) of the candidate is encoded.
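
The following sketch illustrates the Merge decision of step C. The data structures are hypothetical (motion information is modeled as a (motion vector, reference index, direction) tuple) and the list-size limit of five is an assumption:

    def build_merge_cand_list(spatial_cands, temporal_cand, max_cands=5):
        """Collect available (non-None) candidates, spatial first, pruning duplicates."""
        cands = []
        for c in spatial_cands + [temporal_cand]:
            if c is not None and c not in cands and len(cands) < max_cands:
                cands.append(c)
        return cands

    def merge_decision(curr_motion, merge_cand_list):
        """If a candidate matches the current PU's motion info, signal Merge mode."""
        for idx, cand in enumerate(merge_cand_list):
            if cand == curr_motion:
                return {"Merge_Flag": 1, "Merge_Idx": idx}
        return {"Merge_Flag": 0}

    mv = ((2, -1), 0, "L0")   # (motion vector, reference index, direction)
    cands = build_merge_cand_list([mv, ((0, 0), 0, "L0"), None], ((2, -1), 1, "L0"))
    print(merge_decision(mv, cands))   # {'Merge_Flag': 1, 'Merge_Idx': 0}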

D. The AMVP mode obtains AMVP candidates from the motion information of blocks adjacent to the current PU in time and space.

i. The luma motion vector prediction value (mvpLX) is obtained.

1) Spatial Motion Vector Candidate (MVP) is extracted from adjacent PUs.

2) Extract the temporal motion vector candidate of the co-located block with RefIdxLX obtained from the motion estimation process.

3) Create the MVP list (mvpListLX). The priority order of the motion vectors, restricted to the available vectors, is as follows:

A) Left adjacent block (mvLXA)

B) Upper Adjacent Block (mvLXB)

C) Motion Vector of Temporal Co-located Block (mvLXCol)

4) If several motion vectors have the same value, all motion vectors except the highest priority are deleted from the list.

5) Assign the motion vector of the best predictor among the motion candidates in mvpListLX to mvpLX. The best predictor is the candidate block that minimizes the cost function J_MotSAD (a SAD-based motion cost); a sketch follows.
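
A sketch of steps 3) through 5). The real cost J_MotSAD is a SAD computed over the motion-compensated block; here a caller-supplied cost function stands in for it, which is an assumption:

    def build_mvp_list(mvLXA, mvLXB, mvLXCol):
        """Priority order: left, upper, temporal co-located. Unavailable (None)
        vectors are skipped; values equal to a higher-priority entry are pruned."""
        mvp_list = []
        for mv in (mvLXA, mvLXB, mvLXCol):
            if mv is not None and mv not in mvp_list:
                mvp_list.append(mv)
        return mvp_list

    def best_predictor(mvp_list, cost):
        """Step 5): the best predictor minimizes the cost function (J_MotSAD)."""
        return min(mvp_list, key=cost)

    mvp_list = build_mvp_list((1, 0), (1, 0), (2, -1))   # duplicate upper is pruned
    curr_mv = (2, 0)
    # Stand-in cost: MVD magnitude instead of a true SAD (an assumption).
    mvpLX = best_predictor(mvp_list,
                           lambda p: abs(curr_mv[0] - p[0]) + abs(curr_mv[1] - p[1]))
    print(mvp_list, mvpLX)   # [(1, 0), (2, -1)] (1, 0)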

4. Encode the motion information of the current PU.

A. In Merge mode, if there is a candidate with the same motion information as the current PU among the Merge candidates, the current PU is declared to be in Merge mode, and Merge_Flag, indicating that Merge mode is used, and Merge_Idx, indicating which Merge candidate is selected, are sent. After motion compensation, the difference signal (residual signal) between the current PU and the PU predicted in Merge mode is obtained. When there is no residual signal to send, the PU is sent in Merge_SKIP mode.

B. In AMVP mode, the candidate that minimizes the cost function is determined by comparing the motion vector information of the PU currently being encoded with the AMVP candidates. After motion compensation using the best candidate, the residual signal is obtained, and the difference (MVD) between the motion vector of the PU and the best predictor's motion vector is entropy-encoded.

5. Through motion compensation, the residual signal is obtained by computing the difference between the pixel values of the current block and of the prediction block, pixel by pixel.

6. Transform and encode the residual signal.

A. The transform kernel may be 2x2, 4x4, 8x8, 16x16, 32x32, or 64x64, and the set of kernels used for the transform may be restricted in advance.

B. For an n x n block, the transform coefficient block C is calculated as follows (a numpy sketch follows):

C(n,n) = T(n,n) x B(n,n) x T(n,n)^T
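
A numpy sketch of this separable transform, using a floating-point orthonormal DCT-II matrix as a stand-in for the codec's integer kernels (an illustrative assumption):

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis; rows are the basis vectors."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        T = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        T[0, :] /= np.sqrt(2.0)
        return T

    n = 4
    T = dct_matrix(n)
    B = np.arange(n * n, dtype=float).reshape(n, n)   # residual block B(n,n)
    C = T @ B @ T.T                                   # C(n,n) = T x B x T^T
    print(np.round(C, 2))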

C. Quantize the transform coefficients.

7. Rate-distortion optimization (RDO) decides whether to send the residual signal or the transform coefficients (a sketch of the decision follows the list).

A. If the prediction is good, the residual signal can be transmitted without transform coding.

B. Compare the cost functions before and after transform coding and choose whichever minimizes the cost.

C. Signal the type of data (residual or transform coefficients) transmitted for the current block.
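
A minimal sketch of this decision, using a standard Lagrangian rate-distortion cost J = D + λ·R; the document does not give the exact cost function, so the form of J and the λ value are assumptions:

    def rd_cost(distortion, rate_bits, lam=0.85):
        """Illustrative Lagrangian cost J = D + lambda * R."""
        return distortion + lam * rate_bits

    # Hypothetical measurements for the two ways of coding the current block.
    j_residual = rd_cost(distortion=120.0, rate_bits=300)    # send raw residual
    j_transform = rd_cost(distortion=135.0, rate_bits=180)   # send coefficients
    signal_type = "transform_coefficients" if j_transform < j_residual else "residual"
    print(signal_type)   # transform_coefficients (288.0 beats 375.0)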

8. Scan the transform coefficients.

9. Entropy encode the scanned transform coefficients and the Inter prediction mode.

10. The prediction signal and the residual signal are summed to obtain a reconstructed signal, and deblocking filtering is performed on this signal.

11. Apply Sample Adaptive Offset (SAO) to the reconstructed signal that has undergone deblocking filtering.

A. If the left region and the current region have the same edge offset depth, use the edge offset value of the left region as the prediction of the current region; if the upper region and the current region have the same edge offset depth, use the edge offset value of the upper region as the prediction of the current region.

IV. Decoding process

1. Entropy decode the received bitstream.

A) Find the block type from the VLC table and get the prediction mode of the current block.

B) Determine whether the signal transmitted for the current block is a residual signal or transform coefficients.

C) Obtain the residual signal or transform coefficients for the current block.

2. Determine the scan method according to the partition type of Inter Prediction.

3. Inverse scan the entropy decoded residual signal or transform coefficient to generate a two-dimensional block.

A. For residual signals, create residual blocks.

B. In the case of transform coefficients, generate transform blocks.

4. In the case of transform coefficients, inverse quantization and inverse transform are performed to obtain the residual block.

A. B(n,n) = T(n,n)^T x C(n,n) x T(n,n), i.e., the inverse of the forward relation in encoding step 6.B for an orthonormal kernel.

B. The residual signal is obtained through the inverse transform (a round-trip sketch follows).
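
A self-contained numpy round-trip sketch of steps 4.A and 4.B, using an arbitrary orthonormal kernel in place of the codec's integer transform (an assumption):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 4
    T, _ = np.linalg.qr(rng.standard_normal((n, n)))   # any orthonormal kernel
    B = rng.standard_normal((n, n))                    # original residual block
    C = T @ B @ T.T                                    # forward (encoding step 6.B)
    B_rec = T.T @ C @ T                                # inverse (step 4.A above)
    assert np.allclose(B, B_rec)                       # residual block recovered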

5. Perform Inter prediction.

A. For Merge Mode

i. If PredMode == MODE_SKIP && Merge_Flag == 1, the luma motion vector (mvLX) and the reference picture index (refIdxLX) are obtained through Merge mode.

ii. Merge candidates are extracted from adjacent PU partitions to obtain this information.

iii. A reference picture index (refIdxLX) is obtained to find the temporal merge candidate of the current PU.

iv. Using the calculated reference picture index (refIdxLX), the available temporal motion vector prediction value (MVP) is obtained.

v. If the number of candidates (NumMergeCand) in the MergeCandList is '1', Merge_Idx = 1 is set; otherwise, Merge_Idx is set to the received merge index value. The motion vector (mvLX) and reference picture index (refIdxLX) of the candidate indicated by this index value are extracted and used for motion compensation.

B. For AMVP Mode

i. If not in the merge mode, the reference picture index (refIdxLX) of the current PU is extracted.

ii. The luma motion vector prediction value (mvpLX) is obtained using the reference picture index.

1) Spatial Motion Vector Candidate (MVP) is extracted from adjacent PUs.

2) Extract the Temporal MVP of the co-located block indicated by the reference picture index.

3) Create the MVP list (mvpListLX). The priority order of the motion vectors, restricted to the available vectors, is as follows:

A) Left adjacent block (mvLXA)

B) Upper Adjacent Block (mvLXB)

C) Motion Vector of Temporal Co-located Block (mvLXCol)

4) If several motion vectors have the same value, all motion vectors except the highest priority are deleted from the list.

5) If the number of MVP candidates (NumMVPCand(LX)) in mvpListLX is '1', mvpIdx = 0 is set; otherwise (if there is more than one candidate), mvpIdx is set to the received index value.

6) Among the motion candidates in mvpListLX, the motion vector pointed to by mvpIdx is assigned to mvpLX.

7) Calculate the motion vector mvLX (a sketch follows the component equations).

A) mvLX[0] = mvdLX[0] + mvpLX[0] (x direction)

B) mvLX[1] = mvdLX[1] + mvpLX[1] (y direction)
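
A one-line sketch of step 7); the names follow the component equations above:

    def reconstruct_mv(mvdLX, mvpLX):
        """mvLX = mvdLX + mvpLX, per component."""
        return (mvdLX[0] + mvpLX[0],   # x direction
                mvdLX[1] + mvpLX[1])   # y direction

    assert reconstruct_mv((1, -2), (3, 4)) == (4, 2)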

6. The residual signal is added to the signal from the previous frame to generate the reproduction signal.

A. The reproduction signal is generated by adding the motion-compensated prediction signal from the previous frame, obtained using the calculated motion vector, to the decoded residual signal of the current PU.

7. Add the Prediction signal and the residual signal to get the reconstructed signal and perform deblocking filtering on this signal.

8. Apply a Sample Adaptive Offset (SAO) to the reconstructed signal that has undergone deblocking filtering.

A. If the left region and the current region have the same edge offset depth, use the edge offset value of the left region as the prediction of the current region; if the upper region and the current region have the same edge offset depth, use the edge offset value of the upper region as the prediction of the current region.

Claims (1)

Edge offset prediction simplification method in Sample Adaptive Offset (SAO).
KR1020120004756A 2012-01-16 2012-01-16 Sample adaptive offset(sao) edge offset prediction simplification KR20130084053A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120004756A KR20130084053A (en) 2012-01-16 2012-01-16 Sample adaptive offset(sao) edge offset prediction simplification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120004756A KR20130084053A (en) 2012-01-16 2012-01-16 Sample adaptive offset(sao) edge offset prediction simplification

Publications (1)

Publication Number Publication Date
KR20130084053A true KR20130084053A (en) 2013-07-24

Family

ID=48994820

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120004756A KR20130084053A (en) 2012-01-16 2012-01-16 Sample adaptive offset(sao) edge offset prediction simplification

Country Status (1)

Country Link
KR (1) KR20130084053A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10417821B2 (en) 2015-05-07 2019-09-17 Institut Mines Telecom Method of simplifying a geometry model
GB2575119B (en) * 2018-06-29 2021-11-24 Canon Kk Methods and devices for performing sample adaptive offset (SAO) filtering


Similar Documents

Publication Publication Date Title
KR102085183B1 (en) Method and apparatus for encoding and decoding motion information
KR102531738B1 (en) Method for encoding/decoding video and apparatus thereof
EP3606076B1 (en) Encoding and decoding motion information
EP2984838B1 (en) Backward view synthesis prediction
KR20160106025A (en) Apparatus for encoding a moving picture
KR20170058838A (en) Method and apparatus for encoding/decoding of improved inter prediction
KR101984605B1 (en) A method and an apparatus for processing a video signal
EP2988508A1 (en) Methods and apparatuses for encoding and decoding motion vector
US11671584B2 (en) Inter-prediction method and video decoding apparatus using the same
KR20130067280A (en) Decoding method of inter coded moving picture
US11962764B2 (en) Inter-prediction method and video decoding apparatus using the same
KR20130083314A (en) Lcu boundary methods deblocking filtering
KR20130084053A (en) Sample adaptive offset(sao) edge offset prediction simplification
KR20130002221A (en) Method of determining mvp candidate in inter prediction
KR20130084054A (en) Sample adaptive offset (sao) edge offset
KR20130084052A (en) Sample adaptive offset(sao) diagonal edge offset
KR20130039429A (en) Mvd bi-predictive temporal motion vector derivation
KR20230074682A (en) Method for encoding/decoding video and apparatus thereof
KR20130083313A (en) Asymmetric motion partition methods deblocking filtering
KR20130050851A (en) Transform Coding Method Using Subblocks
KR20130039778A (en) How to improve the encoding efficiency of feference IDN

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination