WO2023277659A1 - Image encoding/decoding method, method of transmitting a bitstream, and recording medium storing the bitstream - Google Patents
Image encoding/decoding method, method of transmitting a bitstream, and recording medium storing the bitstream
- Publication number
- WO2023277659A1 (PCT/KR2022/009548)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- samples
- chroma
- collocated
- sao
- reconstructed
- Prior art date
Links
- method (title, claims, abstract, description; 206)
- luma (claims, abstract, description; 211)
- adaptive (claims, abstract, description; 27)
- transmission (abstract, description; 8)
- filtering (description; 124)
- sample (description; 110)
- quantization (description; 28)
- processing (description; 23)
- process (description; 20)
- vector (description; 20)
- diagram (description; 17)
- algorithm (description; 11)
- transformation (description; 9)
- compression (description; 7)
- temporal (description; 7)
- arrays (description; 5)
- testing (description; 5)
- bilateral (description; 4)
- sampling (description; 4)
- partitioning (description; 4)
- rendering (description; 3)
- transfer (description; 3)
- averaging (description; 2)
- adaptation (description; 2)
- conversion (description; 2)
- communication (description; 2)
- design (description; 2)
- effects (description; 2)
- inducing (description; 2)
- reference sample (description; 2)
- signaling (description; 2)
- blocking (description; 1)
- calculation (description; 1)
- derivation (description; 1)
- dual (description; 1)
- function (description; 1)
- peripheral (description; 1)
- rearrangement (description; 1)
- reduction (description; 1)
- response (description; 1)
- slate (description; 1)
- smart glass (description; 1)
- synthesizing (description; 1)
- transforming (description; 1)
- visual (description; 1)
Images
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
- H04N19/124—Quantisation
- H04N19/186—Coding characterised by the coding unit, the unit being a colour or a chrominance component
- H04N19/625—Transform coding using discrete cosine transform [DCT]
- H04N19/70—Coding characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/82—Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present disclosure relates to a video encoding/decoding method, a bitstream transmission method, and a recording medium storing the bitstream, and relates to CC-SAO applied to various color formats.
- An object of the present disclosure is to provide a video encoding/decoding method and apparatus having improved encoding/decoding efficiency.
- an object of the present disclosure is to provide an image encoding/decoding method applying CC-SAO to various color formats.
- an object of the present disclosure is to provide an image encoding/decoding method capable of efficiently deriving collocated samples.
- an object of the present disclosure is to provide a video encoding/decoding method capable of improving compression efficiency for CC-SAO.
- an object of the present disclosure is to provide a non-transitory computer readable recording medium storing a bitstream generated by a video encoding method according to the present disclosure.
- an object of the present disclosure is to provide a non-transitory computer-readable recording medium for storing a bitstream received and decoded by an image decoding apparatus according to the present disclosure and used for image restoration.
- an object of the present disclosure is to provide a method for transmitting a bitstream generated by a video encoding method according to the present disclosure.
- A video decoding method according to the present disclosure is performed by a video decoding apparatus and may include: determining whether cross-component sample adaptive offset (CC-SAO) is activated based on a value of a first syntax element; deriving, based on the value of the first syntax element indicating that CC-SAO is activated, collocated luma samples and collocated chroma samples corresponding to each other from reconstructed samples based on the chroma format of the reconstructed samples; and determining an offset to be applied to the reconstructed samples based on the collocated luma samples and the collocated chroma samples.
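As an illustrative sketch of the decoding steps above, the flow below gates the tool on a syntax-element value, derives a collocated luma sample per chroma sample according to the chroma format, and applies a looked-up offset. All names are hypothetical, and the band classifier is simplified (the disclosure's classifier also uses the collocated chroma samples).

```python
# Hedged sketch only: function and flag names are illustrative, not the
# disclosure's actual syntax; 8-bit samples are assumed.

def apply_cc_sao(rec_luma, rec_chroma, cc_sao_enabled_flag, offsets, chroma_format):
    """Offset reconstructed chroma samples based on collocated luma samples."""
    if not cc_sao_enabled_flag:            # first syntax element gates CC-SAO
        return dict(rec_chroma)
    # Subsampling factors per chroma format: 4:2:0 halves both dimensions,
    # 4:2:2 halves the width only, 4:4:4 subsamples neither.
    sx, sy = {"4:2:0": (2, 2), "4:2:2": (2, 1), "4:4:4": (1, 1)}[chroma_format]
    out = {}
    for (x, y), c in rec_chroma.items():
        luma = rec_luma[(x * sx, y * sy)]  # collocated luma sample
        band = luma * len(offsets) // 256  # simplified band classification
        out[(x, y)] = min(255, max(0, c + offsets[band]))
    return out
```

The same derivation of collocated positions applies on the encoder side, which additionally signals the first syntax element and the chosen offsets.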
- An image encoding method according to the present disclosure is performed by an image encoding apparatus and may include: deriving collocated luma samples and collocated chroma samples corresponding to each other from reconstructed samples based on the chroma format of the reconstructed samples; determining a cross-component sample adaptive offset (CC-SAO) offset to be applied to the reconstructed samples based on the collocated luma samples and the collocated chroma samples; and encoding a first syntax element indicating whether CC-SAO is activated.
- a computer readable recording medium may store a bitstream generated by an image encoding method or apparatus of the present disclosure.
- a transmission method may transmit a bitstream generated by an image encoding method or apparatus of the present disclosure.
- a video encoding/decoding method and apparatus having improved encoding/decoding efficiency may be provided.
- collocated samples can be efficiently derived.
- bit efficiency for performing CC-SAO can be improved.
- FIG. 1 is a diagram schematically illustrating a video coding system to which an embodiment according to the present disclosure may be applied.
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure may be applied.
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure may be applied.
- FIG. 4 is a diagram illustrating various examples of color formats.
- FIG. 5 is a diagram schematically illustrating a filtering unit in an image encoding device.
- FIG. 6 is a flowchart illustrating a video/video encoding method based on in-loop filtering according to an embodiment of the present disclosure.
- FIG. 7 is a diagram schematically illustrating a filtering unit in a video decoding apparatus.
- FIG. 8 is a flowchart illustrating a video/video decoding method based on in-loop filtering according to an embodiment of the present disclosure.
- FIG. 9 is an exemplary diagram for explaining pixel patterns along an edge direction of an edge offset.
- FIG. 10 is an exemplary diagram for explaining the division of a pixel intensity range of a band offset.
- FIG. 11 is an exemplary diagram for explaining a decoding process of CC-SAO according to an embodiment of the present disclosure.
- FIG. 12 is an exemplary diagram for explaining a positional relationship between luma samples and chroma samples in a 4:2:0 chroma format.
- FIG. 13 is a flowchart illustrating an image encoding method according to an embodiment of the present disclosure.
- FIG. 14 is a flowchart illustrating an image decoding method according to an embodiment of the present disclosure.
- FIG. 15 is an exemplary diagram for explaining a positional relationship between luma samples and chroma samples in a 4:4:4 chroma format.
- FIG. 16 is an exemplary diagram for explaining a positional relationship between luma samples and chroma samples in a 4:2:2 chroma format.
- FIG. 17 is a flowchart illustrating an image encoding/decoding method according to another embodiment of the present disclosure.
- FIG. 18 is a flowchart illustrating an image encoding/decoding method according to another embodiment of the present disclosure.
- FIG. 19 is a flowchart illustrating an image encoding/decoding method according to another embodiment of the present disclosure.
- FIG. 20 is a flowchart illustrating an image encoding method according to another embodiment of the present disclosure.
- FIG. 21 is a flowchart illustrating an image decoding method according to another embodiment of the present disclosure.
- FIG. 22 is a diagram exemplarily illustrating a content streaming system to which an embodiment according to the present disclosure may be applied.
- Terms such as "first" and "second" are used only to distinguish one element from another and do not limit the order or importance of elements unless otherwise specified. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
- components that are distinguished from each other are intended to clearly explain each characteristic, and do not necessarily mean that the components are separated. That is, a plurality of components may be integrated to form a single hardware or software unit, or a single component may be distributed to form a plurality of hardware or software units. Accordingly, even such integrated or distributed embodiments are included in the scope of the present disclosure, even if not mentioned separately.
- components described in various embodiments do not necessarily mean essential components, and some may be optional components. Accordingly, an embodiment comprising a subset of elements described in one embodiment is also included in the scope of the present disclosure. In addition, embodiments including other components in addition to the components described in various embodiments are also included in the scope of the present disclosure.
- the present disclosure relates to encoding and decoding of an image, and terms used in the present disclosure may have common meanings commonly used in the technical field to which the present disclosure belongs unless newly defined in the present disclosure.
- A “picture” generally means a unit representing one image in a specific time period.
- A slice/tile is a coding unit constituting a part of a picture.
- One picture may be composed of one or more slices/tiles.
- a slice/tile may include one or more coding tree units (CTUs).
- A “pixel” or “pel” may mean a minimum unit constituting one picture (or image).
- sample may be used as a term corresponding to a pixel.
- a sample may generally represent a pixel or a pixel value, may represent only a pixel/pixel value of a luma component, or only a pixel/pixel value of a chroma component.
- a “unit” may represent a basic unit of image processing.
- a unit may include at least one of a specific region of a picture and information related to the region. Unit may be used interchangeably with terms such as "sample array", “block” or “area” depending on the case.
- an MxN block may include samples (or a sample array) or a set (or array) of transform coefficients consisting of M columns and N rows.
- “current block” may mean one of “current coding block”, “current coding unit”, “encoding object block”, “decoding object block”, or “processing object block”.
- “current block” may mean “current prediction block” or “prediction target block”.
- In the case of transform/inverse transform or quantization/inverse quantization, a “current block” may mean a “current transform block” or a “transform target block”.
- In the case of filtering, a “current block” may mean a “filtering target block”.
- a “current block” may mean a block including both a luma component block and a chroma component block or a “luma block of the current block” unless explicitly described as a chroma block.
- the luma component block of the current block may be explicitly expressed by including an explicit description of the luma component block, such as “luma block” or “current luma block”.
- the chroma component block of the current block may be explicitly expressed by including an explicit description of the chroma component block, such as “chroma block” or “current chroma block”.
- “/” and “,” may be interpreted as “and/or”.
- “A/B” and “A, B” could be interpreted as “A and/or B”.
- “A/B/C” and “A, B, C” may mean “at least one of A, B and/or C”.
- FIG. 1 is a diagram schematically illustrating a video coding system to which an embodiment according to the present disclosure may be applied.
- a video coding system may include an encoding device 10 and a decoding device 20.
- the encoding device 10 may transmit encoded video and/or image information or data to the decoding device 20 through a digital storage medium or a network in a file or streaming form.
- the encoding device 10 may include a video source generator 11, an encoder 12, and a transmitter 13.
- the decoding device 20 may include a receiving unit 21, a decoding unit 22, and a rendering unit 23.
- the encoder 12 may be referred to as a video/image encoder, and the decoder 22 may be referred to as a video/image decoder.
- the transmitter 13 may be included in the encoder 12.
- the receiver 21 may be included in the decoder 22.
- the rendering unit 23 may include a display unit, and the display unit may be configured as a separate device or an external component.
- the video source generator 11 may acquire video/images through a process of capturing, synthesizing, or generating video/images.
- the video source generating unit 11 may include a video/image capture device and/or a video/image generating device.
- a video/image capture device may include, for example, one or more cameras, a video/image archive containing previously captured video/images, and the like.
- Video/image generating devices may include, for example, computers, tablets and smart phones, etc., and may (electronically) generate video/images.
- a virtual video/image may be generated through a computer or the like, and in this case, a video/image capture process may be replaced by a process of generating related data.
- the encoder 12 may encode the input video/video.
- the encoder 12 may perform a series of procedures such as prediction, transformation, and quantization for compression and encoding efficiency.
- the encoder 12 may output encoded data (encoded video/image information) in the form of a bitstream.
- the transmitter 13 may transmit the encoded video/image information or data output in the form of a bitstream to the receiver 21 of the decoding device 20 through a digital storage medium or network in the form of a file or streaming.
- Digital storage media may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- the transmission unit 13 may include an element for generating a media file through a predetermined file format, and may include an element for transmission through a broadcasting/communication network.
- the receiving unit 21 may extract/receive the bitstream from the storage medium or network and transfer it to the decoding unit 22.
- the decoder 22 may decode video/images by performing a series of procedures such as inverse quantization, inverse transform, and prediction corresponding to operations of the encoder 12.
- the rendering unit 23 may render the decoded video/image.
- the rendered video/image may be displayed through the display unit.
- FIG. 2 is a diagram schematically illustrating an image encoding apparatus to which an embodiment according to the present disclosure may be applied.
- the image encoding apparatus 100 may include an image division unit 110, a subtraction unit 115, a transform unit 120, a quantization unit 130, an inverse quantization unit 140, an inverse transform unit 150, an adder 155, a filtering unit 160, a memory 170, an inter prediction unit 180, an intra prediction unit 185, and an entropy encoding unit 190.
- the inter prediction unit 180 and the intra prediction unit 185 may collectively be referred to as a “prediction unit”.
- the transform unit 120, the quantization unit 130, the inverse quantization unit 140, and the inverse transform unit 150 may be included in a residual processing unit.
- the residual processing unit may further include the subtraction unit 115.
- All or at least some of the plurality of components constituting the image encoding apparatus 100 may be implemented as one hardware component (e.g., an encoder or a processor) according to embodiments.
- the memory 170 may include a decoded picture buffer (DPB) and may be implemented by a digital storage medium.
- the image divider 110 may divide an input image (or picture or frame) input to the image encoding apparatus 100 into one or more processing units.
- the processing unit may be called a coding unit (CU).
- the coding unit may be obtained by recursively dividing a coding tree unit (CTU) or a largest coding unit (LCU) according to a quad-tree/binary-tree/ternary-tree (QT/BT/TT) structure.
- one coding unit may be divided into a plurality of deeper depth coding units based on a quad tree structure, a binary tree structure, and/or a ternary tree structure.
- a quad tree structure may be applied first and a binary tree structure and/or ternary tree structure may be applied later.
- a coding procedure according to the present disclosure may be performed based on a final coding unit that is not further divided.
- the largest coding unit may be directly used as the final coding unit, or a coding unit of a lower depth obtained by dividing the largest coding unit may be used as the final coding unit.
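The recursive splitting described above can be sketched as a toy recursion. This is purely illustrative (the split decisions, depths, and sizes are hypothetical): the quad-tree is applied first, then a binary split at lower depths, and a block that is not further divided becomes a final coding unit.

```python
# Toy partition sketch: quad-split while within max_qt_depth, then binary-split
# the width until min_size. Returns the leaf (w, h) coding-unit sizes.
# Parameters and split rules are illustrative, not from the disclosure.

def partition(w, h, depth=0, max_qt_depth=2, min_size=8):
    if depth < max_qt_depth and w > min_size and h > min_size:
        out = []
        for _ in range(4):                 # quad-tree structure applied first
            out += partition(w // 2, h // 2, depth + 1, max_qt_depth, min_size)
        return out
    if w > min_size:                       # then a binary split of the width
        half = w // 2
        return (partition(half, h, depth + 1, max_qt_depth, min_size)
                + partition(half, h, depth + 1, max_qt_depth, min_size))
    return [(w, h)]                        # final coding unit: not further divided
```

For example, a 32x32 CTU with the defaults above quad-splits twice into sixteen 8x8 final coding units.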
- the coding procedure may include procedures such as prediction, transformation, and/or reconstruction, which will be described later.
- the processing unit of the coding procedure may be a prediction unit (PU) or a transform unit (TU).
- the prediction unit and the transform unit may be divided or partitioned from the final coding unit, respectively.
- the prediction unit may be a unit of sample prediction
- the transform unit may be a unit for deriving transform coefficients and/or a unit for deriving a residual signal from transform coefficients.
- a prediction unit performs prediction on a processing target block (current block), and generates a predicted block including prediction samples for the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied in units of current blocks or CUs.
- the prediction unit may generate various types of information related to prediction of the current block and transmit them to the entropy encoding unit 190 .
- Prediction-related information may be encoded in the entropy encoding unit 190 and output in the form of a bit stream.
- the intra predictor 185 may predict a current block by referring to samples in the current picture.
- the referenced samples may be located in the neighborhood of the current block or may be located away from it, according to the intra prediction mode and/or intra prediction technique.
- Intra prediction modes may include a plurality of non-directional modes and a plurality of directional modes.
- the non-directional mode may include, for example, a DC mode and a planar mode.
- the directional modes may include, for example, 33 directional prediction modes or 65 directional prediction modes according to the degree of detail of the prediction direction. However, this is an example, and more or fewer directional prediction modes may be used depending on the setting.
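As a toy illustration of the non-directional DC mode mentioned above (not the disclosure's exact derivation, and the reference-sample selection here is simplified): every sample of the predicted block is set to the rounded average of the reconstructed neighboring samples.

```python
# Minimal DC intra prediction sketch: fill a w x h predicted block with the
# rounded mean of the top and left reference samples. Illustrative only.

def dc_predict(top_refs, left_refs, w, h):
    refs = top_refs + left_refs
    dc = (sum(refs) + len(refs) // 2) // len(refs)   # rounded average
    return [[dc] * w for _ in range(h)]              # flat predicted block
```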
- the intra prediction unit 185 may determine a prediction mode applied to the current block by using a prediction mode applied to neighboring blocks.
- the inter prediction unit 180 may derive a predicted block for a current block based on a reference block (reference sample array) specified by a motion vector on a reference picture.
- motion information may be predicted in units of blocks, subblocks, or samples based on correlation of motion information between neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- a neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
- a reference picture including the reference block and a reference picture including the temporal neighboring block may be the same or different.
- the temporal neighboring block may be called a collocated reference block, a collocated CU (colCU), and the like.
- a reference picture including the temporal neighboring block may be referred to as a collocated picture (colPic).
- the inter prediction unit 180 may construct a motion information candidate list based on neighboring blocks and generate information indicating which candidate is used to derive the motion vector and/or the reference picture index of the current block. Inter prediction may be performed based on various prediction modes. For example, in the case of skip mode and merge mode, the inter prediction unit 180 may use motion information of neighboring blocks as motion information of the current block.
- In the case of skip mode, unlike merge mode, the residual signal may not be transmitted.
- In the case of motion vector prediction (MVP) mode, motion vectors of neighboring blocks are used as motion vector predictors, and the motion vector of the current block may be signaled by encoding a motion vector difference and an indicator for the motion vector predictor.
- the motion vector difference may refer to a difference between a motion vector of a current block and a motion vector predictor.
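The MVP-mode relationship above reduces to a one-line reconstruction on the decoder side: the signaled indicator selects a predictor from the candidate list, and the motion vector difference is added to it. Names here are illustrative.

```python
# MVP-mode motion vector reconstruction sketch:
# MV of the current block = selected motion vector predictor + signaled MVD.

def reconstruct_mv(candidates, mvp_idx, mvd):
    mvp = candidates[mvp_idx]              # predictor chosen by the indicator
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])
```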
- the prediction unit may generate a prediction signal based on various prediction methods and/or prediction techniques described below.
- the prediction unit may apply intra prediction or inter prediction to predict the current block, and may also apply intra prediction and inter prediction at the same time.
- a prediction method that simultaneously applies intra prediction and inter prediction for prediction of a current block may be called combined inter and intra prediction (CIIP).
- the prediction unit may perform intra block copy (IBC) to predict the current block.
- Intra block copy may be used for video/image coding of content such as games, for example, screen content coding (SCC).
- IBC is a method of predicting the current block using a reconstructed reference block in the current picture located a predetermined distance away from the current block.
- the position of the reference block in the current picture can be encoded as a vector (block vector) corresponding to the predetermined distance.
- IBC basically performs prediction within the current picture, but may be performed similarly to inter prediction in that a reference block is derived within the current picture. That is, IBC may use at least one of the inter prediction techniques described in this disclosure.
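The IBC operation described above amounts to copying an already-reconstructed region of the current picture at the position indicated by the block vector. A minimal sketch, with hypothetical names and no range/validity checks:

```python
# Intra block copy sketch: the predicted block is a copy of a reconstructed
# w x h region of the current picture displaced by the block vector.

def ibc_predict(picture, x, y, w, h, block_vector):
    bvx, bvy = block_vector                # encoded as the "block vector"
    return [[picture[y + bvy + j][x + bvx + i] for i in range(w)]
            for j in range(h)]
```

A real codec would additionally constrain the block vector so the reference region is already reconstructed and inside the allowed search area.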
- the prediction signal generated through the prediction unit may be used to generate a reconstruction signal or a residual signal.
- the subtraction unit 115 may generate a residual signal (residual block, residual sample array) by subtracting the prediction signal (predicted block, prediction sample array) output from the prediction unit from the input image signal (original block, original sample array).
- the generated residual signal may be transmitted to the transform unit 120.
- the transform unit 120 may generate transform coefficients by applying a transform technique to the residual signal.
- the transform technique may use at least one of a discrete cosine transform (DCT), a discrete sine transform (DST), a Karhunen-Loève transform (KLT), a graph-based transform (GBT), or a conditionally non-linear transform (CNT).
- GBT means a transform obtained from a graph in which relation information between pixels is expressed.
- CNT means a transformation obtained based on generating a prediction signal using all previously reconstructed pixels.
- the transformation process may be applied to square pixel blocks having the same size or may be applied to non-square blocks of variable size.
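As a small educational example of the DCT named above (not the disclosure's integer implementation, which uses fixed-point approximations), a one-dimensional N-point DCT-II applied to a row of residual samples looks like:

```python
# Textbook (floating-point) DCT-II on one row of residual samples.
# Real codecs use integer approximations; this is for illustration only.
import math

def dct2(row):
    n = len(row)
    out = []
    for k in range(n):
        s = sum(row[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)              # k = 0 is the DC coefficient
    return out
```

A flat (constant) residual row concentrates all its energy in the DC coefficient, which is why the transform compacts typical residual signals well.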
- the quantization unit 130 may quantize the transform coefficients and transmit them to the entropy encoding unit 190 .
- the entropy encoding unit 190 may encode the quantized signal (information on quantized transform coefficients) and output the encoded signal as a bitstream.
- Information about the quantized transform coefficients may be referred to as residual information.
- the quantization unit 130 may rearrange block-type quantized transform coefficients into a one-dimensional vector form based on a coefficient scan order, and may generate information about the quantized transform coefficients based on the quantized transform coefficients in the one-dimensional vector form.
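The rearrangement described above can be sketched as follows. This is an illustrative up-right diagonal scan in Python; the exact scan order a codec uses is defined per block size and coding mode, so this particular order is an assumption for illustration:

```python
def diagonal_scan_order(size):
    """Generate (row, col) positions of an up-right diagonal scan over a
    size x size block (one common coefficient scan order; actual codecs
    define the exact order per block size)."""
    order = []
    for d in range(2 * size - 1):          # walk the anti-diagonals
        for col in range(size):
            row = d - col
            if 0 <= row < size:
                order.append((row, col))
    return order

def block_to_vector(block, order):
    """Rearrange a 2-D coefficient block into a 1-D vector per the scan."""
    return [block[r][c] for r, c in order]
```

The inverse quantization unit can undo this by writing the 1-D vector back using the same scan order.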
- the entropy encoding unit 190 may perform various encoding methods such as exponential Golomb, context-adaptive variable length coding (CAVLC), and context-adaptive binary arithmetic coding (CABAC).
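As an illustration of the simplest of the named entropy coding tools, a minimal 0th-order Exp-Golomb encoder/decoder for non-negative integers can be sketched; the string-of-bits representation is for illustration only:

```python
def exp_golomb_encode(n):
    """Encode a non-negative integer as a 0th-order Exp-Golomb codeword:
    a unary prefix of zeros followed by the binary form of n + 1."""
    value = n + 1
    prefix_len = value.bit_length() - 1
    return "0" * prefix_len + format(value, "b")

def exp_golomb_decode(bits):
    """Decode a 0th-order Exp-Golomb codeword back to the integer."""
    zeros = len(bits) - len(bits.lstrip("0"))      # count the unary prefix
    return int(bits[zeros:2 * zeros + 1], 2) - 1
```

For example, 0 maps to "1" and 3 maps to "00100"; small values get short codewords, which suits syntax elements whose distribution is concentrated near zero.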
- the entropy encoding unit 190 may encode together or separately information necessary for video/image reconstruction (eg, values of syntax elements, etc.) in addition to quantized transform coefficients.
- Encoded information (eg, encoded video/image information) may be transmitted or stored in units of network abstraction layer (NAL) units in the form of a bitstream.
- the video/video information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the signaling information, transmitted information, and/or syntax elements mentioned in this disclosure may be encoded through the above-described encoding procedure and included in the bitstream.
- the bitstream may be transmitted through a network or stored in a digital storage medium.
- the network may include a broadcasting network and/or a communication network
- the digital storage medium may include various storage media such as USB, SD, CD, DVD, Blu-ray, HDD, and SSD.
- a transmission unit (not shown) that transmits the signal output from the entropy encoding unit 190 and/or a storage unit (not shown) that stores the signal output from the entropy encoding unit 190 may be provided as internal/external elements of the image encoding apparatus 100, or the transmission unit may be provided as a component of the entropy encoding unit 190.
- the quantized transform coefficients output from the quantization unit 130 may be used to generate a residual signal (residual block, residual samples) through inverse quantization and inverse transform.
- the adder 155 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the reconstructed residual signal to the prediction signal output from the inter prediction unit 180 or the intra prediction unit 185.
- when there is no residual for the block to be processed, such as when the skip mode is applied, a predicted block may be used as a reconstruction block.
- the adder 155 may be called a restoration unit or a restoration block generation unit.
- the generated reconstruction signal may be used for intra prediction of the next processing target block in the current picture, or may be used for inter prediction of the next picture after filtering as described later.
- the filtering unit 160 may improve subjective/objective picture quality by applying filtering to the reconstructed signal.
- the filtering unit 160 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 170, specifically in the DPB of the memory 170.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- the filtering unit 160 may generate various types of filtering-related information and transmit them to the entropy encoding unit 190, as will be described later in the description of each filtering method.
- Information on filtering may be encoded in the entropy encoding unit 190 and output in the form of a bit stream.
- the modified reconstructed picture transmitted to the memory 170 may be used as a reference picture in the inter prediction unit 180 .
- In this way, the image encoding apparatus 100 can avoid a prediction mismatch between the image encoding apparatus 100 and the video decoding apparatus, and can also improve encoding efficiency.
- the DPB in the memory 170 may store a modified reconstructed picture to be used as a reference picture in the inter prediction unit 180.
- the memory 170 may store motion information of a block in a current picture from which motion information is derived (or encoded) and/or motion information of blocks in a previously reconstructed picture.
- the stored motion information may be transmitted to the inter prediction unit 180 to be used as motion information of a spatial neighboring block or motion information of a temporal neighboring block.
- the memory 170 may store reconstructed samples of reconstructed blocks in the current picture and transfer them to the intra predictor 185 .
- FIG. 3 is a diagram schematically illustrating an image decoding apparatus to which an embodiment according to the present disclosure may be applied.
- the image decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an adder 235, a filtering unit 240, a memory 250, an inter prediction unit 260, and an intra prediction unit 265.
- the inter prediction unit 260 and the intra prediction unit 265 may be collectively referred to as a "prediction unit".
- the inverse quantization unit 220 and the inverse transform unit 230 may be included in the residual processing unit.
- All or at least some of the plurality of components constituting the image decoding apparatus 200 may be implemented as one hardware component (eg, a decoder or a processor) according to embodiments.
- the memory 250 may include a DPB and may be implemented by a digital storage medium.
- the video decoding apparatus 200 may restore the video by performing a process corresponding to the process performed in the video encoding apparatus 100 of FIG. 2 .
- the video decoding apparatus 200 may perform decoding using a processing unit applied in the video encoding apparatus.
- a processing unit of decoding may thus be a coding unit, for example.
- a coding unit may be obtained by dividing a coding tree unit or a largest coding unit.
- the restored video signal decoded and output through the video decoding apparatus 200 may be reproduced through a reproducing apparatus (not shown).
- the image decoding device 200 may receive a signal output from the image encoding device of FIG. 2 in the form of a bitstream.
- the received signal may be decoded through the entropy decoding unit 210 .
- the entropy decoding unit 210 may parse the bitstream to derive information (eg, video/image information) necessary for image restoration (or picture restoration).
- the video/video information may further include information on various parameter sets such as an adaptation parameter set (APS), a picture parameter set (PPS), a sequence parameter set (SPS), or a video parameter set (VPS).
- the video/image information may further include general constraint information.
- the video decoding apparatus may additionally use the information about the parameter set and/or the general restriction information to decode video.
- the signaling information, received information, and/or syntax elements mentioned in this disclosure may be obtained from the bitstream by being decoded through the decoding procedure.
- the entropy decoding unit 210 may decode information in the bitstream based on a coding method such as exponential Golomb coding, CAVLC, or CABAC, and output values of syntax elements required for image reconstruction and quantized values of transform coefficients related to the residual.
- the CABAC entropy decoding method may receive bins corresponding to each syntax element in the bitstream, determine a context model using decoding target syntax element information, decoding information of neighboring blocks and the decoding target block, or information of symbols/bins decoded in a previous step, predict the occurrence probability of a bin according to the determined context model, and perform arithmetic decoding of the bin to generate a symbol corresponding to the value of each syntax element.
- the CABAC entropy decoding method may update the context model by using information of the decoded symbol/bin for the context model of the next symbol/bin after determining the context model.
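The adaptation principle behind the context-model update can be illustrated with a simplified, hypothetical exponential-decay probability estimator. The actual CABAC update is defined by standardized state tables and shift operations, so the function below (including the `alpha` decay factor) is only a sketch of the idea, not the standard's procedure:

```python
def update_probability(p_lps, bin_matches_lps, alpha=0.95):
    """Simplified sketch of context adaptation: the estimated probability
    of the less-probable symbol (LPS) moves toward 1 when the LPS is
    observed and decays toward 0 otherwise. NOT the standard's exact
    table/shift-based update."""
    if bin_matches_lps:
        return alpha * p_lps + (1 - alpha)   # observed: pull estimate up
    return alpha * p_lps                     # not observed: decay down
```

Repeated application after each decoded bin is what lets the context model track the local symbol statistics.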
- among the information decoded by the entropy decoding unit 210, prediction-related information may be provided to the prediction unit (the inter prediction unit 260 and the intra prediction unit 265), and residual values on which entropy decoding has been performed by the entropy decoding unit 210, that is, quantized transform coefficients and related parameter information, may be input to the inverse quantization unit 220.
- information on filtering may be provided to the filtering unit 240.
- a receiving unit (not shown) for receiving a signal output from the image encoding device may be additionally provided as an internal/external element of the image decoding device 200, or the receiving unit may be provided as a component of the entropy decoding unit 210.
- the video decoding apparatus may include an information decoder (video/video/picture information decoder) and/or a sample decoder (video/video/picture sample decoder).
- the information decoder may include the entropy decoding unit 210, and the sample decoder may include at least one of the inverse quantization unit 220, the inverse transform unit 230, the adder 235, the filtering unit 240, the memory 250, the inter prediction unit 260, and the intra prediction unit 265.
- the inverse quantization unit 220 may inversely quantize the quantized transform coefficients and output the transform coefficients.
- the inverse quantization unit 220 may rearrange the quantized transform coefficients in the form of a 2D block. In this case, the rearrangement may be performed based on a coefficient scanning order performed by the video encoding device.
- the inverse quantization unit 220 may perform inverse quantization on quantized transform coefficients using a quantization parameter (eg, quantization step size information) and obtain transform coefficients.
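The inverse quantization step can be sketched with the common approximation that the quantization step size doubles for every increase of 6 in the quantization parameter (Qstep ≈ 2^((QP−4)/6)). Real codecs implement this with integer scaling tables and shifts, so the floating-point form below is an illustrative assumption:

```python
def dequantize(level, qp):
    """Hedged sketch of scalar inverse quantization: reconstruct a
    transform coefficient as the quantized level times a step size that
    roughly doubles for every +6 in QP (Qstep ~ 2**((QP - 4) / 6))."""
    qstep = 2 ** ((qp - 4) / 6)
    return level * qstep
```

For instance, the same quantized level reconstructs to twice the magnitude at QP 10 as at QP 4, which is why QP directly trades rate against distortion.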
- the inverse transform unit 230 may obtain a residual signal (residual block, residual sample array) by inverse transforming transform coefficients.
- the prediction unit may perform prediction on the current block and generate a predicted block including predicted samples of the current block.
- the prediction unit may determine whether intra prediction or inter prediction is applied to the current block based on the information about the prediction output from the entropy decoding unit 210, and determine a specific intra/inter prediction mode (prediction technique).
- the fact that the prediction unit can generate a prediction signal based on various prediction methods (techniques) described later is the same as mentioned in the description of the prediction unit of the image encoding apparatus 100.
- the intra predictor 265 may predict the current block by referring to samples in the current picture.
- the description of the intra predictor 185 may be equally applied to the intra predictor 265 .
- the inter prediction unit 260 may derive a predicted block for a current block based on a reference block (reference sample array) specified by a motion vector on a reference picture.
- motion information may be predicted in units of blocks, subblocks, or samples based on correlation of motion information between neighboring blocks and the current block.
- the motion information may include a motion vector and a reference picture index.
- the motion information may further include inter prediction direction (L0 prediction, L1 prediction, Bi prediction, etc.) information.
- a neighboring block may include a spatial neighboring block present in the current picture and a temporal neighboring block present in the reference picture.
- the inter predictor 260 may configure a motion information candidate list based on neighboring blocks and derive a motion vector and/or reference picture index of the current block based on the received candidate selection information. Inter prediction may be performed based on various prediction modes (methods), and the prediction-related information may include information indicating an inter prediction mode (method) for the current block.
- the adder 235 may generate a reconstructed signal (reconstructed picture, reconstructed block, reconstructed sample array) by adding the obtained residual signal to the prediction signal (predicted block, prediction sample array) output from the prediction unit (including the inter prediction unit 260 and/or the intra prediction unit 265). When there is no residual for the block to be processed, such as when the skip mode is applied, a predicted block may be used as a reconstruction block. The description of the adder 155 may be equally applied to the adder 235.
- the adder 235 may be called a restoration unit or a restoration block generation unit.
- the generated reconstruction signal may be used for intra prediction of the next processing target block in the current picture, or may be used for inter prediction of the next picture after filtering as described below.
- the filtering unit 240 may improve subjective/objective picture quality by applying filtering to the reconstructed signal.
- the filtering unit 240 may generate a modified reconstructed picture by applying various filtering methods to the reconstructed picture, and store the modified reconstructed picture in the memory 250, specifically the DPB of the memory 250.
- the various filtering methods may include, for example, deblocking filtering, sample adaptive offset, adaptive loop filter, bilateral filter, and the like.
- a (modified) reconstructed picture stored in the DPB of the memory 250 may be used as a reference picture in the inter prediction unit 260 .
- the memory 250 may store motion information of a block in a current picture from which motion information is derived (or decoded) and/or motion information of blocks in a previously reconstructed picture.
- the stored motion information may be transmitted to the inter prediction unit 260 to be used as motion information of spatial neighboring blocks or motion information of temporal neighboring blocks.
- the memory 250 may store reconstructed samples of reconstructed blocks in the current picture and transfer them to the intra prediction unit 265 .
- the embodiments described for the filtering unit 160, the inter prediction unit 180, and the intra prediction unit 185 of the video encoding apparatus 100 may be applied identically or correspondingly to the filtering unit 240, the inter prediction unit 260, and the intra prediction unit 265 of the video decoding apparatus 200.
- FIG. 4 is a diagram illustrating various examples of color formats. Specifically, (a) of FIG. 4 shows the 4:2:0 chroma format, (b) of FIG. 4 shows the 4:2:2 chroma format, and (c) of FIG. 4 shows the 4:4:4 chroma format.
- a source or coded picture/video may include a luma component array, and may further include two chroma component (cb, cr) arrays in some cases. That is, one pixel of a picture/video may include a luma sample and a chroma sample (cb, cr).
- the color format may indicate a configuration format of a luma component and chroma components (cb, cr), and may be referred to as a chroma format.
- the color format (or chroma format) may be predetermined or adaptively signaled.
- the chroma format may be signaled based on at least one of chroma_format_idc and separate_colour_plane_flag as shown in Table 1 below.
- in the 4:2:0 format, each of the two chroma arrays may have a height equal to half the height of the luma array and a width equal to half the width of the luma array.
- in the 4:2:2 format, each of the two chroma arrays may have a height equal to the height of the luma array and a width equal to half the width of the luma array.
- in the 4:4:4 format, the height and width of the two chroma arrays can be determined based on the value of separate_colour_plane_flag; when the value of separate_colour_plane_flag is 0, each of the two chroma arrays may have a height equal to the height of the luma array and a width equal to the width of the luma array.
- SubWidthC and SubHeightC may be ratios between luma samples and chroma samples.
- for example, when the chroma format is 4:4:4 and the width of the luma sample block is 16, the width of the corresponding chroma sample block may be 16/SubWidthC.
- syntax and bitstreams related to chroma samples can be parsed only when the value of chromaArrayType is not equal to 0.
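The luma-to-chroma size derivation above can be sketched as follows, assuming the usual mapping of chroma_format_idc to (SubWidthC, SubHeightC) for monochrome, 4:2:0, 4:2:2, and 4:4:4 (the mapping referenced as Table 1 in the text):

```python
# Assumed mapping of chroma_format_idc -> (SubWidthC, SubHeightC):
# 0 = monochrome, 1 = 4:2:0, 2 = 4:2:2, 3 = 4:4:4
SUBSAMPLING = {0: (1, 1), 1: (2, 2), 2: (2, 1), 3: (1, 1)}

def chroma_array_size(luma_w, luma_h, chroma_format_idc):
    """Derive the width/height of each chroma array from the luma size."""
    if chroma_format_idc == 0:
        return (0, 0)                      # monochrome: no chroma arrays
    sw, sh = SUBSAMPLING[chroma_format_idc]
    return (luma_w // sw, luma_h // sh)
```

So a 16x16 luma block pairs with 8x8 chroma blocks in 4:2:0, 8x16 in 4:2:2, and 16x16 in 4:4:4.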
- FIG. 5 shows the filtering units 160 and 500 in the video encoding apparatus 100
- FIG. 6 shows an in-loop filtering-based video/video encoding method
- FIG. 7 shows the filtering units 240 and 700 in the video decoding apparatus 200
- FIG. 8 shows an in-loop filtering-based video/video decoding method.
- Data encoded by the filtering unit 500 of FIG. 5 and the encoding method of FIG. 6 may be stored in the form of a bitstream.
- pictures constituting the video/video may be encoded/decoded according to a series of decoding orders.
- a picture order corresponding to the output order of decoded pictures may be set differently from the decoding order, and based on this, not only forward prediction but also backward prediction may be performed during inter prediction.
- a picture decoding procedure may include a picture reconstruction procedure and an in-loop filtering procedure for a reconstructed picture.
- a modified reconstruction picture may be generated through an in-loop filtering procedure, and the modified reconstruction picture may be output as a decoded picture.
- the output picture may be stored in the decoded picture buffer or the memory 250 of the video decoding apparatus 200 and used as a reference picture in an inter prediction procedure when decoding a picture thereafter.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, an adaptive loop filter (ALF) procedure, and/or a bi-lateral filter procedure.
- one or some of the deblocking filtering procedure, the sample adaptive offset (SAO) procedure, the adaptive loop filter (ALF) procedure, and the bi-lateral filter procedure may be sequentially applied, or all of them may be sequentially applied.
- for example, the SAO procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture.
- as another example, the ALF procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture. This may also be performed in the encoding apparatus as well.
- the picture encoding procedure may include not only a procedure of encoding information for picture reconstruction (eg, partitioning information, prediction information, residual information, etc.) and outputting it in the form of a bitstream, but also a procedure of generating a reconstructed picture for the current picture and applying in-loop filtering to it.
- a modified reconstruction picture may be generated through an in-loop filtering procedure, and may be stored in the decoded picture buffer or the memory 170 .
- the stored picture may be used as a reference picture in an inter prediction procedure when encoding a later picture, similar to the case in the video decoding apparatus 200 .
- (in-loop) filtering-related information may be encoded by the entropy encoding unit 190 of the video encoding apparatus 100 and output in the form of a bitstream, and the video decoding apparatus 200 may perform an in-loop filtering procedure based on the filtering-related information, using the same method as the encoding apparatus.
- Through this, the video encoding apparatus 100 and the video decoding apparatus 200 can derive the same prediction result, increase picture coding reliability, and reduce the amount of data to be transmitted for picture coding.
- the filtering unit 500 may include a deblocking filtering processing unit 505 , an SAO processing unit 510 and/or an ALF processing unit 515 .
- An image/video encoding method based on in-loop filtering performed by the image encoding apparatus 100 and the filtering unit 500 may be performed as follows.
- the image encoding apparatus 100 may generate a reconstructed picture for the current picture (S605).
- the video encoding apparatus 100 may generate a reconstructed picture through procedures such as partitioning of an input original picture, intra/inter prediction, and residual processing.
- the image encoding apparatus 100 may generate prediction samples for a current block through intra or inter prediction, generate residual samples based on the prediction samples, and derive (modified) residual samples by applying inverse quantization/inverse transform processing to the transformed/quantized residual samples.
- the reason for performing inverse quantization/inverse transformation after transformation/quantization is to derive the same residual samples as the residual samples derived in the image decoding apparatus 200, as described above.
- the image encoding apparatus 100 may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the (modified) residual samples. Also, the image encoding apparatus 100 may generate the reconstructed picture based on the reconstructed block.
- the image encoding apparatus 100 may perform an in-loop filtering procedure on the reconstructed picture (S610).
- a modified reconstructed picture may be generated through the in-loop filtering procedure.
- the modified reconstructed picture may be stored in the decoded picture buffer or memory 170 as a decoded picture, and may be used as a reference picture in an inter prediction procedure when encoding a picture thereafter.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, and/or an adaptive loop filter (ALF) procedure.
- S610 may be performed by the filtering unit 500 of the video encoding device 100.
- the deblocking filtering process may be performed by the deblocking filtering processor 505, the SAO process by the SAO processor 510, and the ALF process by the ALF processor 515.
- Some of the various filtering procedures may be omitted in consideration of image characteristics, complexity, and efficiency, and in this case, related components of FIG. 5 may also be omitted.
- the image encoding apparatus 100 may encode image information including information for picture reconstruction and information related to (in-loop) filtering, and output the encoded image information in a bitstream form (S615).
- the output bitstream may be transmitted to the image decoding apparatus 200 through a storage medium or network.
- S615 may be performed by the entropy encoding unit 190 of the video encoding apparatus 100.
- the information for picture reconstruction may include partitioning information described above/below, prediction information, residual information, and the like.
- the filtering-related information may include, for example, flag information indicating whether all in-loop filtering is applied, flag information indicating whether each filtering procedure is applied, SAO type information, SAO offset value information, SAO band position information, information on the ALF filtering shape, and/or information on the ALF filtering coefficients. Meanwhile, as described above, when some filtering methods are omitted, information (parameters) related to the omitted filtering may naturally be omitted.
- the filtering unit 700 may include a deblocking filtering processing unit 705 , an SAO processing unit 710 and/or an ALF processing unit 715 .
- An image/video decoding method based on in-loop filtering performed by the image decoding apparatus 200 and the filtering unit 700 may be performed as follows.
- the video decoding apparatus 200 may perform an operation corresponding to the operation performed by the video encoding apparatus 100 .
- the video decoding apparatus 200 may receive encoded video information in the form of a bitstream.
- the image decoding apparatus 200 may obtain image information including information for picture reconstruction and information related to (in-loop) filtering from the received bitstream (S805). S805 may be performed by the entropy decoding unit 210 of the image decoding apparatus 200.
- the information for picture reconstruction may include partitioning information described above/below, prediction information, residual information, and the like.
- the filtering-related information may include, for example, flag information indicating whether all in-loop filtering is applied, flag information indicating whether each filtering procedure is applied, SAO type information, SAO offset value information, SAO band position information, ALF filtering shape information, ALF filtering coefficient information, bilateral filter shape information, and/or bilateral filter weight information. Meanwhile, as described above, when some filtering methods are omitted, information (parameters) related to the omitted filtering may naturally be omitted.
- the video decoding apparatus 200 may generate a reconstructed picture for a current picture based on the picture reconstruction information (S810). As described above, the video decoding apparatus 200 may generate a reconstructed picture through procedures such as intra/inter prediction and residual processing for the current picture. Specifically, the video decoding apparatus 200 may generate prediction samples for a current block through intra or inter prediction based on the prediction information included in the picture reconstruction information, and may derive residual samples for the current block based on the residual information included in the picture reconstruction information (based on inverse quantization/inverse transformation). The video decoding apparatus 200 may generate a reconstructed block including reconstructed samples for the current block based on the prediction samples and the residual samples, and may also generate the reconstructed picture based on the reconstructed block.
- the video decoding apparatus 200 may perform an in-loop filtering procedure on the reconstructed picture (S815).
- a modified reconstructed picture may be generated through the in-loop filtering procedure.
- the modified reconstructed picture may be output and/or stored in the decoded picture buffer or the memory 250 as a decoded picture, and may be used as a reference picture in an inter prediction procedure when decoding a subsequent picture.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, and/or an adaptive loop filter (ALF) procedure.
- S815 may be performed by the filtering unit 700 of the video decoding apparatus 200.
- the deblocking filtering process may be performed by the deblocking filtering processor 705, the SAO process by the SAO processor 710, and the ALF process by the ALF processor 715.
- Some of the various filtering procedures may be omitted in consideration of image characteristics, complexity, and efficiency, and in this case, related components of FIG. 7 may also be omitted.
- the image encoding apparatus 100 and the image decoding apparatus 200 may perform a picture restoration procedure.
- a reconstructed block may be generated based on intra prediction/inter prediction in units of blocks, and a reconstructed picture including the reconstructed blocks may be generated.
- when the current picture/slice is an I picture/slice, blocks included in the current picture/slice can be reconstructed based only on intra prediction.
- when the current picture/slice is a P or B picture/slice, blocks included in the current picture/slice can be reconstructed based on intra prediction or inter prediction. In this case, intra prediction may be applied to some blocks in the current picture/slice, and inter prediction may be applied to the remaining blocks.
- An in-loop filtering procedure may be performed on the reconstructed picture generated through the above-described procedures.
- a modified reconstructed picture may be generated through an in-loop filtering procedure, and the video decoding apparatus 200 may output the modified reconstructed picture as a decoded picture.
- the video encoding apparatus 100/video decoding apparatus 200 may store the output picture in the decoded picture buffer or the memory 170 or 250 and use it as a reference picture in an inter prediction procedure when encoding/decoding a later picture.
- the in-loop filtering procedure may include a deblocking filtering procedure, a sample adaptive offset (SAO) procedure, and/or an adaptive loop filter (ALF) procedure.
- one or some of the deblocking filtering procedure, the sample adaptive offset (SAO) procedure, the adaptive loop filter (ALF) procedure, and the bi-lateral filter procedure may be sequentially applied, or all of them may be sequentially applied.
- for example, the SAO procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture.
- as another example, the ALF procedure may be performed after the deblocking filtering procedure is applied to the reconstructed picture. This may also be performed in the image encoding device 100 as well.
- Deblocking filtering is a filtering technique that removes distortion at the boundary between blocks in a reconstructed picture.
- the deblocking filtering procedure may derive a target boundary from a reconstructed picture, determine a boundary strength (bS) of the target boundary, and perform deblocking filtering on the target boundary based on the bS.
- the bS may be determined based on prediction modes of two blocks adjacent to the target boundary, difference in motion vectors, whether reference pictures are the same, whether non-zero significant coefficients exist, and the like.
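The bS decision described above can be sketched as a small decision function. This is a simplified, hedged version (the standards add further conditions and sub-cases, and the inputs here are assumed to be precomputed flags):

```python
def boundary_strength(p_intra, q_intra, p_nonzero_coeffs, q_nonzero_coeffs,
                      same_ref_picture, mv_diff_ge_threshold):
    """Simplified deblocking boundary-strength (bS) decision for the two
    blocks P and Q adjacent to the target boundary: bS=2 if either block
    is intra-coded, bS=1 if non-zero significant coefficients exist or
    the motion differs significantly, otherwise bS=0 (no filtering)."""
    if p_intra or q_intra:
        return 2
    if p_nonzero_coeffs or q_nonzero_coeffs:
        return 1
    if not same_ref_picture or mv_diff_ge_threshold:
        return 1
    return 0
```

A bS of 0 skips filtering for that boundary, while higher values select stronger filtering.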
- SAO is a method of compensating for an offset difference between a reconstructed picture and an original picture in units of samples, and may be applied based on types such as band offset and edge offset, for example.
- samples may be classified into different categories according to each SAO type, and an offset value may be added to each sample based on the category.
- Filtering information for SAO may include information on whether SAO is applied, SAO type information, SAO offset value information, and the like.
- SAO may be applied to a reconstructed picture after applying the deblocking filtering.
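The band-offset flavor of SAO described above can be sketched as follows, assuming the common design in which the sample range is split into 32 equal bands and offsets are signaled for four consecutive bands starting at a signaled band position:

```python
def band_offset(sample, band_position, offsets, bit_depth=8):
    """Hedged sketch of SAO band offset: classify the sample into one of
    32 equal-width bands and add the corresponding signaled offset only
    if the band falls in the four consecutive bands starting at
    band_position; other samples pass through unchanged."""
    band = sample >> (bit_depth - 5)       # 32 bands -> top 5 bits
    if band_position <= band < band_position + 4:
        return sample + offsets[band - band_position]
    return sample
```

For 8-bit samples each band spans 8 intensity values, so sample 70 falls in band 8 and receives an offset only if the signaled band position covers band 8.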
- ALF (Adaptive Loop Filter) is a technique of filtering a reconstructed picture in units of samples based on filter coefficients according to a filter shape.
- the video encoding apparatus 100 may determine whether to apply ALF, an ALF shape, and/or an ALF filtering coefficient by comparing the reconstructed picture and the original picture, and signal them to the video decoding apparatus 200. That is, filtering information for ALF may include information on whether ALF is applied or not, ALF filter shape information, ALF filtering coefficient information, and the like. ALF may be applied to a reconstructed picture after applying the deblocking filtering.
- A sample adaptive offset may be applied to the reconstructed signal after deblocking filtering, using offsets designated for each CTB by the video encoding apparatus 100.
- the image encoding apparatus 100 may first determine whether to apply the SAO process to the current slice. If SAO is applied to the current slice, each CTB can be classified as one of the five SAO types shown in Table 2 below.
- The concept of SAO may be to reduce distortion by classifying pixels into categories and adding an offset to the pixels of each category.
- The SAO operation may include an edge offset, which uses edge attributes for pixel classification (SAO types 1 to 4), and a band offset, which uses pixel intensity for pixel classification (SAO type 5).
- Each applicable CTB may have SAO parameters including sao_merge_left_flag, sao_merge_up_flag, SAO type and four offsets, etc. If the value of sao_merge_left_flag is equal to the first value (e.g., 1), SAO can be applied to the current CTB by reusing the SAO type and offsets of the left CTB. If the value of sao_merge_up_flag is equal to the first value (e.g., 1), SAO can be applied to the current CTB by reusing the SAO type and offsets of the upper CTB.
- (a) of FIG. 9 shows a 0-degree 1-D 3-pixel pattern, (b) of FIG. 9 shows a 90-degree 1-D 3-pixel pattern, (c) of FIG. 9 shows a 135-degree 1-D 3-pixel pattern, and (d) of FIG. 9 shows a 45-degree 1-D 3-pixel pattern.
- Each CTB can be classified into one of five categories according to Table 3 below.
- If the value of the current pixel p is greater than the values of both neighboring pixels (local maximum), it can be classified as category 4. If the value of p is greater than the value of one neighboring pixel and equal to the value of the other (edge), it can be classified as category 3. If the value of p is smaller than the value of one neighboring pixel and equal to the value of the other (edge), it can be classified as category 2. If the value of p is smaller than the values of both neighboring pixels (local minimum), it can be classified as category 1.
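- The category decision described above (Table 3) can be sketched as a small function. The function name and argument order are illustrative; n0 and n1 denote the two neighbors along the selected 1-D pattern.

```cpp
#include <cassert>

// Sketch of SAO edge-offset category classification: p is the current
// pixel, n0/n1 its two neighbors along the chosen 1-D direction.
// Returns 0 when none of the four categories applies (no offset added).
int saoEdgeCategory(int p, int n0, int n1) {
  if (p < n0 && p < n1) return 1;                           // local minimum
  if ((p < n0 && p == n1) || (p == n0 && p < n1)) return 2; // concave edge
  if ((p > n0 && p == n1) || (p == n0 && p > n1)) return 3; // convex edge
  if (p > n0 && p > n1) return 4;                           // local maximum
  return 0;                                                 // flat / monotonic
}
```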
- all pixels in one CTB area can be classified into 32 single bands by using the five most significant bits of the pixel value as a band index. That is, the pixel intensity range can be divided into 32 equal segments from 0 to the maximum intensity value (e.g., 255 for 8-bit pixels).
- Four adjacent bands are grouped together, and each group can be indicated by its leftmost (starting) band position, as illustrated in FIG. 10.
- The image encoding apparatus 100 may search all band positions and compensate the offset of each band to obtain the group that maximizes distortion reduction.
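- The band-offset classification described above can be sketched as follows. The five most significant bits give one of 32 band indices, and an offset is applied only within a group of four consecutive bands starting at a signaled band position; the function names and the offsets argument are illustrative.

```cpp
#include <cassert>

// Sketch of SAO band-offset classification: the 5 MSBs of the pixel value
// select one of 32 equal bands over the intensity range.
int saoBandIndex(int pixel, int bitDepth = 8) {
  return pixel >> (bitDepth - 5);  // 5 MSBs -> band 0..31
}

// Apply an offset only if the pixel's band falls inside the group of four
// consecutive bands starting at the signaled bandPosition.
int applyBandOffset(int pixel, int bandPosition, const int offsets[4],
                    int bitDepth = 8) {
  const int rel = saoBandIndex(pixel, bitDepth) - bandPosition;
  if (rel >= 0 && rel < 4) pixel += offsets[rel];  // inside the 4-band group
  return pixel;
}
```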
- A CC-SAO (cross-component sample adaptive offset) coding tool has been proposed to improve compression efficiency.
- An example of the decoding process of CC-SAO is shown in FIG. 11 .
- In CC-SAO, similarly to SAO, reconstructed samples can be classified into different categories, one offset can be derived for each category, and the derived offset can be added to the reconstructed samples of the corresponding category.
- Unlike SAO, which uses only a single luma/chroma component of the current sample as input, CC-SAO uses all three components to classify the current sample into different categories.
- output samples of the deblocking filter can be used as input to CC-SAO.
- In order to achieve a better complexity/performance trade-off, only the band offset may be used to improve the quality of reconstructed samples.
- three candidate samples may be selected to classify the corresponding sample into different categories.
- the three candidate samples may be one collocated Y sample, one collocated U sample, and one collocated V sample.
- The sample values of these three selected samples can be classified into three different bands {band_Y, band_U, band_V}.
- Joint index i may be used to indicate the category of the corresponding sample.
- One offset may be signaled and added to reconstructed samples belonging to the corresponding category. The classification into the three bands {band_Y, band_U, band_V}, the derivation of the joint index i, and the summation of offsets can be expressed as Equation 1 below.

[Equation 1]
band_Y = (Y_col · N_Y) >> BD
band_U = (U_col · N_U) >> BD
band_V = (V_col · N_V) >> BD
i = band_Y · (N_U · N_V) + band_U · N_V + band_V
C′_rec = Clip(C_rec + σ_CCSAO[i])
- {Y_col, U_col, V_col} may indicate the three selected collocated samples, and the three selected collocated samples may be used to classify the current sample.
- {N_Y, N_U, N_V} may indicate the numbers of equally divided bands applied to the full ranges of {Y_col, U_col, V_col}, respectively, and BD may indicate the internal coding bit depth.
- C_rec may indicate a reconstructed sample before CC-SAO is applied, and C′_rec may indicate the reconstructed sample after CC-SAO is applied. σ_CCSAO[i] may represent the value of the CC-SAO offset applied to the i-th band offset (BO) category.
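- The classification and offset summation described above can be sketched as follows, mirroring the per-pixel computation in the code listings at the end of this description. Variable names (nY/nU/nV for {N_Y, N_U, N_V}, bd for the bit depth, offsets for σ_CCSAO) are illustrative; this is a simplified model, not the reference implementation.

```cpp
#include <algorithm>

// Sketch of CC-SAO: map the three collocated samples to bands, form the
// joint index i, add the signaled offset for that category, and clip to
// the valid sample range for the bit depth.
int ccsaoApply(int cRec,                      // reconstructed sample before CC-SAO
               int yCol, int uCol, int vCol,  // collocated Y/U/V sample values
               int nY, int nU, int nV,        // numbers of bands per component
               int bd,                        // internal coding bit depth
               const int* offsets)            // offset per joint category
{
  const int bandY = (yCol * nY) >> bd;
  const int bandU = (uCol * nU) >> bd;
  const int bandV = (vCol * nV) >> bd;
  const int i = bandY * nU * nV + bandU * nV + bandV;  // joint index
  return std::min(std::max(cRec + offsets[i], 0), (1 << bd) - 1);
}
```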
- The location of the collocated luma sample (e.g., position 4 of 'Collocated & neighboring Y') can be selected from among 9 candidate positions (e.g., positions 0 to 8 of 'Collocated & neighboring Y'), while the locations of the collocated chroma samples (e.g., position 4 of 'Collocated U' and 'Collocated V') are fixed. Information on each classifier (e.g., the location of Y_col, N_Y, N_U, N_V, and the offsets) may be signaled.
- The maximum values of {N_Y, N_U, N_V} can be set to {16, 4, 4}, and the offsets can be limited to the range [-15, 15].
- The maximum number of classifiers per frame can be limited to 4.
- CC-SAO filtering may be performed on the premise of a 4:2:0 chroma format. That is, it can be said that the conventional CC-SAO is optimized for the 4:2:0 chroma format.
- In the 4:2:0 chroma format, one chrominance pixel (chroma sample) may correspond to four luminance pixels (luma samples). That is, in the 4:2:0 chroma format, one chroma sample is located adjacent to two neighboring luma samples, and CC-SAO filtering can be performed using one of the nine neighboring luma samples corresponding to the current chroma sample.
- a 4:2:2 chroma format and a 4:4:4 chroma format may also exist, as illustrated in FIGS. 4(b) and 4(c).
- In the 4:2:2 chroma format and the 4:4:4 chroma format, the distribution positions and numbers of chroma samples and their corresponding neighboring luma samples differ from those of the 4:2:0 chroma format. Therefore, if the CC-SAO method optimized for the 4:2:0 chroma format is applied as it is, the algorithm efficiency may be reduced.
- Embodiments according to the present disclosure propose methods for efficiently applying CC-SAO according to various color formats (chroma formats) and various chroma sample types in order to solve the problems of the conventional CC-SAO. That is, embodiments of the present disclosure propose methods for applying CC-SAO in various chroma formats and various chroma sample positions.
- FIG. 13 is a flowchart illustrating an image encoding method according to an embodiment of the present disclosure
- FIG. 14 is a flowchart illustrating an image decoding method according to an embodiment of the present disclosure.
- the video encoding apparatus 100 may derive collocated luma samples and collocated chroma samples from the reconstructed samples based on the chroma format of the reconstructed samples (S1305).
- Collocated luma samples and collocated chroma samples may correspond to each other.
- the collocated luma samples may be collocated Y samples, and the collocated chroma samples may be collocated U samples and collocated V samples.
- Reconstructed samples may be modified reconstructed samples to which deblocking filtering is applied.
- the image encoding apparatus 100 may determine an offset based on collocated luma samples and collocated chroma samples (S1310).
- the offset may be an offset to be used for CC-SAO (CC-SAO offset). That is, the offset may be an offset to be applied to reconstructed samples.
- the image encoding apparatus 100 may perform CC-SAO using an offset. Specifically, the image encoding apparatus 100 may output reconstructed samples (modified reconstructed samples) to which CC-SAO is applied by adding the offset determined in step S1310 to the values of the reconstructed samples.
- the image encoding apparatus 100 may encode parameters related to performing CC-SAO (CC-SAO parameters) in a bitstream form (S1315).
- the image encoding apparatus 100 may encode the first syntax element in the form of a bitstream (S1315).
- a first syntax element (e.g., sps_sao_enabled_flag) may indicate whether CC-SAO is activated.
- When the value of the first syntax element is equal to the first value (e.g., 1), this may indicate that CC-SAO is activated. When the value of the first syntax element is equal to the second value (e.g., 0), this may indicate that CC-SAO is not activated.
- the video decoding apparatus 200 may derive collocated luma samples and collocated chroma samples from the reconstructed samples based on the chroma format of the reconstructed samples (S1415). As described above, collocated luma samples and collocated chroma samples may correspond to each other, and reconstructed samples may be modified reconstructed samples to which deblocking filtering is applied.
- the image decoding apparatus 200 may determine an offset based on collocated luma samples and collocated chroma samples (S1420).
- the offset may be an offset to be used for CC-SAO (CC-SAO offset). That is, the offset may be an offset to be applied to reconstructed samples.
- the video decoding apparatus 200 may perform CC-SAO using an offset and/or a CC-SAO parameter. Specifically, the image decoding apparatus 200 may output reconstructed samples (modified reconstructed samples) to which CC-SAO is applied by adding the offset determined in step S1420 to the values of the reconstructed samples.
- The video decoding apparatus 200 obtains a first syntax element (e.g., sps_sao_enabled_flag) from the bitstream (S1405), and determines whether CC-SAO is activated based on the value of the first syntax element (S1410). For example, when the value of the first syntax element is equal to the first value (e.g., 1), this may indicate that CC-SAO is activated. In this case, the video decoding apparatus 200 may determine that CC-SAO is activated and perform CC-SAO through the processes of step S1415 and below.
- On the other hand, when the value of the first syntax element (e.g., sps_sao_enabled_flag) is equal to the second value (e.g., 0), the video decoding apparatus 200 may determine that CC-SAO is not activated and may not perform the processes of step S1415 and below.
- Embodiment 1 is an embodiment for deriving collocated luma samples (S1305 and S1415).
- Positions of chroma samples do not necessarily coincide with positions of corresponding luma samples. Therefore, in the conventional CC-SAO, one of the 9 neighboring luma samples (candidate luma samples) collocated with the chroma samples is used for CC-SAO, and information on the luma sample used for CC-SAO (the collocated luma sample) is transmitted to the video decoding apparatus 200.
- In the conventional CC-SAO, in order to select the luma sample (collocated luma sample) to be used for CC-SAO among the 9 candidate luma samples, a test (CC-SAO test) must be performed on all 9 candidate luma samples, so the complexity of the image encoding apparatus 100 increases.
- In addition, since information on the collocated luma sample must be transmitted for each CTU, the CC-SAO filter information transmitted to the video decoding apparatus 200 increases.
- Embodiment 1 may correspond to a method of performing CC-SAO using only one luma sample at a fixed location, or performing CC-SAO using one derived luma sample value. Therefore, according to Embodiment 1, since the CC-SAO test does not have to be performed, the complexity of the image encoding apparatus 100 can be reduced, and since information on the collocated luma sample does not need to be transmitted, compression efficiency can be improved.
- Embodiment 1 can be divided into the following embodiments.
- Embodiment 1-1 is an embodiment optimized for a 4:4:4 chroma format.
- In the 4:4:4 chroma format, luma samples (e.g., position 4 of 'Collocated & neighboring Y') and chroma samples (e.g., position 4 of 'Collocated U' and position 4 of 'Collocated V') are located at the same positions.
- the video encoding apparatus 100 may derive a luma sample positioned at the same position as the collocated chroma samples as the collocated luma sample. In this case, a CC-SAO test may not be performed on candidate luma samples, and information on collocated luma samples may not be transmitted.
- the image decoding apparatus 200 may derive a luma sample positioned at the same location as the collocated chroma samples as a collocated luma sample based on the fact that the chroma format of the reconstructed samples is a 4:4:4 chroma format.
- Embodiment 1-2 is an embodiment in which one luma sample at a fixed position among the candidate luma samples is selected as the collocated luma sample.
- The method of Embodiment 1-2 can be applied not only to the 4:4:4 chroma format, but also to the 4:2:2 chroma format and the 4:2:0 chroma format.
- the video encoding apparatus 100 may derive a luma sample positioned at a predetermined position among candidate luma samples as a collocated luma sample.
- Here, the candidate luma samples may be luma samples located at and around the positions of the collocated chroma samples.
- the predetermined chroma format may be one or more of a 4:4:4 chroma format, a 4:2:2 chroma format, and a 4:2:0 chroma format.
- As another example, the predetermined chroma format may be the 4:2:2 chroma format or the 4:2:0 chroma format.
- For example, among a predetermined number of candidate luma samples (e.g., positions 0 to 8 of 'Collocated & neighboring Y'), the luma sample (e.g., position 4 of 'Collocated & neighboring Y') located at the position of the collocated chroma samples (e.g., position 4 of 'Collocated U' and 'Collocated V') can be derived as the collocated luma sample.
- the image decoding apparatus 200 may derive a luma sample positioned at a predetermined position among candidate luma samples as a collocated luma sample based on the fact that the chroma format of the reconstructed samples is a predetermined chroma format.
- Here, the candidate luma samples may be luma samples located at and around the positions of the collocated chroma samples.
- the predetermined chroma format may be one or more of a 4:4:4 chroma format, a 4:2:2 chroma format, and a 4:2:0 chroma format.
- As another example, the predetermined chroma format may be the 4:2:2 chroma format or the 4:2:0 chroma format.
- For example, among a predetermined number of candidate luma samples (e.g., positions 0 to 8 of 'Collocated & neighboring Y'), the luma sample (e.g., position 4 of 'Collocated & neighboring Y') located at the position of the collocated chroma samples (e.g., position 4 of 'Collocated U' and 'Collocated V') can be derived as the collocated luma sample.
- Embodiment 1-3 is an embodiment in which one luma sample value is calculated by filtering all or some of the candidate luma samples, and CC-SAO is performed using that value as the collocated luma sample value.
- The method of Embodiment 1-3 can be applied not only to the 4:4:4 chroma format, but also to the 4:2:2 chroma format and the 4:2:0 chroma format.
- One luma sample value may be derived by averaging all or some of the values of the candidate luma samples. That is, the image encoding apparatus 100 and the image decoding apparatus 200 may derive one luma sample value from all or some of the candidate luma sample values. For example, in the example of FIG. 16, when luma sample No. 4 is selected as the luma sample to be used for CC-SAO, the 'one luma sample value' can be derived using any one of Equations 2 to 7 below.
- Equation 2 shows a method of deriving one luma sample value using the values of candidate luma samples #4, #1, #3, #5, and #7, and Equation 3 shows a method of deriving one luma sample value using the values of candidate luma samples #0 to #7.
- Equation 4 shows a method of deriving one luma sample value using the values of candidate luma samples #4, #7, #3, #5, #6, and #8, and Equation 5 shows a method of deriving one luma sample value using the values of candidate luma samples #1, #2, #7, and #8.
- Equation 6 shows a method of deriving one luma sample value using the values of candidate luma samples #4, #3, and #5, and Equation 7 shows a method of deriving one luma sample value using the values of candidate luma samples #4, #5, #7, and #8.
- In Equations 2 to 7, Y_out represents the luma sample value (one luma sample value) to be applied to CC-SAO, and Y_0 to Y_8 may represent the luma samples at the corresponding numbered positions illustrated in FIG. 16.
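- Since the bodies of Equations 2 to 7 are not reproduced here, the following is only an illustrative sketch of this kind of filtering: a cross-shaped weighted average around candidate luma sample No. 4, with an assumed center weight of 4 and rounding before the shift. The exact weights of the patent's equations may differ.

```cpp
#include <cassert>

// Illustrative sketch (not the patent's exact equations): derive a single
// luma value from candidate samples 4 (center) and 1, 3, 5, 7 (cross
// neighbors of FIG. 16) using an assumed 4-1-1-1-1 weighted average.
int filterLumaCross(int y4, int y1, int y3, int y5, int y7) {
  // (4*center + 4 neighbors + rounding offset) / 8
  return (4 * y4 + y1 + y3 + y5 + y7 + 4) >> 3;
}
```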
- Embodiment 2 relates to the process of deriving collocated luma samples (S1305 and S1415). Specifically, Embodiment 2 is an embodiment of deriving the collocated luma sample from among fewer than 9 candidate luma samples. The method of Embodiment 2 can be applied not only to the 4:4:4 chroma format, but also to the 4:2:2 chroma format and the 4:2:0 chroma format.
- As described above, according to the conventional CC-SAO, in order to select the luma sample (collocated luma sample) to be used for CC-SAO among the 9 candidate luma samples, a test (CC-SAO test) must be performed on all 9 candidate luma samples, so the complexity of the image encoding apparatus 100 increases. In addition, according to the conventional CC-SAO, since information on candidate luma samples must be transmitted to the video decoding apparatus 200 for each CTU, the CC-SAO filter information transmitted to the video decoding apparatus 200 increases.
- Embodiment 2 may correspond to a method of performing CC-SAO using fewer than 9 candidate luma samples. Therefore, according to Embodiment 2, the complexity of the video encoding apparatus 100 can be reduced because the number of candidate luma samples on which the CC-SAO test is performed is reduced, and the compression efficiency can be improved because the amount of positional information of the candidate luma samples is reduced.
- Embodiment 2 can be divided into the following embodiments.
- the image encoding apparatus 100 may determine chroma formats of reconstructed samples. For example, the video encoding apparatus 100 may determine whether a chroma format of reconstructed samples is a predetermined chroma format.
- a given chroma format may include one or more of a 4:4:4 chroma format, a 4:2:2 chroma format, and/or a 4:2:0 chroma format.
- The image encoding apparatus 100 may derive the collocated luma sample from among fewer than 9 candidate luma samples based on the fact that the chroma format of the reconstructed samples is a predetermined chroma format.
- The fewer than 9 candidate luma samples may be selected from luma samples located at and around the positions of the collocated chroma samples.
- For example, the fewer than 9 candidate luma samples may be 3 candidate luma samples (e.g., Nos. 3, 4, and 5 or Nos. 1, 4, and 7 in FIG. 16) located around the positions of the collocated chroma samples.
- As another example, the fewer than 9 candidate luma samples may be 5 candidate luma samples (e.g., Nos. 1, 3, 4, 5, and 7 in FIG. 16) located around the positions of the collocated chroma samples.
- As another example, the fewer than 9 candidate luma samples may be 6 candidate luma samples (e.g., Nos. 3 to 8, or Nos. 1, 2, 4, 5, 7, and 8 in FIG. 16) located around the positions of the collocated chroma samples.
- As another example, the fewer than 9 candidate luma samples may be 4 candidate luma samples (e.g., four samples among Nos. 3 to 8 in FIG. 16, such as Nos. 4, 5, 7, and 8) located around the positions of the collocated chroma samples.
- the video decoding apparatus 200 may determine the chroma format of reconstructed samples. For example, the video decoding apparatus 200 may determine whether a chroma format of reconstructed samples is a predetermined chroma format.
- a given chroma format may include one or more of a 4:4:4 chroma format, a 4:2:2 chroma format, and/or a 4:2:0 chroma format.
- The video decoding apparatus 200 may derive the collocated luma sample from among fewer than 9 candidate luma samples based on the fact that the chroma format of the reconstructed samples is a predetermined chroma format.
- The fewer than 9 candidate luma samples may be selected from luma samples located at and around the positions of the collocated chroma samples. Examples of the fewer than 9 candidate luma samples may be the same as the examples described for the image encoding apparatus 100.
- Embodiment 2-2 is an embodiment in which one luma sample value is calculated by filtering all or some of the candidate luma samples, and CC-SAO is performed using the calculated value as the value of the collocated luma sample.
- Here, the number of candidate luma samples may be fewer than 9.
- The method of Embodiment 2-2 can be applied not only to the 4:4:4 chroma format, but also to the 4:2:2 chroma format and the 4:2:0 chroma format.
- One luma sample value may be derived by averaging all or some of the values of the fewer than 9 candidate luma samples. That is, the image encoding apparatus 100 and the image decoding apparatus 200 may derive one luma sample value from all or some of the values of the fewer than 9 candidate luma samples. For example, in the example of FIG. 16, when luma sample No. 4 is selected as the luma sample to be used for CC-SAO, the 'one luma sample value' can be derived using any one of Equations 8 to 12 below.
- Equation 8 shows a method of deriving one luma sample value using the values of 5 candidate luma samples (Nos. 4, 1, 3, 5, and 7), and Equation 9 shows a method of deriving one luma sample value using the values of 8 candidate luma samples (Nos. 0 to 7).
- Equation 10 shows a method of deriving one luma sample value using the values of 6 candidate luma samples (Nos. 4, 7, 3, 5, 6, and 8), and Equation 11 shows a method of deriving one luma sample value using the values of 3 candidate luma samples (Nos. 4, 3, and 5).
- Equation 12 shows a method of deriving one luma sample value using values of five candidate luma samples (Nos. 4, 1, 3, 5, and 7).
- In Equations 8 to 12, Y_out represents the luma sample value (one luma sample value) to be applied to CC-SAO, and Y_0 to Y_8 represent the (candidate) luma samples at the corresponding numbered positions illustrated in FIG. 16.
- Embodiment 3 includes an embodiment of appropriately setting the coordinates of luma samples and chroma samples when CC-SAO is applied to the 4:2:2 chroma format and the 4:4:4 chroma format (Embodiment 3-1), and an embodiment of determining whether to activate CC-SAO based on the chroma format (Embodiment 3-2).
- COMPONENT_Y, COMPONENT_Cb, and COMPONENT_Cr represent CC-SAO applications of Y/Cb/Cr images, and (x,y) represent the coordinates of blocks to which CC-SAO is applied.
- colU and colV represent positions of chroma samples, and colY represents positions of luma samples.
- the location of the luma sample may be selected from among 9 candidate luma samples, and candPosYY and candPosYX represent the coordinates of the selected luma sample.
- bandNumY, bandNumU, and bandNumV indicate the number of bands of each YUV for band offset setting.
- bands to which each YUV belongs can be calculated as bandY, bandU, and bandV.
- a pixel value dst[x] to which CC-SAO is applied may be determined by adding an offset suitable for classIdx to a value of dst[x], which is a pixel value before CC-SAO filtering.
- The conventional CC-SAO algorithm always assumes that the chroma image is half the size of the luma image.
- Therefore, if this algorithm is applied as it is to the 4:2:2 chroma format, in which the luma image is twice the chroma image (2x in width, 1x in height), or to the 4:4:4 chroma format, in which the luma image is the same size as the chroma image (1x in width, 1x in height), an algorithm error will occur.
- Embodiment 3-1 may correspond to an embodiment in which an error of the CC-SAO algorithm does not occur even for a 4:2:2 chroma format and a 4:4:4 chroma format by modifying the CC-SAO algorithm.
- the image encoding apparatus 100 and the image decoding apparatus 200 may determine chroma formats of reconstructed samples (S1705).
- the image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated luma samples or collocated chroma samples by changing positions of luma samples or chroma samples based on the determination result of step S1705.
- the image encoding apparatus 100 and the image decoding apparatus 200 may set the values of position shift variables (shift_x, shift_y) of luma samples or chroma samples based on the determination result of step S1705 (see Table 5).
- The image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated luma samples by changing the positions of luma samples based on the values of the position shift variables (shift_x, shift_y), or may derive collocated chroma samples by changing the positions of chroma samples.
- the image encoding apparatus 100 and the image decoding apparatus 200 may determine chroma formats of reconstructed samples (S1805).
- Based on the fact that the chroma format of the reconstructed samples is not a 4:4:4 chroma format (NO in S1805), the image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated chroma samples by shifting the horizontal position of the collocated luma sample (x >> shift_x in Table 5) (S1810).
- the image encoding apparatus 100 and the image decoding apparatus 200 may determine chroma formats of reconstructed samples (S1805).
- Likewise, based on the fact that the chroma format of the reconstructed samples is not a 4:4:4 chroma format (NO in S1805), the image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated luma samples by shifting the horizontal position of the collocated chroma samples ((x << shift_x) in Table 5) (S1810).
- the image encoding apparatus 100 and the image decoding apparatus 200 may determine chroma formats of reconstructed samples (S1905).
- Based on the fact that the chroma format of the reconstructed samples is a 4:2:0 chroma format (YES in S1905), the image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated chroma samples by changing the vertical position of the collocated luma sample (shift_y ? (y & 0x1) : 1 in Table 5) (S1910).
- the image encoding apparatus 100 and the image decoding apparatus 200 may determine chroma formats of reconstructed samples (S1905).
- Likewise, based on the fact that the chroma format of the reconstructed samples is a 4:2:0 chroma format (YES in S1905), the image encoding apparatus 100 and the image decoding apparatus 200 may derive collocated luma samples by changing the vertical position of the collocated chroma samples (<< shift_y in Table 5) (S1910).
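- The position-shift derivation of Embodiment 3-1 (Table 5) can be sketched as follows. The function name is illustrative; the idc values follow the usual convention (0 = 4:0:0, 1 = 4:2:0, 2 = 4:2:2, 3 = 4:4:4).

```cpp
#include <cassert>

// Sketch of the Table 5 shift derivation: shift_x/shift_y convert between
// luma and chroma coordinates for each chroma format.
void deriveShifts(int spsChromaFormatIdc, int& shiftX, int& shiftY) {
  shiftX = (spsChromaFormatIdc == 3) ? 0 : 1;  // 4:4:4 -> no horizontal shift
  shiftY = (spsChromaFormatIdc >= 2) ? 0 : 1;  // 4:2:2 / 4:4:4 -> no vertical shift
}
```

For 4:2:0 both shifts are 1 (chroma is half the luma size in both directions); for 4:2:2 only the horizontal shift remains; for 4:4:4 no shift is applied.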
- Embodiment 3-2 may correspond to an embodiment of determining whether to activate CC-SAO based on the chroma format.
- Due to the characteristics of CC-SAO, which uses cross-component properties, CC-SAO cannot be applied to monochrome images (4:0:0, black-and-white images). However, in the conventional CC-SAO, since the first syntax element (e.g., sps_sao_enabled_flag) is always transmitted regardless of the chroma format as shown in Table 6 below, bit efficiency may be reduced.
- Embodiment 3-2 proposes a method of not transmitting the first syntax element when the chroma format is monochrome (Table 7), and a method of determining whether to transmit the first syntax element in consideration of whether SAO is operating (activated) (Table 8).
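- The Table 8 signaling condition described above can be sketched as a single predicate. The function name is illustrative, and chroma format idc 0 is assumed to denote the monochrome (4:0:0) format.

```cpp
#include <cassert>

// Sketch of the condition of Embodiment 3-2 (Table 8): the first syntax
// element is signaled only when SAO is activated and the chroma format is
// not monochrome; otherwise its value is inferred (not activated).
bool shouldSignalCcSaoFlag(bool saoEnabled, int chromaFormatIdc) {
  return saoEnabled && chromaFormatIdc != 0;  // 0 = monochrome (4:0:0)
}
```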
- the video encoding apparatus 100 may determine chroma formats of reconstructed samples (S2005).
- the image encoding apparatus 100 may encode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that the chroma format of the reconstructed samples is not a monochrome format (YES in S2005) (S2010).
- the image encoding apparatus 100 may not encode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that the chroma format of reconstructed samples is a monochrome format (NO in S2005).
- the video decoding apparatus 200 may determine chroma formats of reconstructed samples (S2105).
- the image decoding apparatus 200 may obtain a first syntax element (e.g., sps_sao_enabled_flag) from the bitstream based on the fact that the chroma format of the reconstructed samples is not a monochrome format (YES in S2105) (S2110).
- the image decoding apparatus 200 may not decode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that the chroma format of reconstructed samples is a monochrome format (NO in S2105).
- the value of the first syntax element (e.g., sps_sao_enabled_flag) may be inferred or set as the first value (S2115).
- the first value may be a value indicating that CC-SAO is not activated (false).
- the image encoding apparatus 100 may encode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that the chroma format of reconstructed samples is not a monochrome format (YES in S2005) while SAO is activated (S2010).
- the image encoding apparatus 100 may not encode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that SAO is not activated or the chroma format of reconstructed samples is a monochrome format (NO in S2005).
- the video decoding apparatus 200 may obtain a first syntax element (e.g., sps_sao_enabled_flag) from the bitstream based on the fact that the chroma format of the reconstructed samples is not a monochrome format (YES in S2105) while SAO is activated (S2110).
- The video decoding apparatus 200 may not decode the first syntax element (e.g., sps_sao_enabled_flag) based on the fact that SAO is not activated or the chroma format of reconstructed samples is a monochrome format (NO in S2105).
- In this case, the value of the first syntax element (e.g., sps_sao_enabled_flag) may be inferred or set as the first value.
- the first value may be a value indicating that CC-SAO is not activated (false).
- FIG. 22 is a diagram exemplarily illustrating a content streaming system to which an embodiment according to the present disclosure may be applied.
- a content streaming system to which an embodiment of the present disclosure is applied may largely include an encoding server, a streaming server, a web server, a media storage, a user device, and a multimedia input device.
- the encoding server compresses content input from multimedia input devices such as smart phones, cameras, camcorders, etc. into digital data to generate a bitstream and transmits it to the streaming server.
- When multimedia input devices such as smartphones, cameras, and camcorders directly generate bitstreams, the encoding server may be omitted.
- the bitstream may be generated by an image encoding method and/or an image encoding apparatus to which an embodiment of the present disclosure is applied, and the streaming server may temporarily store the bitstream in a process of transmitting or receiving the bitstream.
- the streaming server transmits multimedia data to a user device based on a user request through a web server, and the web server may serve as a medium informing a user of what kind of service is available.
- When the user requests a desired service from the web server, the web server transmits the request to the streaming server, and the streaming server can transmit multimedia data to the user.
- the content streaming system may include a separate control server, and in this case, the control server may play a role of controlling commands/responses between devices in the content streaming system.
- the streaming server may receive content from a media storage and/or encoding server. For example, when receiving content from the encoding server, the content may be received in real time. In this case, in order to provide smooth streaming service, the streaming server may store the bitstream for a certain period of time.
- Examples of the user devices include mobile phones, smartphones, laptop computers, digital broadcasting terminals, personal digital assistants (PDAs), portable multimedia players (PMPs), navigation devices, slate PCs, tablet PCs, ultrabooks, wearable devices (e.g., smartwatches, smart glasses, head mounted displays (HMDs)), digital TVs, desktop computers, digital signage, and the like.
- Each server in the content streaming system may be operated as a distributed server, and in this case, data received from each server may be distributed and processed.
- The scope of the present disclosure includes software or machine-executable instructions (e.g., operating systems, applications, firmware, programs, etc.) that cause operations according to the methods of the various embodiments to be executed on a device or computer, and a non-transitory computer-readable medium in which such software or instructions are stored and executable on a device or computer.
- An embodiment according to the present disclosure may be used to encode/decode an image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
case COMPONENT_Y:
{
  for (int y = 0; y < height; y++)
  {
    for (int x = 0; x < width; x++)
    {
      const Pel *colY = srcY + x + srcStrideY * candPosYY + candPosYX;
      const Pel *colU = srcU + (x >> 1);
      const Pel *colV = srcV + (x >> 1);
      const int bandY = (*colY * bandNumY) >> bitDepth;
      const int bandU = (*colU * bandNumU) >> bitDepth;
      const int bandV = (*colV * bandNumV) >> bitDepth;
      const int bandIdx = bandY * bandNumU * bandNumV + bandU * bandNumV + bandV;
      const int classIdx = bandIdx;
      dst[x] = ClipPel<int>(dst[x] + offset[classIdx], clpRng);
    }
    srcY += srcStrideY;
    srcU += srcStrideU * (y & 0x1);
    srcV += srcStrideV * (y & 0x1);
    dst += dstStride;
  }
}
break;
case COMPONENT_Cb:
case COMPONENT_Cr:
{
  for (int y = 0; y < height; y++)
  {
    for (int x = 0; x < width; x++)
    {
      const Pel *colY = srcY + (x << 1) + srcStrideY * candPosYY + candPosYX;
      const Pel *colU = srcU + x;
      const Pel *colV = srcV + x;
      const int bandY = (*colY * bandNumY) >> bitDepth;
      const int bandU = (*colU * bandNumU) >> bitDepth;
      const int bandV = (*colV * bandNumV) >> bitDepth;
      const int bandIdx = bandY * bandNumU * bandNumV + bandU * bandNumV + bandV;
      const int classIdx = bandIdx;
      dst[x] = ClipPel<int>(dst[x] + offset[classIdx], clpRng);
    }
    srcY += srcStrideY << 1;
    srcU += srcStrideU;
    srcV += srcStrideV;
    dst += dstStride;
  }
}
break;
const int shift_x = sps_chroma_format_idc == 3 ? 0 : 1;
const int shift_y = sps_chroma_format_idc >= 2 ? 0 : 1;

case COMPONENT_Y:
{
  for (int y = 0; y < height; y++)
  {
    for (int x = 0; x < width; x++)
    {
      const Pel *colY = srcY + x + srcStrideY * candPosYY + candPosYX;
      const Pel *colU = srcU + (x >> shift_x);
      const Pel *colV = srcV + (x >> shift_x);
      const int bandY = (*colY * bandNumY) >> bitDepth;
      const int bandU = (*colU * bandNumU) >> bitDepth;
      const int bandV = (*colV * bandNumV) >> bitDepth;
      const int bandIdx = bandY * bandNumU * bandNumV + bandU * bandNumV + bandV;
      const int classIdx = bandIdx;
      dst[x] = ClipPel<int>(dst[x] + offset[classIdx], clpRng);
    }
    srcY += srcStrideY;
    srcU += srcStrideU * (shift_y ? (y & 0x1) : 1);
    srcV += srcStrideV * (shift_y ? (y & 0x1) : 1);
    dst += dstStride;
  }
}
break;
case COMPONENT_Cb:
case COMPONENT_Cr:
{
  for (int y = 0; y < height; y++)
  {
    for (int x = 0; x < width; x++)
    {
      const Pel *colY = srcY + (x << shift_x) + srcStrideY * candPosYY + candPosYX;
      const Pel *colU = srcU + x;
      const Pel *colV = srcV + x;
      const int bandY = (*colY * bandNumY) >> bitDepth;
      const int bandU = (*colU * bandNumU) >> bitDepth;
      const int bandV = (*colV * bandNumV) >> bitDepth;
      const int bandIdx = bandY * bandNumU * bandNumV + bandU * bandNumV + bandV;
      const int classIdx = bandIdx;
      dst[x] = ClipPel<int>(dst[x] + offset[classIdx], clpRng);
    }
    srcY += srcStrideY << shift_y;
    srcU += srcStrideU;
    srcV += srcStrideV;
    dst += dstStride;
  }
}
break;
Claims (16)
- An image decoding method performed by an image decoding apparatus, the method comprising: determining whether cross component sample adaptive offset (CC-SAO) is enabled, based on a value of a first syntax element; deriving, based on the value of the first syntax element indicating that the CC-SAO is enabled, a collocated luma sample and collocated chroma samples corresponding to each other from reconstructed samples, based on a chroma format of the reconstructed samples; and determining an offset to be applied to the reconstructed samples, based on the collocated luma sample and the collocated chroma samples.
- The image decoding method of claim 1, wherein, based on the chroma format of the reconstructed samples being a 4:4:4 chroma format, a luma sample located at the same position as the collocated chroma samples is derived as the collocated luma sample.
- The image decoding method of claim 1, wherein, based on the chroma format of the reconstructed samples being a predetermined chroma format, a luma sample located at a predetermined position among luma samples neighboring the position of the collocated chroma samples is derived as the collocated luma sample.
- The image decoding method of claim 3, wherein the predetermined chroma format is a 4:2:2 chroma format or a 4:2:0 chroma format.
- The image decoding method of claim 1, wherein, based on the chroma format of the reconstructed samples being a predetermined chroma format, the collocated luma sample is derived from fewer than nine luma samples among luma samples neighboring the position of the collocated chroma samples.
- The image decoding method of claim 5, wherein the predetermined chroma format is any one of a 4:4:4 chroma format, a 4:2:2 chroma format, and a 4:2:0 chroma format.
- The image decoding method of claim 1, wherein the collocated chroma samples are derived by changing positions of chroma samples among the reconstructed samples, based on the chroma format of the reconstructed samples.
- The image decoding method of claim 7, wherein the collocated chroma samples are derived by changing horizontal positions of the chroma samples to a horizontal position of the collocated luma sample, based on the chroma format of the reconstructed samples not being a 4:4:4 chroma format.
- The image decoding method of claim 7, wherein the collocated chroma samples are derived by changing vertical positions of the chroma samples to a vertical position of the collocated luma sample, based on the chroma format of the reconstructed samples being a 4:2:0 chroma format.
- The image decoding method of claim 1, wherein the collocated luma sample is derived by changing a position of a luma sample among the reconstructed samples, based on the chroma format of the reconstructed samples.
- The image decoding method of claim 10, wherein the collocated luma sample is derived by changing a horizontal position of the luma sample to a horizontal position of the collocated chroma samples, based on the chroma format of the reconstructed samples not being a 4:4:4 chroma format.
- The image decoding method of claim 10, wherein the collocated luma sample is derived by changing a vertical position of the luma sample to a vertical position of the collocated chroma samples, based on the chroma format of the reconstructed samples being a 4:2:0 chroma format.
- The image decoding method of claim 1, wherein the first syntax element is obtained from a bitstream based on the chroma format of the reconstructed samples not being a monochrome chroma format, and is inferred to be a value indicating that the CC-SAO is not enabled based on the chroma format of the reconstructed samples being a monochrome chroma format.
- An image encoding method performed by an image encoding apparatus, the method comprising: deriving a collocated luma sample and collocated chroma samples corresponding to each other from reconstructed samples, based on a chroma format of the reconstructed samples; determining a cross component sample adaptive offset (CC-SAO) offset to be applied to the reconstructed samples, based on the collocated luma sample and the collocated chroma samples; and encoding a first syntax element indicating whether the CC-SAO is enabled.
- A method of transmitting a bitstream generated by an image encoding method, the image encoding method comprising: deriving a collocated luma sample and collocated chroma samples corresponding to each other from reconstructed samples, based on a chroma format of the reconstructed samples; determining a cross component sample adaptive offset (CC-SAO) offset to be applied to the reconstructed samples, based on the collocated luma sample and the collocated chroma samples; and encoding a first syntax element indicating whether the CC-SAO is enabled.
- A computer-readable recording medium storing a bitstream generated by an image encoding method, the image encoding method comprising: deriving a collocated luma sample and collocated chroma samples corresponding to each other from reconstructed samples, based on a chroma format of the reconstructed samples; determining a cross component sample adaptive offset (CC-SAO) offset to be applied to the reconstructed samples, based on the collocated luma sample and the collocated chroma samples; and encoding a first syntax element indicating whether the CC-SAO is enabled.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22833712.7A EP4366311A1 (en) | 2021-07-02 | 2022-07-01 | Image encoding/decoding method, method for transmitting bitstream, and recording medium storing bitstream |
CN202280045883.0A CN117581548A (zh) | 2021-07-02 | 2022-07-01 | 图像编码/解码方法、用于发送比特流的方法以及存储比特流的记录介质 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2021-0086963 | 2021-07-02 | ||
KR20210086963 | 2021-07-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023277659A1 true WO2023277659A1 (ko) | 2023-01-05 |
Family
ID=84691987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2022/009548 WO2023277659A1 (ko) | 2022-07-01 | Image encoding/decoding method, method for transmitting bitstream, and recording medium storing bitstream |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4366311A1 (ko) |
CN (1) | CN117581548A (ko) |
WO (1) | WO2023277659A1 (ko) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020236038A1 (en) * | 2019-05-21 | 2020-11-26 | Huawei Technologies Co., Ltd. | Method and apparatus of cross-component prediction |
US20210084340A1 (en) * | 2019-09-16 | 2021-03-18 | Tencent America LLC | Method and apparatus for cross-component filtering |
US20210168385A1 (en) * | 2019-11-29 | 2021-06-03 | Tencent America LLC | Signaling of video coding tools supporting various chroma formats |
US20210176501A1 (en) * | 2019-12-05 | 2021-06-10 | Mediatek Inc. | Methods and Apparatuses of Syntax Signaling Constraint for Cross-Component Adaptive Loop Filter in Video Coding System |
- 2022
- 2022-07-01 WO PCT/KR2022/009548 patent/WO2023277659A1/ko active Application Filing
- 2022-07-01 CN CN202280045883.0A patent/CN117581548A/zh active Pending
- 2022-07-01 EP EP22833712.7A patent/EP4366311A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020236038A1 (en) * | 2019-05-21 | 2020-11-26 | Huawei Technologies Co., Ltd. | Method and apparatus of cross-component prediction |
US20210084340A1 (en) * | 2019-09-16 | 2021-03-18 | Tencent America LLC | Method and apparatus for cross-component filtering |
US20210168385A1 (en) * | 2019-11-29 | 2021-06-03 | Tencent America LLC | Signaling of video coding tools supporting various chroma formats |
US20210176501A1 (en) * | 2019-12-05 | 2021-06-10 | Mediatek Inc. | Methods and Apparatuses of Syntax Signaling Constraint for Cross-Component Adaptive Loop Filter in Video Coding System |
Non-Patent Citations (1)
Title |
---|
C.-W. KUO (KWAI), X. XIU (KWAI), Y.-W. CHEN (KWAI), H.-J. JHU (KWAI), W. CHEN (KWAI), X. WANG (KWAI): "EE2-5.1: Cross-component Sample Adaptive Offset", 23. JVET MEETING; 20210707 - 20210716; TELECONFERENCE; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ), 1 July 2021 (2021-07-01), XP030295926 * |
Also Published As
Publication number | Publication date |
---|---|
CN117581548A (zh) | 2024-02-20 |
EP4366311A1 (en) | 2024-05-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020009556A1 (ko) | Transform-based image coding method and device therefor | |
WO2020218793A1 (ko) | BDPCM-based image coding method and device therefor | |
WO2020046091A1 (ko) | Image coding method based on multiple transform selection and device therefor | |
WO2020017892A1 (ko) | Method and device for subblock-based temporal motion vector prediction | |
WO2020180119A1 (ko) | Image decoding method based on CCLM prediction and device therefor | |
WO2020116961A1 (ko) | Image coding method based on secondary transform and device therefor | |
WO2020167097A1 (ko) | Inter prediction type derivation for inter prediction in an image coding system | |
WO2020149616A1 (ko) | CCLM prediction-based image decoding method and device in an image coding system | |
WO2020055208A1 (ko) | Image prediction method and device for performing intra prediction | |
WO2021015512A1 (ko) | Image encoding/decoding method and device using IBC, and method for transmitting bitstream | |
WO2021034100A1 (ko) | Image decoding method applying lossless coding in an image coding system, and device therefor | |
WO2020185005A1 (ko) | Transform-based image coding method and device therefor | |
WO2023277659A1 (ko) | Image encoding/decoding method, method for transmitting bitstream, and recording medium storing bitstream | |
WO2020130577A1 (ko) | Image coding method based on secondary transform and device therefor | |
WO2020009366A1 (ko) | Image decoding method and device according to intra prediction in an image coding system | |
WO2023014076A1 (ko) | Image encoding/decoding method, method for transmitting bitstream, and recording medium storing bitstream | |
WO2024043745A1 (ko) | Image encoding/decoding method and device based on intra prediction mode using multi reference line (MRL), and recording medium storing bitstream | |
WO2023153797A1 (ko) | Image encoding/decoding method and device, and recording medium storing bitstream | |
WO2024080849A1 (ko) | Image encoding/decoding method and device, and recording medium storing bitstream | |
WO2023204624A1 (ko) | Image encoding/decoding method and device based on convolutional cross-component model (CCCM) prediction, and recording medium storing bitstream | |
WO2023128704A1 (ko) | Image encoding/decoding method and device based on cross-component linear model (CCLM) intra prediction, and recording medium storing bitstream | |
WO2023191404A1 (ko) | Image encoding/decoding method and device based on adaptive MTS, and recording medium storing bitstream | |
WO2023182634A1 (ko) | Image decoding method and device therefor | |
WO2024080706A1 (ko) | Image encoding/decoding method and device, and recording medium storing bitstream | |
WO2023171988A1 (ko) | Image encoding/decoding method and device, and recording medium storing bitstream | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22833712 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18569068 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 202280045883.0 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022833712 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022833712 Country of ref document: EP Effective date: 20240202 |