WO2017065525A2 - Method and apparatus for encoding or decoding an image - Google Patents
Method and apparatus for encoding or decoding an image
- Publication number
- WO2017065525A2 (PCT/KR2016/011497)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- value
- block
- information
- prediction
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 99
- 238000013139 quantization Methods 0.000 claims abstract description 53
- 230000033001 locomotion Effects 0.000 claims description 174
- 239000013598 vector Substances 0.000 claims description 90
- 238000006073 displacement reaction Methods 0.000 claims description 69
- 230000002457 bidirectional effect Effects 0.000 claims description 39
- 230000009466 transformation Effects 0.000 abstract description 25
- 239000000284 extract Substances 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 42
- 230000008569 process Effects 0.000 description 32
- 230000008859 change Effects 0.000 description 16
- 238000012805 post-processing Methods 0.000 description 15
- 230000006870 function Effects 0.000 description 11
- 230000002123 temporal effect Effects 0.000 description 6
- 230000003287 optical effect Effects 0.000 description 5
- 238000004364 calculation method Methods 0.000 description 4
- 238000007906 compression Methods 0.000 description 3
- 230000006835 compression Effects 0.000 description 3
- 238000005192 partition Methods 0.000 description 3
- 230000001419 dependent effect Effects 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 238000005286 illumination Methods 0.000 description 2
- 230000002093 peripheral effect Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 230000007704 transition Effects 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 230000000903 blocking effect Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000013144 data compression Methods 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000002441 reversible effect Effects 0.000 description 1
- 230000011664 signaling Effects 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
- 238000009827 uniform distribution Methods 0.000 description 1
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/109—Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
- H04N19/126—Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/18—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a set of transform coefficients
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/192—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
- H04N19/194—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive involving only two passes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/436—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
- H04N19/45—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder performing compensation of the inverse transform mismatch, e.g. Inverse Discrete Cosine Transform [IDCT] mismatch
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/537—Motion estimation other than block-based
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/57—Motion estimation characterised by a search window with variable size or shape
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present invention relates to a method and apparatus for encoding and decoding an image, and more particularly, to an image encoding and decoding method and apparatus for more efficiently performing prediction, motion compensation, transform, and entropy coding.
- the image data is encoded by a codec according to a predetermined data compression standard, for example, a Moving Picture Experts Group (MPEG) standard, and then stored in a recording medium in the form of a bitstream or transmitted through a communication channel.
- An object of the present invention is to provide an image encoding or image decoding method and an image encoding apparatus or an image decoding apparatus that increase image compression efficiency.
- a method of decoding an image according to an embodiment includes: extracting, from a bitstream, quantized coefficients generated through a core transform, a secondary transform, and quantization; generating inverse-quantized coefficients by performing inverse quantization on the quantized coefficients; generating secondary inverse-transformed coefficients by performing, on a low-frequency component of the inverse-quantized coefficients, a secondary inverse transform corresponding to the secondary transform; and performing, on the secondary inverse-transformed coefficients, a core inverse transform corresponding to the core transform.
- the image decoding method may further include obtaining, from the bitstream, information indicating whether to perform the secondary inverse transform, and the secondary inverse transform may be performed based on that information.
- the secondary inverse transform is performed in units of subgroups in a transform block, and the size of the subgroup may be determined based on the size of the transform block or a quantization parameter.
- the type of the secondary inverse transform performed on the subgroup may be determined based on the position of the subgroup.
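The decoding order set out in the claims above (inverse quantization, then a secondary inverse transform restricted to the low-frequency subgroup, then the core inverse transform) can be sketched as follows. The transform kernels are illustrative stand-ins, not the kernels the patent describes.

```python
# Hypothetical sketch of the decoding order described above: inverse
# quantization first, then a secondary inverse transform applied only to
# the low-frequency (top-left) subgroup, then the core inverse transform.
# Both transforms here are stand-ins, not real codec kernels.

def inverse_quantize(coeffs, qstep):
    # Scale each quantized level back by the quantization step.
    return [[c * qstep for c in row] for row in coeffs]

def secondary_inverse_transform(coeffs, subgroup=2):
    # Applied only to the top-left subgroup x subgroup low-frequency region.
    # A real codec would use a trained non-separable kernel; here we simply
    # negate the region to make the restricted data flow visible.
    out = [row[:] for row in coeffs]
    for y in range(subgroup):
        for x in range(subgroup):
            out[y][x] = -out[y][x]
    return out

def core_inverse_transform(coeffs):
    # Stand-in for the inverse DCT/DST of the core transform.
    return [row[:] for row in coeffs]

def decode_block(quantized, qstep, use_secondary):
    deq = inverse_quantize(quantized, qstep)
    if use_secondary:  # signalled in the bitstream, per the claims above
        deq = secondary_inverse_transform(deq)
    return core_inverse_transform(deq)
```

Note that the secondary inverse transform only ever touches the low-frequency subgroup; the remaining coefficients pass through to the core inverse transform unchanged.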
- an entropy decoding method according to an embodiment uses a first probability model and a second probability model to update the previous probability information used for arithmetic decoding of a previous bin from a received bitstream, thereby obtaining the probability information used for arithmetic decoding of a current bin.
- the size of the window of the first probability model is smaller than the size of the window of the second probability model.
- the size of the window of the first probability model and the size of the window of the second probability model may be obtained from the bitstream.
- one of the window size of the first probability model or the window size of the second probability model may have a fixed value.
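The dual probability model above can be sketched as two exponential moving averages with different window sizes whose outputs are combined. The window values (4 and 32) and the simple averaging rule are illustrative assumptions, not values from the claims.

```python
# Hedged sketch of the two-window probability update described above.
# Each model is an exponential moving average of observed bins; the
# window size W controls adaptation speed (small W = fast, large W = slow).
# The probability used for arithmetic coding combines both models.

def update(prob, binval, window):
    # One-step update: move prob toward the observed bin by 1/window.
    return prob + (binval - prob) / window

def two_model_probability(bins, w_fast=4, w_slow=32, p0=0.5):
    p_fast, p_slow = p0, p0
    for b in bins:
        p_fast = update(p_fast, b, w_fast)
        p_slow = update(p_slow, b, w_slow)
    # Combined estimate used to code/decode the next bin.
    return (p_fast + p_slow) / 2
```

The fast model tracks local statistics quickly but is noisy; the slow model is stable but lags. Averaging the two trades off adaptation speed against estimation variance, which is the motivation for using two window sizes.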
- An image decoding method according to another embodiment includes: extracting prediction mode information of a current block to be decoded from a received bitstream; generating a first prediction block for the current block according to the extracted prediction mode information; extracting, from the bitstream, operation information that uses each pixel constituting the first prediction block and neighboring pixels of each pixel; generating a second prediction block by changing the pixel value of each pixel through an operation on each pixel constituting the first prediction block and the pixels located above and to the left of it, according to the extracted operation information; extracting and reconstructing, from the bitstream, a residual corresponding to the difference between the current block and the second prediction block; and decoding the current block by adding the residual and the second prediction block. The operation information includes parameter index information, which specifies the operation applied to each pixel constituting the first prediction block.
- the parameter index information may be defined for coding units having 2N×2N and N×N sizes.
- the parameter index information may be included in the prediction mode information.
- the parameter index information may be obtained from the bitstream at a coding unit or prediction unit level.
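One plausible form of the post-processing operation described above, sketched under the assumption that the parameter index selects the weights of a weighted average of each pixel with its upper and left neighbors; the (2, 1, 1) weights here are illustrative, not values from the patent.

```python
# Illustrative sketch: each pixel of the first prediction block is replaced
# by a weighted average of itself and the (already post-processed) pixels
# directly above and to the left of it, producing the second prediction
# block. The actual weights would be selected by the signalled parameter
# index; (2, 1, 1) with integer rounding is an assumption.

def post_process(pred):
    h, w = len(pred), len(pred[0])
    out = [row[:] for row in pred]
    for y in range(h):
        for x in range(w):
            above = out[y - 1][x] if y > 0 else pred[y][x]
            left = out[y][x - 1] if x > 0 else pred[y][x]
            # Weighted average with rounding offset, integer arithmetic.
            out[y][x] = (2 * pred[y][x] + above + left + 2) // 4
    return out
```

Because each output pixel depends on already-processed neighbors above and to the left, the operation propagates smoothing in raster order, which is why the claims single out those two neighbor positions.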
- an image decoding method according to another embodiment includes: receiving a bitstream; determining whether to perform pixel-by-pixel motion compensation for the current block based on information, extracted from the bitstream, on whether the pixel-by-pixel motion compensation is restricted; and, when performing the pixel-by-pixel motion compensation, extracting, from the bitstream, a first motion vector and a second motion vector indicating a first corresponding region and a second corresponding region most similar to the current block in a first reference picture and a second reference picture.
- the horizontal displacement vector and the vertical displacement vector may be determined, within a window area of a predetermined size, so as to minimize the difference between a first displacement value obtained by displacing the first corresponding pixel of the first reference picture using its horizontal and vertical gradient values and a second displacement value obtained by displacing the second corresponding pixel of the second reference picture using its horizontal and vertical gradient values.
- an image encoding method and apparatus provided according to various embodiments, and a corresponding image decoding method and apparatus, may perform prediction, motion compensation, transformation, and entropy coding more efficiently.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
- FIG. 3 is a block diagram illustrating a schematic configuration of an image encoding and decoding apparatus for performing core transformation and secondary inverse transformation, according to an embodiment.
- FIG. 4 is a diagram illustrating a subgroup that is an execution unit of a secondary inverse transform, according to an exemplary embodiment.
- FIG. 5 is a diagram illustrating a range in which secondary inverse transform is performed, according to an exemplary embodiment.
- FIG. 6 is a flowchart illustrating an image decoding method, according to an exemplary embodiment.
- FIG. 7 is a block diagram illustrating a configuration of an entropy encoder according to an embodiment.
- FIG. 9 is a diagram for comparing updated probabilities for each of the first probability model and the second probability model, according to an exemplary embodiment.
- FIG. 10 is a block diagram illustrating a configuration of an entropy decoding unit according to an embodiment.
- FIG. 11 is a flowchart illustrating an entropy decoding method according to an embodiment.
- FIG. 12 is a block diagram illustrating a configuration of an image encoding apparatus including a post processor according to an embodiment.
- FIG. 13 is a diagram illustrating an example of a 16x16 intra prediction mode.
- FIG. 15 is a reference diagram for explaining a post-processing operation of a first prediction block, according to an embodiment.
- FIG. 16 is another reference diagram for describing an operation of a post-processing unit, according to an exemplary embodiment.
- FIG. 19 is a flowchart illustrating an image decoding method, according to an exemplary embodiment.
- FIG. 20 is a block diagram illustrating a configuration of a motion compensator according to an exemplary embodiment.
- FIG. 21 is a reference diagram for explaining a block-based bidirectional motion prediction and compensation process according to an embodiment.
- FIG. 22 is a reference diagram for explaining a process of performing pixel-by-pixel motion compensation, according to an exemplary embodiment.
- FIG. 23 is a diagram for describing a process of determining a horizontal displacement vector and a vertical displacement vector, according to an exemplary embodiment.
- FIG. 24 is a reference diagram for explaining a process of calculating horizontal and vertical gradient values, according to an exemplary embodiment.
- FIG. 25 is a reference diagram for explaining a process of calculating horizontal and vertical gradient values, according to another exemplary embodiment.
- FIG. 27 is a flowchart illustrating an image encoding method, according to an exemplary embodiment.
- FIG. 28 is a flowchart illustrating an image decoding method, according to an exemplary embodiment.
- the term “part” refers to a software component or a hardware component such as an FPGA or an ASIC, and a “part” performs certain roles. However, “part” is not limited to software or hardware.
- the “part” may be configured to reside in an addressable storage medium and may be configured to execute on one or more processors.
- a “part” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
- the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
- the "image” may be a static image such as a still image of a video or may represent a dynamic image such as a video, that is, the video itself.
- hereinafter, an image encoding apparatus, an image decoding apparatus, an image encoding method, and an image decoding method according to an embodiment will be described with reference to FIGS. 1 to 28.
- a schematic configuration of an image encoding apparatus and an image decoding apparatus according to an embodiment will be described below with reference to FIGS. 1 and 2.
- a method of encoding and decoding an image using a core transform and a secondary inverse transform according to an embodiment will be described below with reference to FIGS. 3 to 6.
- the entropy encoding and decoding method using the first probability model and the second probability model will be described below with reference to FIGS. 7 through 11.
- the post-processing method of the prediction block according to an embodiment will be described below with reference to FIGS. 12 to 19.
- a method of performing block unit and pixel unit motion compensation according to an embodiment will be described below with reference to FIGS. 20 through 28.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
- the image encoding apparatus 100 may include a transformer 125, a quantizer 130, an entropy encoder 135, an inverse quantizer 145, an inverse transformer 150, a deblocking unit 155, an SAO performer 160, an intra predictor 120, a reconstructed picture buffer 110, and an inter predictor 115.
- the image encoding apparatus 100 performs the operations required to encode image data. That is, the intra predictor 120 performs intra prediction on each intra-mode coding unit of the current image 105, and the inter predictor 115 performs inter prediction on each prediction unit of the inter-mode coding units by using the current image 105 and the reference image obtained from the reconstructed picture buffer 110.
- the current image 105 may be divided into maximum coding units, which are then sequentially encoded. In this case, encoding may be performed on the coding units into which each maximum coding unit is split in a tree structure.
- Residual data is generated by subtracting the prediction data for the coding unit of each mode, output from the intra predictor 120 or the inter predictor 115, from the data of the corresponding coding unit of the current image 105, and the residual data is output as quantized transform coefficients for each transform unit through the transformer 125 and the quantizer 130.
- the quantized transform coefficients are reconstructed into residual data of the spatial domain through the inverse quantization unit 145 and the inverse transform unit 150.
- the residual data of the reconstructed spatial domain is added to the prediction data of the coding unit of each mode output from the intra predictor 120 or the inter predictor 115, so that the data of the spatial domain for the coding unit of the current image 105 is restored.
- the reconstructed spatial area data is generated as a reconstructed image through the deblocking unit 155 and the SAO performing unit 160.
- the generated reconstructed image is stored in the reconstructed picture buffer 110.
- the reconstructed images stored in the reconstructed picture buffer 110 may be used as reference images for inter prediction of other images.
- the transform coefficients quantized by the transformer 125 and the quantizer 130 may be output as the bitstream 140 through the entropy encoder 135.
- the inter predictor 115, the intra predictor 120, the transformer 125, the quantizer 130, the entropy encoder 135, the inverse quantizer 145, the inverse transformer 150, the deblocking unit 155, and the SAO performer 160, which are components of the image encoding apparatus 100 according to an exemplary embodiment, may perform an operation based on each coding unit among the coding units having a tree structure for each maximum coding unit.
- the intra prediction unit 120 and the inter prediction unit 115 determine a partition mode and a prediction mode of each coding unit among coding units having a tree structure in consideration of the maximum size and the maximum depth of the current maximum coding unit.
- the transform unit 125 may determine whether to split the transform unit according to the quad tree in each coding unit among the coding units having the tree structure.
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
- the image decoding apparatus 200 may include an entropy decoder 215, an inverse quantizer 220, an inverse transformer 225, a deblocking unit 245, an SAO performer 250, an intra predictor 240, a reconstructed picture buffer 230, and an inter predictor 235.
- the entropy decoding unit 215 obtains encoded image data to be decoded from the bitstream 205 and encoding information necessary for decoding.
- the encoded image data consists of quantized transform coefficients, from which the inverse quantizer 220 and the inverse transformer 225 reconstruct the residual data.
- the intra predictor 240 performs intra prediction on the coding unit of the intra mode.
- the inter predictor 235 performs inter prediction, using a reference image obtained from the reconstructed picture buffer 230, on each coding unit of the inter mode in the current image.
- the data of the spatial domain of the coding unit of the current image 205 is restored by adding the reconstructed residual data to the prediction data, and the restored data of the spatial domain may be output as a reconstructed image 260 through the deblocking unit 245 and the SAO performer 250.
- the reconstructed images stored in the reconstructed picture buffer 230 may be used as reference images.
- the entropy decoder 215, the inverse quantizer 220, the inverse transformer 225, the intra predictor 240, the inter predictor 235, the deblocking unit 245, and the SAO performer 250, which are components of the image decoding apparatus 200, may perform an operation based on each coding unit among the coding units having a tree structure for each maximum coding unit.
- the intra predictor 240 and the inter predictor 235 determine a partition mode and a prediction mode for each coding unit among the coding units having a tree structure, and the inverse transformer 225 may determine, for each coding unit, whether to split the transform unit according to a quad tree structure.
- FIG. 3 is a block diagram illustrating a schematic configuration of an image encoding and decoding apparatus that performs secondary transform / inverse transform, according to an embodiment.
- the image encoding apparatus includes a core transformer 310, a secondary transformer 320, and a quantizer 330.
- the image decoding apparatus includes an inverse quantizer 340, a secondary inverse transformer 350, and a core inverse transformer 360.
- the core transformer 310 and the secondary transformer 320 of FIG. 3 may be included in the transformer 125 of FIG. 1, and the quantizer 330 may correspond to the quantizer 130 of FIG. 1.
- the inverse quantizer 340 of FIG. 3 corresponds to the inverse quantizer 220 of FIG. 2, and the secondary inverse transformer 350 and the core inverse transformer 360 may be included in the inverse transformer 225 of FIG. 2.
- the image encoding apparatus divides each frame into blocks, and the core transform unit 310 performs a core transform on each block in the frame.
- the core transformer 310 may generate a core transform coefficient by performing transformation on the residual block for each transformation unit.
- the transformation unit may have a tree structure within the range of a maximum size and a minimum size, and whether the current block is split into sub-blocks may be indicated by a flag for each transformation unit.
- the core transform unit 310 may perform the transformation based on a discrete cosine transform (DCT) and / or a discrete sine transform (DST).
- the core transform coefficients generated through the core transform in the core transformer 310 may be classified into a low frequency component region having a few large transform coefficients and a high frequency component region having many small transform coefficients.
- the secondary transformer 320 may improve quantization performance by performing a secondary transform on the low frequency component region.
- the secondary transform according to an embodiment may be performed based on DCT and/or DST, like the core transform, or may be any other type of transform such as an orthogonal transform or a rotational transform.
- unlike the core transform, which is performed on all coefficients in the block, the secondary transform is performed only on the low frequency component region. Therefore, the secondary transform is not performed on the coefficients of the high frequency component region among the core transform coefficients generated by the core transformer 310.
- the secondary transform unit 320 may perform the secondary transform in units of subgroups smaller than the size of the transform unit.
- the secondary transform unit 320 may perform secondary transform on the transform coefficients of the low frequency component included in the upper left 8x8 subgroup that is part of the transform unit.
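The relationship between the core transform coefficients and the secondary transform can be illustrated with a minimal Python sketch. The patent does not fix the secondary transform's kernel, so an 8-point orthonormal DCT-II applied separably is assumed here purely for illustration, and the function names are hypothetical:

```python
import math

N = 8  # side of the upper-left low-frequency subgroup

def dct_matrix(n):
    """n x n orthonormal DCT-II matrix (an assumed secondary-transform kernel)."""
    m = []
    for k in range(n):
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        m.append([scale * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                  for i in range(n)])
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def secondary_transform(core_coeffs):
    """Apply the secondary transform only to the upper-left N x N subgroup
    of core transform coefficients; high-frequency coefficients are untouched."""
    d = dct_matrix(N)
    sub = [row[:N] for row in core_coeffs[:N]]
    sub = matmul(matmul(d, sub), transpose(d))  # D * S * D^T
    out = [list(row) for row in core_coeffs]
    for i in range(N):
        out[i][:N] = sub[i]
    return out
```

Because only the upper-left subgroup is transformed, the coefficients of the high frequency component region pass through unchanged, matching the behaviour described above; the decoder would apply the transposed matrices to the same subgroup to invert it.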
- the quantization unit 330 may quantize coefficients transformed through the core transformer 310 and the secondary transformer 320.
- the quantized coefficient is provided to the inverse quantization unit 340, and according to an embodiment, the inverse quantization unit 340 generates inverse quantization coefficients by performing inverse quantization on the quantized coefficients.
- the inverse quantization unit 340 may perform inverse quantization in units of subgroups (Q_sub-groups) smaller than the size of a transform unit.
- the secondary inverse transformer 350 generates secondary inverse transform coefficients by performing a secondary inverse transform, corresponding to the secondary transform, on the low frequency component of the inverse quantization coefficients.
- the core inverse transform unit 360 according to an embodiment performs a core inverse transform corresponding to the core transform on the secondary inverse transformed coefficient.
- the secondary inverse transform unit 350 may perform the secondary inverse transform based on the inverse DCT and / or inverse DST. Since the secondary inverse transform is performed on the low frequency components of the inverse quantization coefficients, the secondary inverse transform is not performed on the high frequency components of the inverse quantization coefficients.
- the secondary inverse transform unit 350 may perform the secondary inverse transform in a subgroup (2nd_tr_sub-group) unit smaller than the size of the transform unit.
- the secondary inverse transformer 350 may perform the secondary inverse transform on the inverse quantization coefficients of the low frequency component included in the upper left 8x8 subgroup that is part of the transform unit.
- the core inverse transformer 360 performs inverse transform based on the inverse DCT and/or inverse DST, like the secondary inverse transformer 350; however, unlike the secondary inverse transform, the core inverse transform is performed on all coefficients in the transform unit.
- FIG. 4 is a diagram illustrating a subgroup that is an execution unit of a secondary inverse transform, according to an exemplary embodiment.
- the secondary inverse transform may be performed in the subgroup units 412 and 422 located at the upper left of the transform units 410 and 420. Since the secondary inverse transform is performed in the subgroup units 412 and 422, high parallelism of the decoding operation may be maintained. However, to maintain such parallelism, a constraint may be added on the size of the subgroup unit in which the secondary inverse transform is performed. According to an embodiment, a constraint may be added that the subgroup unit 412 in which the secondary inverse transform is performed must be smaller than the group unit in which the quantization coefficients are obtained from the bitstream (coeff_coding_sub_group 414) or the group unit in which inverse quantization is performed (Q_sub-group 414).
- alternatively, the subgroup unit 422 in which the secondary inverse transform is performed may have the same size as the group unit in which the quantization coefficients are obtained from the bitstream (coeff_coding_sub_group) or the group unit in which inverse quantization is performed (Q_sub-group).
- FIG. 5 is a diagram illustrating a range in which secondary inverse transform is performed, according to an exemplary embodiment.
- the secondary inverse transform may be performed on the subgroup units 512 located on the upper left side of the transform unit 510.
- the subgroup units 512 may be an area corresponding to the low frequency component of the inverse quantization coefficient.
- the secondary inverse transform may be performed on the subgroup units 522 and 524 in the transform unit 520, and the type of secondary inverse transform performed on each subgroup may be determined based on the position of that subgroup. For example, inverse DCT may be performed on the subgroup units 522 and inverse DST may be performed on the subgroup units 524.
- the size of the subgroup unit need not be fixed.
- the image encoding apparatus 100 may compare the costs of the bitstreams of the transform coefficients generated with and without the secondary transform, and adaptively apply the secondary transform so as to achieve the minimum cost. Therefore, when the secondary transform is applied only to some transform units, information on whether to perform the secondary inverse transform needs to be signaled to the image decoding apparatus 200.
- the image decoding apparatus 200 may obtain and use information on whether to perform secondary inverse transform from a bitstream.
- Information on whether to perform the secondary inverse transform may be included in a bitstream associated with various data units.
- the image decoding apparatus 200 may use information on whether to perform the secondary inverse transform included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, or a slice segment header.
- the image decoding apparatus 200 may obtain, from the bitstream, a syntax element corresponding to the information on whether to perform the secondary inverse transform for each maximum coding unit, reference coding unit, prediction unit, transform unit, or subgroup unit within the transform unit.
- FIG. 6 is a flowchart illustrating an image decoding method, according to an exemplary embodiment.
- in operation S620, the image decoding apparatus 200 generates inverse quantization coefficients by performing inverse quantization on the quantization coefficients.
- the image decoding apparatus 200 generates secondary inverse transform coefficients by performing a secondary inverse transform, corresponding to the secondary transform, on the low frequency component of the inverse quantization coefficients.
- the image decoding apparatus 200 performs core inverse transform corresponding to the core transform on the secondary inverse transformed coefficient.
- the entropy encoder 135 of the image encoding apparatus 100 entropy-encodes the encoding information generated for each coding unit, for example, syntax elements such as the quantized transform coefficients, the prediction mode, the quantization parameter, and the motion vector of the prediction unit. Specifically, the entropy encoder 135 performs context-based binary arithmetic coding (hereinafter "CABAC") on the syntax elements. Also, the entropy decoder 215 of the image decoding apparatus 200 according to an embodiment performs entropy decoding to obtain the syntax elements of the information to be decoded.
- FIG. 7 is a block diagram illustrating a configuration of an entropy encoder according to an embodiment.
- the entropy encoder 700 of FIG. 7 corresponds to the entropy encoder 135 of FIG. 1.
- the entropy encoder 700 includes a binarizer 710, a context modeler 720, and a binary arithmetic coder 730.
- the binary arithmetic coding unit 730 includes a regular coding engine 732 and a bypass coding engine 734.
- the context modeler 720 provides the regular coding unit 732 with a probabilistic model for the current coded symbol.
- the context modeler 720 determines the probability of a predetermined binary value based on previously encoded symbols, and outputs to the binary arithmetic coder 730 the occurrence probability of the binary value used to encode the binary value of the current symbol. A conventional context modeler provides a probability model for a coded symbol through a context index (ctxIdx) representing information about the occurrence probability of the Most Probable Symbol (MPS) and about which of the binary values 0 and 1 corresponds to the MPS.
- the context modeler 720 according to an embodiment, by contrast, does not distinguish between the MPS and the Least Probable Symbol (LPS); it determines P(1), the occurrence probability of a predetermined binary value, for example "1", based on the previously encoded symbols, and provides it to the binary arithmetic coder 730.
- the context modeler 720 updates the probability of the predetermined binary value using a plurality of probability models, according to the binary value of the current coded symbol. The process of updating the probability of the predetermined binary value will be described later.
- the regular coding engine 732 performs binary arithmetic coding based on the probability of the predetermined binary value provided from the context modeler 720 and the binary value of the current coding symbol. That is, the regular coding engine 732 may determine the occurrence probability P(1) of "1" and the occurrence probability P(0) of "0" from the probability of the predetermined binary value provided by the context modeler 720, and performs binary arithmetic coding by narrowing the range representing the probability interval according to the binary value of the current coding symbol, based on the determined occurrence probabilities P(0) and P(1).
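The interval-narrowing step performed by the regular coding engine can be illustrated with a toy floating-point sketch. The normative engine uses integer range arithmetic with renormalization and updates the probability per symbol; the fixed `p1` and the function name here are simplifications for illustration only:

```python
def encode_symbols(bits, p1):
    """Toy arithmetic-coding interval subdivision.

    p1 is the (fixed, for illustration) occurrence probability of "1".
    Each symbol narrows the current interval [low, low + width):
    the lower sub-interval is assigned to "0", the upper one to "1".
    """
    low, width = 0.0, 1.0
    for b in bits:
        split = width * (1.0 - p1)  # width of the sub-interval for "0"
        if b == 0:
            width = split
        else:
            low += split
            width -= split
    # any value inside [low, low + width) identifies the encoded sequence
    return low, width
```

With p1 = 0.5 every symbol halves the interval, so n symbols leave an interval of width 2^-n, i.e. exactly n bits of information.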
- FIG. 8 is a diagram briefly illustrating an update process of a probability model.
- when the probability state index pStateIdx of the conventional probability function is determined, if the value of the coded symbol is the value designated as the MPS, the probability state index pStateIdx is updated so that the probability state transitions to a forward state (state σ+1) in which the LPS probability is lower than in the current probability state (state σ); if the value of the coded symbol is not the MPS, that is, it is the value designated as the LPS, the probability state index pStateIdx is updated so that the probability state transitions to a backward state (state σ-k, k>0) in which the LPS probability is higher.
- the probability state index pStateIdx is updated to A after encoding the current encoded symbol. If the value of the next coded symbol is a value designated by MPS, the probability state index pStateIdx is updated from A to B again.
- the probability function may have an exponential form as shown.
- in this exponential probability function, the probability values of the LPS close to zero are very densely distributed, while the probability values of the LPS close to 1/2 are very sparsely distributed. Therefore, when the occurrence probabilities of the binary values 0 and 1 are similar, that is, when each is close to 1/2, the representable probabilities are sparse and the prediction error of the probability may increase.
- also, since probability values close to zero must be expressed in detail, the bit depth for representing such probability values may increase. Accordingly, the size of a look-up table for storing a probability model having an exponential form may increase.
- in addition, the amount of multiplication operations increases when such finely represented probability values are used, which may be a burden on the hardware.
- the context modeler 720 determines a probability of generating binary values of 0 and 1 based on a probability function having a uniform distribution.
- the context modeler 720 may update the probability of a predetermined binary value using a plurality of probability models.
- in Equation 1, P_new is the updated probability of the LPS, P_old is the probability of the LPS used for the arithmetic coding of the current coded symbol, and W (an integer) is the number of previously encoded symbols, referred to as the window size. 1/W is a scaling factor, and y has a value of 0 if the current coded symbol has the binary value corresponding to the MPS, and a value of 1 if the current coded symbol has the binary value corresponding to the LPS.
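The body of Equation 1 is not reproduced in this text; assuming it is the usual exponential update P_new = (1 − 1/W)·P_old + y/W implied by the description of 1/W and y, a one-line sketch is:

```python
def update_probability(p_old, symbol_is_lps, window):
    """Single-model update assumed for Equation 1:
    P_new = (1 - 1/W) * P_old + y / W,
    where y = 1 if the current symbol is the LPS, else 0.
    Rearranged as P_old + (y - P_old) / W to make the exponential
    moving-average form explicit."""
    y = 1.0 if symbol_is_lps else 0.0
    return p_old + (y - p_old) / window
```

A small window W reacts quickly to local statistics but is noisy; a large W is smoother but slow to adapt, which is exactly the trade-off the scaling factor 1/W controls.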
- the scaling factor 1/W determines how sensitively the entire CABAC encoding process responds to noise or errors, and determining an appropriate scaling factor 1/W is a difficult and costly process.
- the context modeler 720 generates a plurality of updated probabilities at the time of the probability update by using a first probability model and a second probability model having different scaling factors, and determines the final updated probability using the probabilities updated through the first probability model and the second probability model.
- the first probability model and the second probability model may have different window sizes.
- the first probability model and the second probability model used in the context modeler 720 according to an embodiment are represented by Equations 2 and 3 below.
- in Equation 2, P0_new is the probability of the predetermined binary value updated through the first probability model using the window size W0, and P0_old is the probability of the predetermined binary value used for the arithmetic coding of the current coded symbol. In Equation 3, P1_new is the probability of the predetermined binary value updated through the second probability model using the window size W1, and P1_old is the probability of the predetermined binary value used for the arithmetic coding of the current coded symbol.
- the magnitude of W0 of the first probability model and the magnitude of W1 of the second probability model may be different from each other. For example, W0 may be less than W1.
- P0_new, P0_old, P1_new, and P1_old represent the occurrence probability of a predetermined binary value, that is, of 0 or 1.
- the probability used in one embodiment represents the occurrence probability of a particular binary value, e.g., "1", rather than of the MPS or the LPS. In the following, it is assumed that the predetermined binary value is 1, that is, that P_new or P_old indicates the occurrence probability of "1".
- the present invention is not limited thereto, and even when P_new or P_old is set to indicate a probability of occurrence of "0", the probability update process according to the embodiment may be similarly applied.
- the context modeler 720 calculates the final update probability P_new using Equation 4 below.
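The two-model update of Equations 2 and 3 can be sketched as below. Since the body of Equation 4 is not reproduced in this text, the final probability is assumed here to be the arithmetic mean of the two updated probabilities, and the class name is hypothetical:

```python
class TwoWindowEstimator:
    """Sketch of the two-model probability update: two exponential
    averages with window sizes W0 < W1 track the occurrence probability
    of "1"; the final estimate (Equation 4) is assumed here to be
    their arithmetic mean."""

    def __init__(self, p_init=0.5, w0=16, w1=256):
        self.p0 = p_init  # fast model, small window W0
        self.p1 = p_init  # slow model, large window W1
        self.w0 = w0
        self.w1 = w1

    def update(self, bit):
        # Equations 2 and 3: exponential updates with scaling 1/W0, 1/W1
        self.p0 += (bit - self.p0) / self.w0
        self.p1 += (bit - self.p1) / self.w1
        # assumed Equation 4: combine the two updated probabilities
        return (self.p0 + self.p1) / 2.0
```

The small-window model adapts quickly after an entropy reset, while the large-window model suppresses noise once the statistics stabilise; combining them avoids having to pick a single compromise window size.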
- Entropy reset means that binary arithmetic coding discards the current probability value and newly performs CABAC encoding based on a predetermined probability value.
- the probability value set as an initial value in the probability update process performed after such a reset process is not an optimal value, and converges to a certain probability value as a result of several update processes.
- the size W0 of the window of the first probability model 910 is set to be smaller than the size W1 of the window of the second probability model 920.
- the window size of the first probability model 910 may be 16, and the window size of the second probability model 920 may be 256.
- FIG. 10 is a block diagram illustrating a configuration of an entropy decoding unit according to an embodiment.
- the entropy decoder 1000 of FIG. 10 may correspond to the entropy decoder 215 of FIG. 2.
- the entropy decoding unit 1000 includes a context modeler 1010, a regular decoding unit 1020, a bypass decoding unit 1030, and an inverse binarization unit 1040.
- the entropy decoder 1000 performs the inverse process of the entropy encoding process performed by the entropy encoder 700 described above.
- the symbol encoded by the bypass coding is output to the bypass decoding unit 1030 and decoded, and the symbol encoded by the regular coding is decoded by the regular decoding unit 1020.
- the regular decoding unit 1020 performs arithmetic decoding on the binary value of the current symbol by using the probability of the predetermined binary value, which the context modeler 1010 determines based on the symbols decoded before the current symbol.
- since a binary value representing a representative value of a predetermined probability interval is transmitted, the regular decoding unit 1020 can decode the encoded symbols by using the occurrence probabilities of 0 and 1.
- the first probability model and the second probability model may be adaptively selected based on the content.
- the first probability model and the second probability model can be determined based on the slice type or the picture type.
- the size W0 of the window of the first probability model and the size W1 of the window of the second probability model are adaptively determined and signaled from the image encoding apparatus 100 to the image decoding apparatus 200 through a bitstream. Can be.
- the values of W0 and W1 may be signaled in units of a picture, a group of pictures (GOP), a slice, and a maximum coding unit.
- one value of W0 and W1 may be set to a fixed value and only the other value may be signaled through the bitstream.
- the value of the fixed window size may be dependent on the picture or slice type.
- both W0 and W1 may have a fixed value, in which case a separate syntax element for indicating the values of W0 and W1 need not be signaled.
- the values of W0 and W1 may be values obtained from the bitstream or values determined based on the slice or picture type.
- FIG. 11 is a flowchart illustrating an entropy decoding method according to an embodiment.
- in operation S1110, the entropy decoder 1000 obtains the probability information to be used for arithmetic decoding of a current bin of the received bitstream by updating, using the first probability prediction model and the second probability prediction model, the previous probability information used for arithmetic decoding of a previous bin.
- the entropy decoder 1000 performs arithmetic decoding based on the obtained probability information to obtain a bin.
- the entropy decoder 1000 inverse binarizes the obtained bin to obtain a syntax element.
- FIG. 12 is a block diagram illustrating a configuration of an image encoding apparatus including a post processor according to an embodiment.
- the image encoding apparatus 1200 may include a predictor 1210, a transform and quantizer 1220, an entropy coding unit 1230, and a post processor 1240.
- FIG. 13 is a diagram illustrating an example of a 16x16 intra prediction mode
- FIG. 14 is a diagram illustrating an example of a 4x4 intra prediction mode.
- the 16×16 intra prediction mode includes four modes: a vertical mode, a horizontal mode, a direct current (DC) mode, and a plane mode.
- the 4×4 intra prediction mode includes a vertical mode, a horizontal mode, a direct current mode, a diagonal down-left mode, and a diagonal down-right mode.
- in the vertical mode, for example, the pixel values of the pixels A to D adjacent to the top of the 4x4 current block are predicted as the pixel values of the 4x4 current block. That is, the value of pixel A is predicted as the four pixel values of the first column of the 4x4 current block, the value of pixel B as the four pixel values of the second column, the value of pixel C as the four pixel values of the third column, and the value of pixel D as the four pixel values of the fourth column.
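The column-copy behaviour described above can be sketched in a few lines (the function name is hypothetical):

```python
def predict_vertical_4x4(above):
    """Vertical intra prediction for a 4x4 block: the four reference
    pixels A-D adjacent to the top of the block are each copied down
    their respective column. `above` is [A, B, C, D]."""
    assert len(above) == 4
    # every row of the prediction block equals the reference row
    return [list(above) for _ in range(4)]
```

The resulting prediction block is constant along each column, which is why this mode works well only when the source block itself has vertical directionality.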
- the prediction block generated through intra prediction extending the value of the neighboring pixel in a predetermined direction has a certain direction according to the prediction mode.
- the directionality inside the prediction block may improve the prediction efficiency when the pixels of the current block to be encoded have a constant directionality, but may decrease the prediction efficiency when the pixels of the current block do not have the directionality.
- the post processor 1240 of the image encoding apparatus according to the present invention performs a post-processing operation on the prediction block generated through intra prediction: it changes the pixel value of each pixel in the prediction block through an operation using that pixel and at least one neighboring pixel, and thereby generates a new prediction block that improves the prediction efficiency of the image.
- the post processor 1240 generates a second prediction block by changing the pixel value of each pixel constituting the first prediction block, performing an operation using each pixel of the first prediction block generated by the intra predictor 1213 and at least one neighboring pixel.
- the intra predictor 1213 generates a first prediction block by applying a general intra prediction method.
- reference numerals 1510 to 1560 indicate a process of changing each pixel value in the first prediction block processed by the post processor 1240 in time order.
- the post processor 1240 changes the pixel value of each pixel of the first prediction block by calculating a weighted average of the pixel value of the pixel to be changed and the pixel values of the neighboring pixels located above and to the left of it.
- when the pixel value of the pixel 1521 of the first prediction block 1510 to be changed is P[1][1], the pixel value of the pixel 1522 located above it is P[0][1], the pixel value of the pixel located to its left is P[1][0], and the value obtained by changing the pixel value P[1][1] of the pixel 1521 is P'[1][1], then P'[1][1] can be calculated as shown in Equation 5 below.
- in Equation 5, α is a first weight parameter given to P[1][1], β is a second weight parameter given to P[0][1], the pixel value of the pixel located above P[1][1], and γ is a third weight parameter given to P[1][0], the pixel value of the pixel located to the left of P[1][1].
- the post processor 1240 changes the pixel value of each pixel of the first prediction block by applying the weights to the pixel value of the pixel to be changed and the pixel values of the pixels located above and to the left of it, proceeding from the uppermost-left pixel toward the lower right over the pixels of the first prediction block.
- the post-processing operation of the prediction block is not limited to the order from the upper-left to the lower-right; it may be performed sequentially for each pixel of the first prediction block from the upper-right to the lower-left, from the lower-right to the upper-left, or from the lower-left to the upper-right. For example, when the pixels are processed from the lower-right to the upper-left, the pixel value of each pixel of the first prediction block is changed by using the pixel to be changed and the pixels located below and to the right of it.
- reference numeral 1610 denotes the first pixel of the first prediction block to be changed, reference numeral 1611 denotes a second pixel located above the first pixel 1610, and reference numeral 1612 denotes a third pixel located to the left of the first pixel 1610. When the size of the first prediction block is m×n (m and n are positive integers), the pixel value of the first pixel 1610 located at position (i, j) inside the first prediction block is P[i][j], the pixel value of the third pixel 1612 located to the left of the first pixel 1610 is P[i][j-1], and the pixel value of the second pixel 1611 located above the first pixel 1610 is P[i-1][j], the pixel value of the first pixel 1610 is changed to P'[i][j].
- in Equation 6, α is a first weight parameter given to P[i][j], β is a second weight parameter given to P[i-1][j], the pixel value of the pixel located above P[i][j], and γ is a third weight parameter given to P[i][j-1], the pixel value of the pixel located to the left of P[i][j].
- the value of the weight parameter ⁇ or ⁇ may be smaller.
- the relationship between the weight parameters ⁇ , ⁇ , and ⁇ may be determined based on the relationship of Equation 7 below.
- in Equation 7, P represents an integer greater than or equal to 0.
- the post processor 1240 generates the second prediction block by applying Equation 6 described above to all the pixels in the first prediction block 1600, changing the pixel values from the upper left to the lower right.
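A minimal sketch of this post-processing pass follows. Two details not spelled out in this text are assumed: the weighted sum of Equation 6 is normalised by a right shift with α+β+γ = 2^P (consistent with the constraint of Equation 7), and pixels on the top row or left column, which lack a neighbour inside the block, reuse their own value in place of the missing one:

```python
def postprocess_prediction(block, alpha, beta, gamma, shift):
    """Generate the second prediction block: each pixel becomes a
    weighted sum of itself (alpha), the pixel above (beta) and the
    pixel to the left (gamma), scanning from the upper-left to the
    lower-right so already-updated neighbours are reused.
    Assumes alpha + beta + gamma == 2**shift; boundary pixels reuse
    their own value for missing neighbours (an assumption)."""
    h, w = len(block), len(block[0])
    out = [row[:] for row in block]
    for i in range(h):
        for j in range(w):
            above = out[i - 1][j] if i > 0 else out[i][j]
            left = out[i][j - 1] if j > 0 else out[i][j]
            # weighted average with rounding offset, normalised by shift
            out[i][j] = (alpha * out[i][j] + beta * above + gamma * left
                         + (1 << (shift - 1))) >> shift
    return out
```

A flat block passes through unchanged (the weights sum to the normaliser), while a block with strong edges is smoothed toward its upper-left neighbours, which is the intended counterweight to the strict directionality of the first prediction block.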
- the image encoding apparatus 1200 may compare the costs of the bitstreams obtained by encoding second prediction blocks generated by applying various weights, and use the weight that yields the second prediction block with the minimum cost. Information about the applied weight, i.e., the first to third weight parameters, may be described as parameter index information as shown in Table 1 below.
- parameter index information may be defined for each of coding units having a size of 2Nx2N and NxN.
- the parameter index information may be signaled from the image encoding apparatus to the image decoding apparatus in various ways.
- the parameter index information may be embedded in the information about the prediction mode.
- the parameter index information may be signaled by being included in information indicating a block size, information indicating a partition type of a block, information indicating a luminance or color difference component, and the like.
- the parameter index information may be explicitly signaled at the level of a coding unit or a prediction unit.
- The image encoding apparatus 1200 may generate the second prediction block by applying a different weight to each block.
- Alternatively, to simplify the calculation, the same weight information may be applied to the blocks included in the same coding unit.
- 17 is a block diagram illustrating a configuration of an image decoding apparatus including a post processor according to an exemplary embodiment.
- The image decoding apparatus 1700 may include an entropy decoder 1710, a reordering unit 1720, an inverse quantization unit 1730, an inverse transform unit 1740, an adder 1745, a motion compensator 1750, an intra predictor 1760, a filter 1770, and a post processor 1780.
- the entropy decoder 1710 receives the compressed bitstream and performs entropy decoding to extract the prediction mode information of the current block and the post-processing operation mode information used for generating the second prediction block.
- the image decoding apparatus 1700 may obtain and use information about a weight parameter used for generating the second prediction block from the bitstream.
- Information about the weight parameter (or parameter index information) may be included in a bitstream associated with various data units.
- the image decoding apparatus 1700 may use parameter index information included in a sequence parameter set, a picture parameter set, a video parameter set, a slice header, and a slice segment header.
- the image decoding apparatus 1700 may obtain and use syntax corresponding to the parameter index information for each maximum coding unit, reference coding unit, and prediction unit from the bitstream.
- the entropy decoder 1710 performs entropy decoding on texture data to extract quantized transform coefficients of the current block.
- the inverse quantization unit 1730 and the inverse transform unit 1740 perform inverse quantization and inverse transformation on the quantized transform coefficients to reconstruct the residual corresponding to the difference between the current block and the second prediction block during encoding.
- the motion compensator 1750 and the intra predictor 1760 generate and output the prediction block according to the prediction mode of the current block.
- The post processor 1780 generates the second prediction block by changing each pixel value of the first prediction block generated by the intra predictor 1760, according to the post-processing information extracted from the bitstream.
- The operation of the post processor 1780 of the decoding apparatus 1700 is the same as that of the post processor 1240 of FIG. 12, except that the second prediction block is generated according to the post-processing operation information extracted from the bitstream.
- the adder 1745 adds the prediction block generated by the motion compensator 1750 and the intra predictor 1760 and the reconstructed residual to decode the current block.
- When post processing has been applied, the adder 1745 adds the second prediction block generated by the post processor 1780 and the reconstructed residual to decode the current block.
- the decoded block is stored in a predetermined memory via the filter 1770 and used as reference data when decoding the next block.
- FIG. 18 is a flowchart illustrating an image encoding method, according to an exemplary embodiment.
- In operation S1810, a first prediction block for the current block to be encoded is generated.
- the first prediction block is an intra prediction block generated by applying a general intra prediction method.
- a bitstream is generated by transforming, quantizing, and entropy encoding a residual, which is a difference between a current block and a second prediction block.
- the decoding apparatus may generate the second prediction block for the current block.
- operation information using each pixel constituting the first prediction block and neighboring pixels of each pixel is extracted from the bitstream.
- According to the extracted operation information, the pixel value of each pixel constituting the first prediction block is changed through an operation on that pixel and the pixels located above and to the left of it, to generate the second prediction block.
- In operation S1960, the residual and the second prediction block are added to decode the current block.
- FIG. 20 is a block diagram illustrating a configuration of a motion compensator according to an exemplary embodiment.
- the motion compensator 2000 of FIG. 20 may be included in the inter predictor 115 of FIG. 1.
- the motion compensator 2000 of FIG. 20 may be included in the inter predictor 235 of FIG. 2.
- the motion compensator 2000 includes a block motion compensator 2010, a pixel motion compensator 2020, and a prediction value generator 2030.
- The block unit motion compensator 2010 performs block-wise bidirectional motion compensation for the current block being encoded, using the bidirectional motion vectors determined by a motion estimator (not shown) included in the inter predictor 115 of FIG. 1 or the inter predictor 235 of FIG. 2.
- the pixel-by-pixel motion compensator 2020 additionally performs pixel-by-pixel motion compensation for each pixel of the current block that is bi-directionally motion-compensated on a block-by-block basis using pixels of reference pictures indicated by the bi-directional motion vectors.
- the prediction value generator 2030 generates a final bidirectional motion prediction value for the current block by using the block-wise bidirectional motion compensation result and the pixel-by-pixel motion compensation result.
- a block-based bidirectional motion prediction and compensation process and a pixel-based bidirectional motion compensation process according to an embodiment will be described in detail.
- 21 is a reference diagram for explaining a block-based bidirectional motion prediction and compensation process according to an embodiment.
- A motion estimator (not shown) included in the inter predictors 115 and 235 performs bidirectional motion prediction to search the first reference picture 2110 and the second reference picture 2120 for the regions most similar to the current block 2101 of the current picture 2100 being encoded.
- the first reference picture 2110 is a picture before the current picture 2100 and the second reference picture 2120 is a picture after the current picture 2100.
- The first corresponding region 2112 most similar to the current block 2101 is determined in the first reference picture 2110, and the second corresponding region 2122 most similar to the current block 2101 is determined in the second reference picture 2120.
- The first motion vector MV1 is determined based on the position difference between the block 2111 co-located with the current block 2101 in the first reference picture 2110 and the first corresponding region 2112, and the second motion vector MV2 is determined based on the position difference between the block 2121 co-located with the current block 2101 in the second reference picture 2120 and the second corresponding region 2122.
- The block unit motion compensation unit 2010 performs block-wise bidirectional motion compensation for the current block 2101 using the first motion vector MV1 and the second motion vector MV2. For example, let the pixel value of the first reference picture 2110 located at (i, j) (i and j are integers) be P0(i, j), and the pixel value of the second reference picture 2120 located at (i, j) be P1(i, j).
- The block unit motion compensator 2010 performs block-wise motion compensation on the current block 2101 using the average or weighted sum of the pixels of the first and second corresponding regions 2112 and 2122 indicated by the first and second motion vectors MV1 and MV2.
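The block-wise step can be sketched as a simple rounded average of the two corresponding regions (the weighted-sum variant mentioned above is omitted; the function name and the rounding offset are illustrative assumptions):

```python
def block_bidirectional_compensation(ref0_block, ref1_block):
    """Block-wise bidirectional motion compensation: average the two
    corresponding regions of the reference pictures indicated by MV1 and MV2.
    The +1 rounding offset before the shift is an assumption."""
    return [[(a + b + 1) >> 1 for a, b in zip(r0, r1)]
            for r0, r1 in zip(ref0_block, ref1_block)]
```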
- The pixel-by-pixel motion compensator 2020 performs pixel-by-pixel motion compensation on the current block 2101 based on the optical flow of the pixels of the first reference picture 2110 and the second reference picture 2120.
- Optical flow refers to the pattern of apparent motion of objects and surfaces caused by relative movement between an observer (an eye or a camera) and a scene.
- In a video sequence, the optical flow can be represented by computing the motion between frames obtained at time t and time t + Δt.
- the pixel value located at (x, y) of the t time frame is defined as I (x, y, t).
- I (x, y, t) is a value that changes in time and space.
- The derivative of I(x, y, t) with respect to time t is given by Equation 8 below.
- Equation 8 may be rearranged as Equation 9 below.
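Equations 8 and 9 are not reproduced in this text (they appeared as images in the original publication); assuming they follow the standard optical-flow derivation that the surrounding sentences describe, the chain rule gives

```latex
\frac{dI(x,y,t)}{dt}
  = \frac{\partial I}{\partial t}
  + \frac{\partial I}{\partial x}\,\frac{dx}{dt}
  + \frac{\partial I}{\partial y}\,\frac{dy}{dt}
  = \frac{\partial I}{\partial t}
  + \frac{\partial I}{\partial x}\,V_x
  + \frac{\partial I}{\partial y}\,V_y ,
```

and setting dI/dt = 0 (the pixel value is constant along the motion trajectory) yields the constraint on Vx and Vy that the rearranged Equation 9 expresses.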
- the magnitude of the displacement vector Vx in the x-axis direction and the displacement vector Vy in the y-axis direction is preferably smaller than the pixel accuracy used for bidirectional motion prediction.
- the magnitudes of Vx and Vy are preferably smaller than 1/4.
- The pixel-by-pixel motion compensator 2020 calculates the displacement vector Vx in the x-axis direction and the displacement vector Vy in the y-axis direction according to Equation 5, and performs pixel-by-pixel motion compensation using the displacement vectors Vx and Vy.
- The pixel-by-pixel motion compensator 2020 calculates the displacement vectors Vx and Vy according to Equation 5 by using the pixels of the first reference picture and the second reference picture determined as a result of the block-wise bidirectional motion prediction.
- 22 is a reference diagram for explaining a process of performing pixel-by-pixel motion compensation, according to an exemplary embodiment.
- The horizontal and vertical gradient values of the first corresponding pixel are defined as GradX0(i, j) and GradY0(i, j), and the horizontal and vertical gradient values of the second corresponding pixel are defined as GradX1(i, j) and GradY1(i, j).
- The temporal distance between the current picture, to which the current block 2200 belongs, and the first reference picture, to which the first corresponding region 2210 belongs, is defined as d0, and the temporal distance between the current picture and the second reference picture, to which the second corresponding region 2220 belongs, is defined as d1.
- As in Equation 10, the derivative with respect to time can be approximated by the amount of change over time between the pixel value P0(i, j) of the first corresponding pixel and the pixel value P1(i, j) of the second corresponding pixel.
- In Equation 13, since the displacement vector Vx in the x-axis direction and the displacement vector Vy in the y-axis direction can change according to the position of the current pixel P(i, j), that is, they are dependent on (i, j), they may be written as Vx(i, j) and Vy(i, j).
- It is assumed that the pixel of the first corresponding region 2210 of the first reference picture most similar to the current pixel P(i, j), which is bidirectionally motion-compensated on a pixel-by-pixel basis, is not the first corresponding pixel P0(i, j) but a first displacement-corresponding pixel PA moved from it by a predetermined displacement vector Vd. Since constant motion in the video sequence is assumed as described above, the pixel most similar to the current pixel P(i, j) in the second corresponding region 2220 of the second reference picture is likewise displaced from the second corresponding pixel P1(i, j).
- The values of the first displacement-corresponding pixel PA and the second displacement-corresponding pixel PB can be expressed by Equations 14 and 15 below, using the displacement vector Vx in the x-axis direction, the displacement vector Vy in the y-axis direction, the horizontal gradient value GradX0(i, j) and vertical gradient value GradY0(i, j) of the first corresponding pixel, and the horizontal gradient value GradX1(i, j) and vertical gradient value GradY1(i, j) of the second corresponding pixel.
- Equation 13 corresponds to the case where Δij is 0, that is, where the values of the first displacement-corresponding pixel PA and the second displacement-corresponding pixel PB are the same.
- The gradient value of each corresponding pixel may be determined by calculating the amount of change of the pixel value at fractional pixel positions in the horizontal and vertical directions of the first and second corresponding pixels, or may be calculated using a predetermined filter.
- The pixel-by-pixel motion compensator 2020 determines the displacement vector Vx in the x-axis direction and the displacement vector Vy in the y-axis direction such that Δij is minimized within a window Ωij 2202 of a predetermined size that includes the pixels around the bidirectionally motion-compensated current pixel P(i, j).
- The case where Δij is 0 is most desirable; however, since displacement vectors Vx and Vy satisfying Δij = 0 for every pixel in the window Ωij 2202 may not exist, the displacement vectors Vx and Vy are determined so that Δij is minimized.
- A window Ωij 2300 of a predetermined size, centered on the bidirectionally predicted pixel P(i, j) of the current block, has a size of (2M+1) * (2N+1) (M and N are integers).
- A pixel P(i', j') belongs to the window Ωij when i−M ≤ i' ≤ i+M and j−N ≤ j' ≤ j+N.
- Let P0(i', j') be the pixel value of the first corresponding pixel of the first reference picture 2310 corresponding to the bidirectionally predicted pixel P(i', j') of the current block, and P1(i', j') be the pixel value of the second corresponding pixel of the second reference picture 2320 corresponding to P(i', j').
- The first displacement-corresponding pixel PA' is given by P0(i', j') + Vx * GradX0(i', j') + Vy * GradY0(i', j'), and the second displacement-corresponding pixel PB' is given by P1(i', j') - Vx * GradX1(i', j') - Vy * GradY1(i', j').
- The displacement vector Vx in the x-axis direction and the displacement vector Vy in the y-axis direction that minimize the difference value Δi'j' between the first displacement-corresponding pixel PA' and the second displacement-corresponding pixel PB' can be determined using the local minimum of Φ(Vx, Vy), the sum of squares of the difference values Δi'j', as in Equation 17 below.
- Φ(Vx, Vy) is a function with Vx and Vy as parameters, and its maximum or minimum can be determined by setting the partial derivatives of Φ(Vx, Vy) with respect to Vx and Vy to 0, as shown in Equations 18 and 19.
- Two linear equations with Vx(i, j) and Vy(i, j) as variables can be obtained, as shown in Equation 20 below.
- In Equation 20, s1 to s6 are as defined in Equation 21 below.
- det1 = s3 * s5 - s2 * s6
- det2 = s1 * s6 - s3 * s4
- det = s1 * s5 - s2 * s4
- Let P0(i, j) be the pixel value of the first corresponding pixel of the first reference picture corresponding to the pixel at position (i, j) of the current block, GradX0(i, j) the horizontal gradient value of the first corresponding pixel, and GradY0(i, j) its vertical gradient value; let P1(i, j) be the pixel value of the second corresponding pixel of the second reference picture corresponding to the pixel at position (i, j) of the current block, GradX1(i, j) the horizontal gradient value of the second corresponding pixel, and GradY1(i, j) its vertical gradient value.
- the prediction value generator 2030 generates a bidirectional motion prediction value as shown in Equation 22 below.
- In Equation 22, (P0(i, j) + P1(i, j)) / 2 corresponds to the block-wise bidirectional motion compensation value, and (Vx * (GradX0(i, j) - GradX1(i, j)) + Vy * (GradY0(i, j) - GradY1(i, j))) / 2 corresponds to the pixel-by-pixel motion compensation value calculated according to an embodiment.
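A minimal sketch of Equation 22 as quoted above (floating-point for clarity; a real codec would use integer arithmetic, and the function and parameter names are illustrative):

```python
def bio_prediction(p0, p1, vx, vy, gx0, gx1, gy0, gy1):
    """Final bidirectional prediction value per Equation 22:
    the block-wise average plus the pixel-wise gradient correction."""
    block_term = (p0 + p1) / 2.0                       # block-wise compensation value
    pixel_term = (vx * (gx0 - gx1) + vy * (gy0 - gy1)) / 2.0  # pixel-wise correction
    return block_term + pixel_term
```

When Vx = Vy = 0 the result reduces to the plain block-wise average, which matches the structure of the equation.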
- Equation 16 is derived on the assumption that the temporal distance d0 between the current picture and the first reference picture and the temporal distance d1 between the current picture and the second reference picture are both 1. If d0 and d1 are not 1, the magnitude of the displacement vector Vd should be scaled in inverse proportion to d0 and d1. That is, the displacement vector of the first reference picture, pointing from the first corresponding pixel to the first displacement-corresponding pixel, is denoted (Vx0, Vy0), and the displacement vector of the second reference picture, pointing from the second corresponding pixel to the second displacement-corresponding pixel, is denoted (Vx1, Vy1).
- Vx and Vy can be calculated by setting the partial derivatives of the function Φ(Vx, Vy) with respect to Vx and Vy to zero, as follows.
- Vx(i, j) = det1 / det
- Vy(i, j) = det2 / det
- det1 = s3 * s5 - s2 * s6
- det2 = s1 * s6 - s3 * s4
- det = s1 * s5 - s2 * s4
- the values of s1 to s6 are as shown in Equation 24 below.
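The Cramer's-rule solution quoted above (Vx = det1/det, Vy = det2/det) can be sketched directly. The determinant expressions imply the 2x2 system s1*Vx + s2*Vy = s3 and s4*Vx + s5*Vy = s6; the zero-determinant fallback is an assumption, since the text does not specify the degenerate case, and the function name is illustrative.

```python
def solve_displacement(s1, s2, s3, s4, s5, s6):
    """Solve the 2x2 linear system of Equation 20 for (Vx, Vy) using the
    determinants given in the text:
        det1 = s3*s5 - s2*s6,  det2 = s1*s6 - s3*s4,  det = s1*s5 - s2*s4."""
    det = s1 * s5 - s2 * s4
    if det == 0:
        return 0.0, 0.0       # assumption: fall back to zero displacement
    det1 = s3 * s5 - s2 * s6
    det2 = s1 * s6 - s3 * s4
    return det1 / det, det2 / det
```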
- Equation 23 is changed as in Equation 25 below, and the prediction value generator 2030 generates a bidirectional motion compensation value according to Equation 25.
- Equation 9 is based on the assumption that the change of the pixel value with time is 0, but the pixel value may change with time. If the change in pixel value according to time is q, Equation 9 is changed to Equation 26 below.
- Here, q denotes the average of the differences between the pixel values of the first corresponding region and the second corresponding region, and may be calculated by Equation 27 below.
- the prediction value generator 2030 may generate a bidirectional motion compensation value according to Equation 25 described above.
- As described above, the horizontal and vertical gradient values may be calculated by computing the amount of change of the pixel value at fractional pixel positions in the horizontal and vertical directions of the first and second corresponding pixels, or by using a predetermined filter.
- The horizontal gradient value GradX0(i, j) and the vertical gradient value GradY0(i, j) of the first corresponding pixel P0(i, j) 2410 of the first reference picture can be calculated from the amount of change of the pixel value at the fractional pixel positions horizontally adjacent to P0(i, j) 2410 and at the fractional pixel positions vertically adjacent to it.
- That is, the horizontal gradient value GradX0(i, j) is calculated from the amount of change between the fractional pixels P0(i−h, j) 2460 and P0(i+h, j) 2470 separated from P0(i, j) by h (h is a fractional value less than 1) in the horizontal direction, and the vertical gradient value GradY0(i, j) is calculated from the amount of change between the fractional pixels P0(i, j−h) 2480 and P0(i, j+h) 2490 separated by h in the vertical direction.
- The fractional pixels P0(i−h, j) 2460, P0(i+h, j) 2470, P0(i, j−h) 2480, and P0(i, j+h) 2490 can typically be calculated using an interpolation method. The gradient values of the second corresponding pixel of the other second reference picture may also be calculated similarly to Equation 29.
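The fractional-position central difference can be sketched as follows, using simple linear interpolation for the fractional samples; the actual interpolation filter of the codec is not specified in this text, so both the interpolation and the function names are illustrative assumptions (bounds checking is omitted for brevity):

```python
def horizontal_gradient(pic, i, j, h=0.25):
    """GradX0(i, j): central difference between the fractional pixels at
    horizontal positions i-h and i+h (h < 1), with the fractional pixels
    obtained by linear interpolation between integer neighbors."""
    def sample(x, y):
        x0 = int(x)           # integer part (x >= 0 assumed)
        f = x - x0            # fractional part
        return (1 - f) * pic[y][x0] + f * pic[y][x0 + 1]
    return (sample(i + h, j) - sample(i - h, j)) / (2 * h)
```

On a linear intensity ramp the central difference recovers the exact slope, which is a quick sanity check for any gradient implementation.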
- the gradient value at each corresponding pixel may be calculated using a predetermined filter.
- FIG. 25 is a diagram for describing a process of calculating horizontal and vertical gradient values, according to another embodiment.
- FIG. 26 is a table illustrating filter coefficients of a gradient value calculation filter according to another embodiment.
- the gradient value may be determined by applying a predetermined filter to the pixels of the reference picture.
- A predetermined filter is applied to the M pixels 2520 to the left and the M pixels 2510 to the right of the corresponding pixel P0 2500 whose horizontal gradient value is currently being obtained, and the horizontal gradient value of P0 2500 may be calculated from the result.
- the vertical gradient value can also be calculated by applying the filter coefficients shown in FIG. 26 to the surrounding pixels according to the window size 2N and the interpolation position. In this case, 2M of FIG. 26 may be replaced with 2N.
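A filter-based gradient is a dot product of tabulated coefficients with the pixels around the target position. The coefficients below are a simple central-difference example, not the ones tabulated in FIG. 26, and the function name is illustrative:

```python
def filtered_gradient(pixels, center, coeffs):
    """Gradient at `center` as a dot product of an odd-length filter with the
    M neighbors on each side of the target pixel (window length == len(coeffs))."""
    m = len(coeffs) // 2
    window = pixels[center - m : center + m + 1]
    return sum(c * p for c, p in zip(coeffs, window))
```

The same routine computes a vertical gradient when fed a column of pixels instead of a row, mirroring the text's remark that 2M may be replaced with 2N.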
- Pixel-by-pixel motion compensation may be limited under predetermined conditions.
- Pixel-by-pixel motion compensation according to an embodiment may depend on the size of the block: for example, if the size of the block is smaller than a predetermined size, pixel-by-pixel motion compensation may not be performed.
- In addition, the image decoding apparatus 200 may determine whether to perform pixel-wise motion compensation based on a coded block flag (CBF), a syntax element indicating whether to perform motion compensation using derived motion information (e.g., AMVP mode or merge mode), a syntax element indicating whether to perform local illumination compensation, a syntax element indicating whether to perform affine motion compensation, and the like. In this case, the CBF, the information indicating whether to perform motion compensation using derived motion information, the information indicating whether to perform local illumination compensation, and the information indicating whether to perform affine motion compensation serve as information indicating whether pixel-wise motion compensation is limited.
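The restriction logic, combining the block-size condition with the flags listed above (see also claim 13), might look like the sketch below; the minimum-size threshold of 8 and the function name are illustrative assumptions, while the flag semantics follow the text:

```python
def use_pixelwise_compensation(block_w, block_h, cbf, uses_derived_mv,
                               local_illum_comp, affine_mc, min_size=8):
    """Decide whether pixel-wise motion compensation is applied.
    min_size is an assumed stand-in for the 'predetermined size' in the text."""
    if block_w < min_size or block_h < min_size:
        return False  # block smaller than the predetermined size
    if cbf != 0 or uses_derived_mv or local_illum_comp or affine_mc:
        return False  # any of these conditions disables pixel-wise compensation
    return True
```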
- FIG. 27 is a flowchart illustrating an image encoding method, according to an exemplary embodiment.
- The image encoding apparatus 100 performs bidirectional motion prediction to determine a first motion vector and a second motion vector indicating the first corresponding region and the second corresponding region that are most similar to the current block in the first reference picture and the second reference picture, respectively.
- the image encoding apparatus 100 performs block-based bidirectional motion compensation for the current block by using the first motion vector and the second motion vector.
- the image encoding apparatus 100 performs pixel-by-pixel motion compensation for each pixel of the current block by using pixels of the first reference picture and the second reference picture.
- The image encoding apparatus 100 may generate a pixel-by-pixel motion compensation value for each pixel of the current block by using the horizontal and vertical gradient values of the first corresponding pixel of the first reference picture corresponding to each pixel of the current block, the horizontal and vertical gradient values of the second corresponding pixel of the second reference picture corresponding to each pixel of the current block, and the horizontal and vertical displacement vectors determined using the pixels of the first reference picture and the second reference picture.
- In operation S2740, the image encoding apparatus 100 generates a bidirectional motion prediction value for the current block by adding the block-wise bidirectional motion compensation result and the pixel-wise motion compensation result.
- The residual signal, which is the difference between the bidirectional motion prediction value and the original input signal, is then encoded in bitstream form through transformation, quantization, and entropy encoding.
- Since the pixel-wise motion compensation value differs from the general bidirectional motion prediction value, it is preferable to add predetermined index information, indicating whether the pixel-wise motion compensation value is used, to the encoded bitstream.
- the image decoding apparatus 200 receives a bitstream.
- the image decoding apparatus 200 determines whether to perform pixel-by-pixel motion compensation on the current block based on information on whether to limit the pixel-by-pixel motion compensation extracted from the bitstream.
- In operation S2830, when pixel-by-pixel motion compensation is to be performed, information about the first motion vector and the second motion vector, which indicate the first corresponding region and the second corresponding region most similar to the current block in the first reference picture and the second reference picture, is extracted from the bitstream.
- The image decoding apparatus 200 performs block-wise bidirectional motion compensation on the current block using the first motion vector and the second motion vector.
- the image decoding apparatus 200 performs pixel-by-pixel motion compensation for each pixel of the current block by using pixels of the first reference picture and the second reference picture.
- The pixel-by-pixel motion compensator 2020 of the image decoding apparatus 200 may generate a pixel-by-pixel motion compensation value for each pixel of the current block by using the horizontal and vertical gradient values of the first corresponding pixel of the first reference picture corresponding to each pixel of the current block, the horizontal and vertical gradient values of the second corresponding pixel of the second reference picture corresponding to each pixel, and the horizontal and vertical displacement vectors determined using the pixels of the first reference picture and the second reference picture.
- the image decoding apparatus 200 generates a bidirectional motion prediction value for the current block by using the blockwise bidirectional motion compensation result and the pixelwise motion compensation result.
- the bidirectional motion prediction value for the current block is added to the residual value of the current block extracted from the bitstream and decoded to restore the current block.
- the above-described embodiments of the present invention can be written as a program that can be executed in a computer, and can be implemented in a general-purpose digital computer that operates the program using a computer-readable recording medium.
- the computer-readable recording medium may include a storage medium such as a magnetic storage medium (eg, a ROM, a floppy disk, a hard disk, etc.) and an optical reading medium (eg, a CD-ROM, a DVD, etc.).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Discrete Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
Abstract
Description
Parameter index | 2Nx2N intra coding unit (α, β, γ) | NxN intra coding unit (α, β, γ)
---|---|---
0 | (8, 0, 0) | (8, 0, 0)
1 | (4, 2, 2) | (6, 1, 1)
2 | (4, 4, 0) | (6, 2, 0)
3 | (4, 0, 4) | (6, 0, 2)
Claims (15)
- An image decoding method comprising: extracting, from a bitstream, quantized coefficients generated through core transformation, secondary transformation, and quantization; generating inverse-quantized coefficients by performing inverse quantization on the quantized coefficients; generating secondary-inverse-transformed coefficients by performing a secondary inverse transformation, corresponding to the secondary transformation, on low-frequency components of the inverse-quantized coefficients; and performing a core inverse transformation, corresponding to the core transformation, on the secondary-inverse-transformed coefficients.
- The image decoding method of claim 1, further comprising obtaining, from the bitstream, information on whether the secondary inverse transformation is performed, wherein the secondary inverse transformation is performed based on the information on whether the secondary inverse transformation is performed.
- The image decoding method of claim 1, wherein the secondary inverse transformation is performed in units of subgroups within a transform block, and the size of the subgroup is determined based on the size of the transform block or a quantization parameter.
- The image decoding method of claim 3, wherein the type of the secondary inverse transformation performed on the subgroup is determined based on the position of the subgroup.
- An entropy decoding method comprising: obtaining, from a received bitstream, probability information to be used for arithmetic decoding of a current bin by updating, using a first probability model and a second probability model, previous probability information used for arithmetic decoding of a previous bin; obtaining the bin by performing arithmetic decoding based on the obtained probability information; and obtaining a syntax element by inverse-binarizing the obtained bin, wherein the first probability model and the second probability model use a window size indicating the number of bins decoded before the current bin, and the window size of the first probability model is smaller than the window size of the second probability model.
- The entropy decoding method of claim 5, wherein the window size of the first probability model and the window size of the second probability model are obtained from the bitstream.
- The entropy decoding method of claim 5, wherein one of the window size of the first probability model and the window size of the second probability model has a fixed value.
- An image decoding method comprising: extracting prediction mode information of a current block to be decoded from a received bitstream; generating a first prediction block for the current block according to the extracted prediction mode information; extracting, from the bitstream, operation information using each pixel constituting the first prediction block and neighboring pixels of each pixel; generating a second prediction block by changing the pixel value of each pixel constituting the first prediction block through an operation on each pixel and the pixels located above and to the left of each pixel, according to the extracted operation information; extracting and reconstructing, from the bitstream, a residual corresponding to the difference between the current block and the second prediction block; and decoding the current block by adding the residual and the second prediction block, wherein the operation information includes parameter index information, and the parameter index information includes a first weight parameter applied to each pixel constituting the first prediction block, a second weight parameter applied to the pixel located above each pixel, and a third weight parameter applied to the pixel located to the left of each pixel.
- The image decoding method of claim 8, wherein the parameter index information is defined for coding units of sizes 2Nx2N and NxN.
- The image decoding method of claim 8, wherein the parameter index information is included in the prediction mode information.
- The image decoding method of claim 8, wherein the parameter index information is obtained from the bitstream at a coding unit level or a prediction unit level.
- An image decoding method comprising: receiving a bitstream; determining whether to perform pixel-wise motion compensation on a current block based on information on whether pixel-wise motion compensation is limited, extracted from the bitstream; when the pixel-wise motion compensation is performed, extracting, from the bitstream, information about a first motion vector and a second motion vector indicating a first corresponding region and a second corresponding region most similar to the current block in a first reference picture and a second reference picture; performing block-wise bidirectional motion compensation on the current block using the first motion vector and the second motion vector; performing pixel-wise motion compensation on each pixel of the current block using pixels of the first reference picture and the second reference picture; and generating a bidirectional motion prediction value for the current block using the block-wise bidirectional motion compensation result and the pixel-wise motion compensation result.
- The image decoding method of claim 12, wherein the information on whether the pixel-wise motion compensation is limited includes at least one of a coded block flag (CBF), information indicating whether to perform motion compensation using derived motion information, information indicating whether to perform local illumination compensation, and information indicating whether to perform affine motion compensation, and when the CBF is not 0, motion compensation using the derived motion information is performed, the local illumination compensation is performed, or the affine motion compensation is performed, the performing of the pixel-wise motion compensation is not performed.
- The image decoding method of claim 13, wherein the performing of the pixel-wise motion compensation comprises generating a pixel-wise motion compensation value of each pixel of the current block by using horizontal and vertical gradient values of a first corresponding pixel of the first reference picture corresponding to each pixel of the current block, horizontal and vertical gradient values of a second corresponding pixel of the second reference picture corresponding to each pixel of the current block, and a horizontal displacement vector and a vertical displacement vector determined using pixels of the first reference picture and the second reference picture.
- The image decoding method of claim 14, wherein the horizontal displacement vector and the vertical displacement vector are determined as the horizontal and vertical displacement vectors that minimize the difference between a first displacement value, obtained by displacing the first corresponding pixel of the first reference picture within a window region of a predetermined size using the horizontal and vertical displacement vectors and the horizontal and vertical gradient values of the first corresponding pixel, and a second displacement value, obtained by displacing the second corresponding pixel of the second reference picture using the horizontal and vertical displacement vectors and the horizontal and vertical gradient values of the second corresponding pixel.
Priority Applications (15)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020247007844A KR20240034898A (ko) | 2015-10-13 | 2016-10-13 | 영상을 부호화 또는 복호화하는 방법 및 장치 |
CN202110146952.4A CN112929660B (zh) | 2015-10-13 | 2016-10-13 | 用于对图像进行编码或解码的方法和设备 |
CN202110145731.5A CN112929659B (zh) | 2015-10-13 | 2016-10-13 | 用于对图像进行编码或解码的方法和设备 |
KR1020227040175A KR102646890B1 (ko) | 2015-10-13 | 2016-10-13 | 영상을 부호화 또는 복호화하는 방법 및 장치 |
US15/767,307 US10939104B2 (en) | 2015-10-13 | 2016-10-13 | Method and device for encoding or decoding image |
KR1020227005128A KR102469145B1 (ko) | 2015-10-13 | 2016-10-13 | 영상을 부호화 또는 복호화하는 방법 및 장치 |
CN202110171292.5A CN112929661B (zh) | 2015-10-13 | 2016-10-13 | 解码方法和编码方法 |
EP19207504.2A EP3629581A1 (en) | 2015-10-13 | 2016-10-13 | Method and device for encoding or decoding image |
CN201680059343.2A CN108141594B (zh) | 2015-10-13 | 2016-10-13 | 用于对图像进行编码或解码的方法和设备 |
KR1020177037707A KR102365164B1 (ko) | 2015-10-13 | 2016-10-13 | 영상을 부호화 또는 복호화하는 방법 및 장치 |
EP16855742.9A EP3316580A4 (en) | 2015-10-13 | 2016-10-13 | Method and device for encoding or decoding image |
MX2018004250A MX2018004250A (es) | 2015-10-13 | 2016-10-13 | Metodo y dispositivo para codificar o decodificar imagen. |
US17/167,866 US11638006B2 (en) | 2015-10-13 | 2021-02-04 | Method and device for encoding or decoding image |
US17/167,850 US11553182B2 (en) | 2015-10-13 | 2021-02-04 | Method and device for encoding or decoding image |
US18/194,817 US12088808B2 (en) | 2015-10-13 | 2023-04-03 | Method and device for encoding or decoding image |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562240725P | 2015-10-13 | 2015-10-13 | |
US62/240,725 | 2015-10-13 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/767,307 A-371-Of-International US10939104B2 (en) | 2015-10-13 | 2016-10-13 | Method and device for encoding or decoding image |
US17/167,850 Continuation US11553182B2 (en) | 2015-10-13 | 2021-02-04 | Method and device for encoding or decoding image |
US17/167,866 Continuation US11638006B2 (en) | 2015-10-13 | 2021-02-04 | Method and device for encoding or decoding image |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2017065525A2 true WO2017065525A2 (ko) | 2017-04-20 |
WO2017065525A3 WO2017065525A3 (ko) | 2017-06-15 |
Family
ID=58517559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2016/011497 WO2017065525A2 (ko) | 2015-10-13 | 2016-10-13 | Method and device for encoding or decoding image |
Country Status (6)
Country | Link |
---|---|
US (4) | US10939104B2 (ko) |
EP (2) | EP3316580A4 (ko) |
KR (4) | KR20240034898A (ko) |
CN (4) | CN112929661B (ko) |
MX (2) | MX2018004250A (ko) |
WO (1) | WO2017065525A2 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020080901A1 (ko) * | 2018-10-18 | 2020-04-23 | Samsung Electronics Co., Ltd. | Entropy encoding/decoding method and apparatus |
CN114342397A (zh) * | 2019-09-06 | 2022-04-12 | Sony Group Corporation | Image processing apparatus and method |
RU2778099C1 (ru) * | 2019-06-14 | 2022-08-15 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112929661B (zh) | 2015-10-13 | 2023-04-21 | Samsung Electronics Co., Ltd. | Decoding method and encoding method |
US10390021B2 (en) * | 2016-03-18 | 2019-08-20 | Mediatek Inc. | Method and apparatus of video coding |
US10560712B2 (en) | 2016-05-16 | 2020-02-11 | Qualcomm Incorporated | Affine motion prediction for video coding |
US10448010B2 (en) | 2016-10-05 | 2019-10-15 | Qualcomm Incorporated | Motion vector prediction for affine motion models in video coding |
US10750203B2 (en) * | 2016-12-22 | 2020-08-18 | Mediatek Inc. | Method and apparatus of adaptive bi-prediction for video coding |
US11284076B2 (en) * | 2017-03-22 | 2022-03-22 | Electronics And Telecommunications Research Institute | Block form-based prediction method and device |
US10855997B2 (en) * | 2017-04-14 | 2020-12-01 | Mediatek Inc. | Secondary transform kernel size selection |
US11877001B2 (en) | 2017-10-10 | 2024-01-16 | Qualcomm Incorporated | Affine prediction in video coding |
US10652571B2 (en) * | 2018-01-25 | 2020-05-12 | Qualcomm Incorporated | Advanced motion vector prediction speedups for video coding |
CN113315975B (zh) | 2018-03-29 | 2022-12-06 | Huawei Technologies Co., Ltd. | Bidirectional inter-frame prediction method and device |
MX2020011906A (es) | 2018-05-09 | 2021-02-18 | Interdigital Vc Holdings Inc | Motion compensation for video encoding and decoding. |
CN109035164B (zh) * | 2018-07-13 | 2022-03-04 | Beijing Institute of Control Engineering | Fast and robust image stray-light suppression method and system |
CN112673627B (zh) * | 2018-08-09 | 2023-08-04 | LG Electronics Inc. | Affine motion prediction-based image decoding method and apparatus using affine merge candidate list in image coding system |
KR102432406B1 (ko) * | 2018-09-05 | 2022-08-12 | LG Electronics Inc. | Method for encoding/decoding video signal and apparatus therefor |
CN112040247B (zh) * | 2018-09-10 | 2021-09-21 | Huawei Technologies Co., Ltd. | Video decoding method, video decoder, and computer-readable storage medium |
CN109769104B (zh) * | 2018-10-26 | 2021-02-05 | Jiangsu Siyou Information Technology Co., Ltd. | Unmanned aerial vehicle panoramic image transmission method and device |
US11212521B2 (en) * | 2018-11-07 | 2021-12-28 | Avago Technologies International Sales Pte. Limited | Control of memory bandwidth consumption of affine mode in versatile video coding |
KR20230174762A (ko) | 2018-11-16 | 2023-12-28 | Samsung Electronics Co., Ltd. | Method for encoding and decoding image using bidirectional prediction, and apparatus for encoding and decoding image |
CN116744004A (zh) * | 2018-12-19 | 2023-09-12 | LG Electronics Inc. | Encoding device, decoding device, and data transmission device |
WO2020141935A1 (ko) * | 2019-01-03 | 2020-07-09 | LG Electronics Inc. | Method and apparatus for processing video signal based on inter prediction |
CN117793381A (zh) * | 2019-02-01 | 2024-03-29 | LG Electronics Inc. | Decoding device, encoding device, and transmission device |
EP3907998A4 (en) | 2019-02-14 | 2022-03-16 | LG Electronics Inc. | METHOD AND DEVICE FOR INTER-PREDICTION BASED ON A DMVR |
JP7303894B2 (ja) * | 2019-03-08 | 2023-07-05 | Huawei Technologies Co., Ltd. | Encoder, decoder, and corresponding inter prediction methods |
WO2020181471A1 (zh) * | 2019-03-11 | 2020-09-17 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Intra prediction method and device, and computer storage medium |
KR102243503B1 (ko) * | 2019-04-05 | 2021-04-22 | SUNY Korea LLC (State University of New York, Korea) | Fast image recognition method on low-end devices through highly efficient machine learning, and apparatus performing the method |
EP3932061A4 (en) * | 2019-04-23 | 2022-06-01 | Beijing Bytedance Network Technology Co., Ltd. | USING A SECONDARY TRANSFORMATION IN ENCODED VIDEO |
CN112073729B (zh) * | 2019-06-11 | 2024-04-05 | Beijing Samsung Telecommunication Technology Research Co., Ltd. | Model updating method and apparatus, electronic device, and computer-readable storage medium |
JP7275326B2 (ja) | 2019-06-14 | 2023-05-17 | Hyundai Motor Company | Video encoding method, video decoding method, and video decoding apparatus using inter prediction |
WO2020251325A1 (ko) * | 2019-06-14 | 2020-12-17 | Hyundai Motor Company | Method and apparatus for encoding and decoding video using inter prediction |
CN112135141A (zh) | 2019-06-24 | 2020-12-25 | Huawei Technologies Co., Ltd. | Video encoder, video decoder, and corresponding methods |
KR20220050183A (ko) * | 2019-09-21 | 2022-04-22 | LG Electronics Inc. | Transform-based image coding method and apparatus therefor |
US20230007260A1 (en) * | 2019-11-08 | 2023-01-05 | Google Llc | Probability Estimation for Video Coding |
CN112804534B (zh) * | 2019-11-14 | 2022-03-01 | Hangzhou Hikvision Digital Technology Co., Ltd. | Encoding and decoding method, apparatus, and device |
CN112887713B (zh) * | 2019-11-30 | 2022-06-28 | Huawei Technologies Co., Ltd. | Picture compression and decompression method and apparatus |
US11575896B2 (en) * | 2019-12-16 | 2023-02-07 | Panasonic Intellectual Property Corporation Of America | Encoder, decoder, encoding method, and decoding method |
CN111275121B (zh) * | 2020-01-23 | 2023-07-18 | Beijing Kangfuzi Health Technology Co., Ltd. | Medical image processing method, apparatus, and electronic device |
WO2021188876A1 (en) * | 2020-03-20 | 2021-09-23 | Beijing Dajia Internet Information Technology Co., Ltd. | Spatial neighbor based affine motion derivation |
US11930186B2 (en) * | 2020-11-11 | 2024-03-12 | Tencent America LLC | Method and apparatus for video coding |
US20220150518A1 (en) * | 2020-11-11 | 2022-05-12 | Tencent America LLC | Method and apparatus for video coding |
WO2023195643A1 (ko) * | 2022-04-05 | 2023-10-12 | Samsung Electronics Co., Ltd. | Entropy encoding and decoding apparatus and method therefor |
CN116367150B (zh) * | 2023-04-18 | 2023-11-10 | Gansu Haorun Technology Information Co., Ltd. | Data encryption transmission method |
Family Cites Families (47)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6894628B2 (en) * | 2003-07-17 | 2005-05-17 | Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. | Apparatus and methods for entropy-encoding or entropy-decoding using an initialization of context variables |
US8306125B2 (en) * | 2006-06-21 | 2012-11-06 | Digital Video Systems, Inc. | 2-bin parallel decoder for advanced video processing |
JP4660433B2 (ja) | 2006-06-29 | 2011-03-30 | Toshiba Corporation | Encoding circuit, decoding circuit, encoder circuit, decoder circuit, and CABAC processing method |
KR101365445B1 (ko) * | 2007-01-03 | 2014-02-19 | Samsung Electronics Co., Ltd. | Motion compensation method sequentially using global and local motion compensation, decoding method, apparatus, encoder, and decoder |
CN101005287A (zh) | 2007-01-26 | 2007-07-25 | Vimicro Corporation | Method, system, and encoding device for increasing arithmetic coding speed |
JP5278827B2 (ja) * | 2007-09-06 | 2013-09-04 | NEC Corporation | Video encoding device, video decoding device, video encoding method, video decoding method, and video encoding or decoding program |
KR101385957B1 (ko) | 2007-10-04 | 2014-04-17 | Samsung Electronics Co., Ltd. | Method and apparatus for adjusting quantization coefficients in a decoder |
US8542727B2 (en) * | 2007-12-31 | 2013-09-24 | Intel Corporation | Systems and apparatuses for performing CABAC parallel encoding and decoding |
KR101517768B1 (ko) * | 2008-07-02 | 2015-05-06 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding image, and method and apparatus for decoding the image |
US7714754B2 (en) * | 2008-07-14 | 2010-05-11 | Vixs Systems, Inc. | Entropy decoder with pipelined processing and methods for use therewith |
US7932843B2 (en) * | 2008-10-17 | 2011-04-26 | Texas Instruments Incorporated | Parallel CABAC decoding for video decompression |
JP2010237151A (ja) * | 2009-03-31 | 2010-10-21 | Anritsu Corp | APD measuring apparatus |
US9100648B2 (en) * | 2009-06-07 | 2015-08-04 | Lg Electronics Inc. | Method and apparatus for decoding a video signal |
KR101675116B1 (ko) | 2009-08-06 | 2016-11-10 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding image, and method and apparatus for decoding the image |
JPWO2011016250A1 (ja) * | 2009-08-06 | 2013-01-10 | Panasonic Corporation | Encoding method, decoding method, encoding device, and decoding device |
KR20110045949A (ko) | 2009-10-28 | 2011-05-04 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding image using rotational transform |
US9973768B2 (en) * | 2010-03-16 | 2018-05-15 | Texas Instruments Incorporated | CABAC decoder with decoupled arithmetic decoding and inverse binarization |
CN102934444A (zh) | 2010-04-06 | 2013-02-13 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding video, and method and apparatus for decoding video |
JP5520122B2 (ja) * | 2010-04-14 | 2014-06-11 | Canon Inc. | Data conversion apparatus |
EP2614592B1 (en) * | 2010-09-09 | 2018-06-27 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Entropy encoding and decoding scheme |
US8344917B2 (en) * | 2010-09-30 | 2013-01-01 | Sharp Laboratories Of America, Inc. | Methods and systems for context initialization in video coding and decoding |
KR20120052882A (ko) * | 2010-11-15 | 2012-05-24 | Electronics and Telecommunications Research Institute | Parallel entropy encoding/decoding method and apparatus |
CN102231830B (zh) * | 2010-11-23 | 2013-04-17 | Zhejiang University | Arithmetic unit for context-based arithmetic encoding and decoding |
JP5809292B2 (ja) * | 2011-01-14 | 2015-11-10 | GE Video Compression LLC | Entropy encoding and decoding scheme |
US10123053B2 (en) * | 2011-05-23 | 2018-11-06 | Texas Instruments Incorporated | Acceleration of bypass binary symbol processing in video coding |
US9379736B2 (en) * | 2011-06-03 | 2016-06-28 | Qualcomm Incorporated | Context-adaptive coding video data |
CN103797803A (zh) * | 2011-06-28 | 2014-05-14 | Samsung Electronics Co., Ltd. | Method and apparatus for entropy encoding/decoding |
KR101943049B1 (ko) * | 2011-06-30 | 2019-01-29 | SK Telecom Co., Ltd. | Image encoding/decoding method and apparatus |
BR112013033899B1 (pt) * | 2011-07-01 | 2019-08-20 | Samsung Electronics Co., Ltd. | Video decoding method |
CN107835420B (zh) * | 2011-10-18 | 2021-05-14 | KT Corp. | Video signal decoding method |
US9871537B2 (en) * | 2011-10-27 | 2018-01-16 | Qualcomm Incorporated | Mapping states in binary arithmetic coder for video coding |
US9363510B2 (en) * | 2012-03-02 | 2016-06-07 | Qualcomm Incorporated | Scan-based sliding window in context derivation for transform coefficient coding |
US9681133B2 (en) * | 2012-03-29 | 2017-06-13 | Intel Corporation | Two bins per clock CABAC decoding |
DK3416292T3 (da) * | 2012-04-15 | 2020-03-09 | Samsung Electronics Co Ltd | Video decoding apparatus using parameter update for de-binarization of an entropy-coded transform coefficient, and encoding method using the same for binarization |
RS57336B1 (sr) * | 2012-07-02 | 2018-08-31 | Samsung Electronics Co Ltd | Method for entropy decoding of video |
KR20140089486A (ko) | 2013-01-04 | 2014-07-15 | Samsung Electronics Co., Ltd. | Motion compensation method and apparatus for encoding and decoding scalable video |
US20150169794A1 (en) * | 2013-03-14 | 2015-06-18 | Google Inc. | Updating location relevant user behavior statistics from classification errors |
CN103746731B (zh) * | 2014-01-21 | 2017-03-15 | University of Electronic Science and Technology of China | Probability-computation-based multiple-input multiple-output detector and detection method |
US9948934B2 (en) * | 2014-07-02 | 2018-04-17 | Apple Inc. | Estimating rate costs in video encoding operations using entropy encoding statistics |
AU2015343932A1 (en) * | 2014-11-04 | 2017-06-01 | Samsung Electronics Co., Ltd. | Probability updating method for binary arithmetic coding/decoding, and entropy coding/decoding apparatus using same |
US10368072B2 (en) | 2015-05-29 | 2019-07-30 | Qualcomm Incorporated | Advanced arithmetic coder |
CN112929661B (zh) * | 2015-10-13 | 2023-04-21 | Samsung Electronics Co., Ltd. | Decoding method and encoding method |
US10142635B2 (en) * | 2015-12-18 | 2018-11-27 | Blackberry Limited | Adaptive binarizer selection for image and video coding |
US10554988B2 (en) * | 2017-03-22 | 2020-02-04 | Qualcomm Incorporated | Binary arithmetic coding with parameterized probability estimation finite state machines |
US10791341B2 (en) * | 2017-10-10 | 2020-09-29 | Qualcomm Incorporated | Binary arithmetic coding with progressive modification of adaptation parameters |
EP3503557A1 (en) * | 2017-12-22 | 2019-06-26 | Thomson Licensing | Method and apparatus for video encoding and decoding based on context switching |
US20190042956A1 (en) * | 2018-02-09 | 2019-02-07 | Intel Corporation | Automatic configurable sequence similarity inference system |
2016
- 2016-10-13 CN CN202110171292.5A patent/CN112929661B/zh active Active
- 2016-10-13 US US15/767,307 patent/US10939104B2/en active Active
- 2016-10-13 MX MX2018004250A patent/MX2018004250A/es unknown
- 2016-10-13 KR KR1020247007844A patent/KR20240034898A/ko active Search and Examination
- 2016-10-13 KR KR1020227005128A patent/KR102469145B1/ko active IP Right Grant
- 2016-10-13 CN CN201680059343.2A patent/CN108141594B/zh active Active
- 2016-10-13 KR KR1020177037707A patent/KR102365164B1/ko active IP Right Grant
- 2016-10-13 WO PCT/KR2016/011497 patent/WO2017065525A2/ko active Application Filing
- 2016-10-13 KR KR1020227040175A patent/KR102646890B1/ko active IP Right Grant
- 2016-10-13 CN CN202110145731.5A patent/CN112929659B/zh active Active
- 2016-10-13 EP EP16855742.9A patent/EP3316580A4/en not_active Withdrawn
- 2016-10-13 CN CN202110146952.4A patent/CN112929660B/zh active Active
- 2016-10-13 EP EP19207504.2A patent/EP3629581A1/en not_active Withdrawn
2018
- 2018-04-06 MX MX2021013693A patent/MX2021013693A/es unknown
2021
- 2021-02-04 US US17/167,866 patent/US11638006B2/en active Active
- 2021-02-04 US US17/167,850 patent/US11553182B2/en active Active
2023
- 2023-04-03 US US18/194,817 patent/US12088808B2/en active Active
Non-Patent Citations (1)
Title |
---|
None |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020080901A1 (ko) * | 2018-10-18 | 2020-04-23 | Samsung Electronics Co., Ltd. | Entropy encoding/decoding method and apparatus |
RU2778099C1 (ru) * | 2019-06-14 | 2022-08-15 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
RU2778099C9 (ru) * | 2019-06-14 | 2022-09-12 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
RU2801342C2 (ru) * | 2019-06-14 | 2023-08-07 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
RU2802844C2 (ру) * | 2019-06-14 | 2023-09-05 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
RU2806280C2 (ru) * | 2019-06-14 | 2023-10-30 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
RU2806279C2 (ru) * | 2019-06-14 | 2023-10-30 | Hyundai Motor Company | Method and device for encoding and decoding video using inter prediction |
CN114342397A (zh) * | 2019-09-06 | 2022-04-12 | Sony Group Corporation | Image processing apparatus and method |
Also Published As
Publication number | Publication date |
---|---|
MX2018004250A (es) | 2018-05-16 |
KR20220025271A (ko) | 2022-03-03 |
US20210258579A1 (en) | 2021-08-19 |
US20230247201A1 (en) | 2023-08-03 |
EP3629581A1 (en) | 2020-04-01 |
US20210258578A1 (en) | 2021-08-19 |
CN112929659A (zh) | 2021-06-08 |
CN112929661A (zh) | 2021-06-08 |
KR20180055761A (ko) | 2018-05-25 |
CN112929660A (zh) | 2021-06-08 |
US11553182B2 (en) | 2023-01-10 |
MX2021013693A (es) | 2021-12-10 |
US11638006B2 (en) | 2023-04-25 |
EP3316580A4 (en) | 2018-09-26 |
KR102646890B1 (ko) | 2024-03-12 |
KR20220156986A (ko) | 2022-11-28 |
CN108141594A (zh) | 2018-06-08 |
EP3316580A2 (en) | 2018-05-02 |
KR102469145B1 (ko) | 2022-11-21 |
CN112929660B (zh) | 2023-12-26 |
KR102365164B1 (ko) | 2022-02-18 |
US12088808B2 (en) | 2024-09-10 |
CN112929659B (zh) | 2023-12-26 |
US10939104B2 (en) | 2021-03-02 |
KR20240034898A (ko) | 2024-03-14 |
WO2017065525A3 (ko) | 2017-06-15 |
CN108141594B (zh) | 2021-02-26 |
CN112929661B (zh) | 2023-04-21 |
US20180309990A1 (en) | 2018-10-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017065525A2 (ko) | Method and device for encoding or decoding image | |
WO2018070790A1 (ko) | Image encoding/decoding method and apparatus | |
WO2018221817A1 (ko) | Image decoding method and apparatus based on intra prediction in image coding system | |
WO2018236028A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2018038554A1 (ко) | Method and apparatus for encoding/decoding video signal using secondary transform | |
WO2018062788A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2018236031A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2014171713A1 (ko) | Video encoding/decoding method and apparatus using intra prediction | |
WO2018062880A1 (ko) | Image processing method and apparatus therefor | |
WO2011016702A2 (ko) | Method and apparatus for encoding image, and method and apparatus for decoding the image | |
WO2013157825A1 (ko) | Image encoding/decoding method and apparatus | |
WO2019017694A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2017014585A1 (ko) | Method and apparatus for processing video signal using graph-based transform | |
WO2018124333A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2018105759A1 (ko) | Image encoding/decoding method and apparatus therefor | |
WO2016159610A1 (ko) | Video signal processing method and apparatus | |
WO2019221472A1 (ko) | Video signal processing method and apparatus using reference samples | |
WO2019182329A1 (ko) | Image decoding method/apparatus, image encoding method/apparatus, and recording medium storing bitstream | |
WO2016064123A1 (ko) | Video signal processing method and apparatus | |
WO2019022537A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2016122253A1 (ko) | Video signal processing method and apparatus | |
WO2020076142A1 (ko) | Video signal processing method and apparatus using cross-component linear model | |
WO2016200235A1 (ko) | Intra prediction mode-based image processing method and apparatus therefor | |
WO2021015512A1 (ko) | Image encoding/decoding method and apparatus using IBC, and method for transmitting bitstream | |
WO2017065422A1 (ko) | Image processing method and apparatus therefor | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16855742 Country of ref document: EP Kind code of ref document: A2 |
|
ENP | Entry into the national phase |
Ref document number: 20177037707 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2016855742 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2018/004250 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15767307 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |