WO2013109092A1 - Method and apparatus for encoding and decoding an image
Method and apparatus for encoding and decoding an image
- Publication number
- WO2013109092A1 (PCT/KR2013/000417)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- prediction
- value
- current block
- pixel
- Prior art date
Classifications
All codes fall under H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/51—Motion estimation or motion compensation
- H04N19/59—Predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
- H04N19/117—Filters, e.g. for pre-processing or post-processing
- H04N19/124—Quantisation
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/176—Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
- H04N19/182—Adaptive coding characterised by the coding unit, the unit being a pixel
- H04N19/196—Adaptive coding specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
- H04N19/42—Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Predictive coding involving spatial prediction techniques
- H04N19/82—Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
- H04N19/86—Pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- The present invention relates to a method and apparatus for encoding and decoding an image, and more particularly, to intra prediction and inter prediction methods with reduced computational complexity.
- For image compression, an inter prediction technique that predicts pixel values of the current picture from temporally previous and/or subsequent pictures, an intra prediction technique that predicts pixel values of the current picture using pixel information within the current picture, and an entropy encoding technique that assigns short codes to symbols with a high frequency of occurrence and long codes to symbols with a low frequency of occurrence may be used.
- An object of the present invention is to provide an image encoding method and apparatus capable of improving image encoding / decoding efficiency by reducing computational complexity.
- Another technical problem of the present invention is to provide an image decoding method and apparatus capable of improving image encoding / decoding efficiency by reducing computational complexity.
- Another technical problem of the present invention is to provide a method and apparatus for generating a prediction block capable of improving image encoding / decoding efficiency by reducing computational complexity.
- Another technical problem of the present invention is to provide an intra prediction method and apparatus capable of improving image encoding / decoding efficiency by reducing computational complexity.
- Another technical problem of the present invention is to provide an inter prediction method and apparatus capable of improving image encoding / decoding efficiency by reducing computational complexity.
- An image decoding method of the present invention may include reconstructing a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block, generating a prediction block by performing intra prediction on a current block, and reconstructing an image by adding the reconstructed residual block to the prediction block. Generating the prediction block may include generating a final prediction value of a prediction target pixel included in the current block based on a first prediction value of the prediction target pixel and a final correction value calculated by arithmetically right-shifting the two's complement integer representation of an initial correction value of the prediction target pixel by one binary digit.
- The prediction block generating step may include determining whether to correct the intra prediction value according to encoding information of the current block and the position of the prediction target pixel within the current block, and generating the final prediction value of the prediction target pixel based on the determination result.
- the determining whether to correct may include determining whether to correct the intra prediction value in consideration of at least one of the intra prediction mode, the luminance signal information, the color difference signal information, and the block size of the current block.
- The determining whether to correct may include determining to perform correction on pixels on the left boundary of the current block when the intra prediction mode of the current block is the vertical prediction mode.
- The determining whether to correct may include determining to perform correction on pixels on the upper boundary of the current block when the intra prediction mode of the current block is the horizontal prediction mode.
- The final prediction value generating step may include, when it is determined to perform correction on the intra prediction value, obtaining a first prediction value using a reference pixel value adjacent to the current block, determining an initial correction value according to the horizontal or vertical position of the prediction target pixel within the block, calculating a final correction value by arithmetically right-shifting the two's complement integer representation of the initial correction value by one binary digit, and calculating the final prediction value from the first prediction value and the final correction value.
- When the intra prediction mode is the vertical prediction mode, correction is performed on pixels on the left boundary of the current block, the first prediction value is generated using an upper reference pixel value adjacent to the current block, and the initial correction value is determined using the difference between the left reference pixel value corresponding to the vertical position of the prediction target pixel within the block and the pixel value of the upper-left corner pixel of the current block.
- When the intra prediction mode is the horizontal prediction mode, correction is performed on pixels on the upper boundary of the current block, the first prediction value is generated using a left reference pixel value adjacent to the current block, and the initial correction value may be determined using the difference between the upper reference pixel value corresponding to the horizontal position of the prediction target pixel within the block and the pixel value of the upper-left corner pixel of the current block.
- When correction is not performed, in the vertical prediction mode the final prediction value of the prediction target pixel may be generated based on an upper reference pixel value adjacent to the current block, and in the horizontal prediction mode the final prediction value of the prediction target pixel may be generated based on a left reference pixel value adjacent to the current block.
- The image decoding method may further include determining a reference pixel to be used for intra prediction of the prediction target pixel, wherein determining the reference pixel may include selecting an already reconstructed pixel among the pixels adjacent to the current block as the reference pixel and applying smoothing filtering to the pixel value of the reference pixel.
- An apparatus for decoding an image includes a residual block reconstruction unit that reconstructs a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block, a prediction block generation unit that generates a prediction block by performing intra prediction on a current block, and an image reconstruction unit that reconstructs an image by adding the reconstructed residual block to the prediction block. The prediction block generation unit generates the final prediction value of a prediction target pixel included in the current block based on a first prediction value of the prediction target pixel and a final correction value calculated by arithmetically right-shifting the two's complement integer representation of an initial correction value of the prediction target pixel by one binary digit.
- An image encoding method of the present invention may include generating a prediction block by performing intra prediction on an input image, generating a residual block that is the difference between the current input block and the prediction block predicted by the intra prediction, and transforming, quantizing, and entropy-encoding the residual block. The prediction block generating step includes generating the final prediction value of a prediction target pixel included in the current block based on a first prediction value of the prediction target pixel and a final correction value calculated by arithmetically right-shifting the two's complement integer representation of an initial correction value of the prediction target pixel by one binary digit.
- The image encoding apparatus of the present invention for achieving the above object includes a prediction block generation unit that generates a prediction block by performing intra prediction on an input image, and an encoding unit that transforms, quantizes, and entropy-encodes a residual block that is the difference between the current input block and the prediction block predicted by the intra prediction. The prediction block generation unit generates the final prediction value of a prediction target pixel included in the current block based on a first prediction value of the prediction target pixel and a final correction value calculated by arithmetically right-shifting the two's complement integer representation of an initial correction value of the prediction target pixel by one binary digit.
- An image decoding method of the present invention for achieving the above object may include reconstructing a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block, generating a prediction block by performing inter prediction on a current block using motion information of a reference block, and reconstructing an image by adding the reconstructed residual block to the prediction block. Generating the prediction block may include determining whether the reference picture of the current block and the reference picture of the reference block are the same and, when they are not the same, scaling the motion vector of the reference block and using it for prediction of the current block.
- The reference block used in deriving a spatial or temporal motion vector and in deriving a temporal merge candidate may be: i) in spatial motion vector derivation, at least one of the lowermost block adjacent to the left side of the current block, the block adjacent to the bottom of that left lowermost block, the upper-left corner block of the current block, the upper-right corner block of the current block, and the rightmost block adjacent to the top of the current block; and ii) in temporal motion vector derivation and iii) in temporal merge candidate derivation, at least one of the blocks located inside and outside a co-located block spatially corresponding to the current block within a co-located picture of the current picture.
- The prediction block generating step may include obtaining a first value and a second value based on picture order count (POC) differences between pictures, and calculating a scaling factor value using an arithmetic right shift of the two's complement integer representation of the absolute value of the first value.
- In spatial motion vector derivation, the first value is the difference between the POC of the current picture and the POC of the reference picture referenced by the reference block, and the second value is the difference between the POC of the current picture and the POC of the reference picture referenced by the current block.
- In temporal motion vector derivation or temporal merge candidate derivation, the first value is the difference between the POC of the co-located picture and the POC of the reference picture referenced by the co-located block in the co-located picture, and the second value is the difference between the POC of the current picture and the POC of the reference picture referenced by the current block.
- Calculating the scaling factor value may include calculating the scaling factor value by performing an addition operation and an arithmetic right shift operation based on the product of the second value and an inverse value derived from the first value, and adjusting the scaling factor value so that it falls within a specific range.
- The image decoding apparatus of the present invention for achieving the above object includes a residual block reconstruction unit that reconstructs a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block, a prediction block generation unit that generates a prediction block by performing inter prediction on a current block, and an image reconstruction unit that reconstructs an image by adding the reconstructed residual block to the prediction block. The prediction block generation unit may include a determination unit that determines whether the reference picture of the current block and the reference picture of the reference block are the same, and a scaling unit that, when they are not the same, scales the motion vector of the reference block for use in prediction of the current block.
- An image encoding method of the present invention may include generating a prediction block by performing inter prediction on an input image, generating a residual block that is the difference between the current input block and the prediction block predicted by the inter prediction, and transforming, quantizing, and entropy-encoding the residual block. Generating the prediction block may include determining whether the reference picture of the current block and the reference picture of the reference block are the same and, when they are not the same, scaling the motion vector of the reference block for use in prediction of the current block.
- The image encoding apparatus of the present invention includes a prediction block generation unit that generates a prediction block by performing inter prediction on an input image, and an encoding unit that transforms, quantizes, and entropy-encodes a residual block that is the difference between the current input block and the prediction block predicted by the inter prediction. The prediction block generation unit may include a determination unit that determines whether the reference picture of the current block and the reference picture of the reference block are the same, and a scaling unit that, when they are not the same, scales the motion vector of the reference block for use in prediction of the current block.
- According to the image encoding method of the present invention, computational complexity can be reduced and image encoding/decoding efficiency can be improved.
- According to the image decoding method of the present invention, computational complexity can be reduced and image encoding/decoding efficiency can be improved.
- According to the prediction block generating method of the present invention, computational complexity can be reduced and image encoding/decoding efficiency can be improved.
- According to the intra prediction method of the present invention, computational complexity can be reduced and image encoding/decoding efficiency can be improved.
- According to the inter prediction method of the present invention, computational complexity can be reduced and image encoding/decoding efficiency can be improved.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention
- FIG. 3 is a flowchart illustrating a process of calculating a final predicted value of a current block of an image encoding / decoding method according to an embodiment of the present invention
- FIG. 4 is a flowchart schematically illustrating an embodiment of a process of obtaining a reference pixel to be used for intra prediction
- FIG. 5 is a diagram schematically showing an embodiment for replacing a pixel that is unavailable in a process of obtaining a reference pixel for intra prediction
- FIG. 6 is a flowchart schematically illustrating an embodiment of a process of determining whether to correct an intra prediction value according to encoding information of a current block and a position of a prediction target pixel;
- FIG. 7A schematically illustrates an embodiment of using a first prediction value for a pixel in a current block as a final prediction value in a vertical prediction mode
- FIG. 7B schematically illustrates an embodiment of using a first prediction value for a pixel in a current block as a final prediction value in a horizontal prediction mode
- FIG. 8 is a flowchart schematically illustrating an embodiment of calculating a final prediction value by correcting a first prediction value for a pixel in a current block
- FIG. 9A is a diagram schematically illustrating an embodiment of calculating a final prediction value by correcting a first prediction value when using a vertical mode
- FIG. 9B is a diagram schematically illustrating an embodiment of calculating a final prediction value by performing correction on a first prediction value when using a horizontal mode
- FIG. 10 is a flowchart schematically illustrating a scaling process of an image encoding / decoding method according to another embodiment of the present invention.
- FIG. 11A is a diagram illustrating a difference in Picture Order Count (POC) between a current picture and a reference picture of a spatial reference block and a difference in POC between the current picture and a reference picture of a current block;
- FIG. 11B is a diagram illustrating a POC difference between a reference picture of a corresponding position block and the corresponding position picture, and a POC difference between the current picture and a reference picture of the current block;
- FIG. 12 is a flowchart schematically illustrating an embodiment of a process of calculating a scaling factor value for a motion vector based on POC difference values
- FIG. 13 is a block diagram schematically illustrating a configuration for calculating a final scaling factor value based on the tb value and an inverse value of the td value.
- first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
- the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- Each component shown in the embodiments of the present invention is shown independently to represent different characteristic functions, and this does not mean that each component is implemented as separate hardware or as a single software unit.
- That is, the components are listed separately for convenience of description; at least two components may be combined into a single component, or one component may be divided into a plurality of components that each perform part of its function.
- Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
- Some components may not be essential components that perform essential functions of the present invention, but merely optional components for improving performance.
- The present invention can be implemented with only the components essential for realizing its essence, excluding the components used merely for improving performance, and a structure including only these essential components, excluding the optional components used for improving performance, is also included in the scope of the present invention.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
- The image encoding apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference image buffer 190.
- the image may be used as the same meaning as a picture described later.
- the image encoding apparatus 100 may encode an input image in an intra mode or an inter mode and output a bitstream.
- Here, intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction.
- In the intra mode, the switch 115 may be switched to intra, and in the inter mode, the switch 115 may be switched to inter.
- the image encoding apparatus 100 may generate a prediction block for an input block of an input image and then encode a difference between the input block and the prediction block.
- the intra predictor 120 may generate a prediction block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
- the motion predictor 111 may obtain a motion vector by searching for a region that best matches an input block in the reference image stored in the reference image buffer 190 during the motion prediction process.
- the motion compensator 112 may generate a prediction block by performing motion compensation using the motion vector.
- the motion vector is a two-dimensional vector used for inter prediction, and may indicate an offset between the current encoding / decoding target image and the reference image.
- the subtractor 125 may generate a residual block by the difference between the input block and the generated prediction block.
- the transform unit 130 may output a transform coefficient by performing a transform on the residual block.
- the quantization unit 140 may output the quantized coefficient by quantizing the input transform coefficient according to the quantization parameter.
- the entropy encoder 150 may output a bit stream by performing entropy encoding based on values calculated by the quantizer 140 or encoding parameter values calculated in the encoding process.
- the entropy encoder 150 may use an encoding method such as exponential golomb, context-adaptive variable length coding (CAVLC), or context-adaptive binary arithmetic coding (CABAC) for entropy encoding.
- Since the image encoding apparatus according to the embodiment of FIG. 1 performs inter prediction encoding, that is, inter-picture prediction encoding, the currently encoded image needs to be decoded and stored so that it can be used as a reference image. Accordingly, the quantized coefficients are inverse-quantized by the inverse quantizer 160 and inverse-transformed by the inverse transformer 170. The inverse-quantized and inverse-transformed coefficients are added to the prediction block by the adder 175, and a reconstructed block is generated.
- The reconstructed block passes through the filter unit 180, which may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or reconstructed picture.
- the filter unit 180 may be referred to as an adaptive in-loop filter.
- the deblocking filter can remove block distortion generated at the boundary between blocks.
- SAO can add an appropriate offset to the pixel value to compensate for coding errors.
- the ALF may perform filtering based on a value obtained by comparing the reconstructed image with the original image.
- the reconstructed block that has passed through the filter unit 180 may be stored in the reference image buffer 190.
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- the image decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, and an adder ( 255, a filter unit 260, and a reference image buffer 270.
- The image decoding apparatus 200 may receive a bitstream output from the encoder, perform decoding in an intra mode or an inter mode, and output a reconstructed image.
- In the intra mode, the switch may be switched to intra, and in the inter mode, the switch may be switched to inter.
- The image decoding apparatus 200 may obtain a reconstructed residual block from the received bitstream, generate a prediction block, and then add the reconstructed residual block and the prediction block to generate a reconstructed block.
- the entropy decoder 210 may entropy decode the input bitstream according to a probability distribution to generate symbols including symbols in the form of quantized coefficients.
- the entropy decoding method is similar to the entropy coding method described above.
- When entropy decoding is applied, a small number of bits is allocated to a symbol with a high probability of occurrence and a large number of bits to a symbol with a low probability of occurrence, so that the overall size of the bit string for the symbols can be reduced. Therefore, the compression performance of image decoding can be improved through entropy decoding.
- the quantized coefficients are inversely quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230, and as a result of the inverse quantization / inverse transformation of the quantized coefficients, a reconstructed residual block may be generated.
- the intra predictor 240 may generate a prediction block by performing spatial prediction using pixel values of blocks already encoded / decoded around the current block.
- the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 270.
- the reconstructed residual block and the prediction block may be added through the adder 255, and the added block may pass through the filter unit 260.
- the filter unit 260 may apply at least one or more of the deblocking filter, SAO, and ALF to the reconstructed block or the reconstructed picture.
- The filter unit 260 may output the reconstructed image.
- the reconstructed picture may be stored in the reference picture buffer 270 to be used for inter prediction.
- a unit means a unit of image encoding and decoding.
- a coding or decoding unit refers to a divided unit when an image is divided and encoded or decoded.
- In encoding and decoding an image, the coding or decoding unit may be called a block, a coding unit (CU), a prediction unit (PU), or a transform unit (TU).
- a unit may also be referred to as a block.
- One unit may be further divided into smaller sub-units.
- the current block may refer to a block to be subjected to intra prediction or motion compensation, and when performing intra prediction, the current block may mean any one of a prediction unit, a prediction block, a transform unit, and a transform block. In the case of performing motion compensation, the current block may mean any one of a prediction unit and a prediction block.
- FIG. 3 is a flowchart illustrating a process of calculating a final predicted value of a current block of an image encoding / decoding method according to an embodiment of the present invention.
- an image encoding / decoding apparatus calculates a final prediction value based on reference pixels to generate a prediction block for a current block.
- the image encoding / decoding apparatus acquires pixel values of reference pixels to be used for intra prediction.
- A pixel that has already been reconstructed among the pixels adjacent to the current block may be used as a reference pixel. If a pixel at an adjacent position is unavailable, its value may be replaced with the value of an available pixel and used as the corresponding reference pixel value.
- encoding information of the current block and intra-block position information of the prediction target pixel are obtained (S320).
- Next, it is determined whether correction of the first prediction value obtained from the reference pixel values is required, based on the encoding information and the position of the prediction target pixel within the block (S330).
- This determination may differ depending on at least one of the intra (intra-picture) prediction mode information, luminance signal information, chrominance signal information, and block size.
- If it is determined that correction is not required, the image encoding/decoding apparatus may use the first prediction value directly as the final prediction value of the current block (S340). On the other hand, if it is determined that correction is required, the apparatus may obtain the first prediction value and a correction value and then calculate the final prediction value by adding them (S350). Since the computation associated with the correction value can be relatively costly, an arithmetic right shift operation may be used to reduce this cost. The arithmetic right shift operation (">>") does not change the sign of the operand and, unlike ordinary integer division ("/"), whose result is rounded toward zero, its result is rounded toward negative infinity.
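- The following minimal C sketch (an illustration, not code from the patent) shows this difference in rounding: integer division truncates toward zero, while the arithmetic right shift rounds toward negative infinity and preserves the sign of the operand. Note that ">>" on negative signed values is implementation-defined in C, but behaves as an arithmetic shift on common compilers and targets.

```c
#include <stdio.h>

int main(void) {
    int d = -3;                        /* example (negative) correction value */
    printf("d / 2  = %d\n", d / 2);    /* -1 : "/" rounds toward zero */
    printf("d >> 1 = %d\n", d >> 1);   /* -2 : ">>" rounds toward negative infinity */
    printf("3 / 2  = %d, 3 >> 1 = %d\n", 3 / 2, 3 >> 1);  /* both 1 for positive values */
    return 0;
}
```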
- FIG. 4 is a flowchart schematically illustrating an embodiment of a process of obtaining a reference pixel to be used for intra prediction.
- the encoding / decoding apparatus determines whether pixel values of neighboring blocks are available based on neighboring block pixel information of the current block (S410).
- For example, the current block may be a block to which constrained intra prediction (CIP) is applied, and the neighboring block may be a block encoded by inter prediction.
- If a neighboring pixel is unavailable, the corresponding reference pixel value may be replaced with an available pixel value of another neighboring block or with a specific default value (S420).
- FIG. 5 is a diagram schematically illustrating an embodiment for replacing a pixel that is unavailable in a process of obtaining a reference pixel for intra prediction.
- neighboring blocks of the current block 500 may be used to obtain a reference pixel value.
- The neighboring blocks that can be used to obtain reference pixel values are the blocks adjacent to the current block 500, the block adjacent below the lower-left neighboring block (offset by the height of the current block 500), and the block adjacent to the right of the upper-right neighboring block (offset by the width of the current block 500). In this case, only the pixels located around the current block 500 among the pixels of these neighboring blocks may be used as reference pixels.
- In FIG. 5, the hatched blocks among the neighboring blocks of the current block 500 are usable blocks, and the blocks that are not hatched are unusable blocks.
- the image encoding / decoding apparatus may determine whether the pixel at the position adjacent to the current block 500 is available or not, and store the determination result. For example, in FIG. 5, the pixels belonging to the hatched blocks are usable pixels, and the pixels belonging to the blocks not hatched are determined to be unavailable pixels. In this case, when at least one unavailable pixel exists, the unavailable pixel value may be replaced with an available pixel value.
- For example, starting from the pixel 520 at position A in FIG. 5 and moving toward the pixel 522 at position B, an unavailable pixel may be replaced with the value of the immediately preceding available pixel.
- If the starting-point pixel is itself unavailable, it may be replaced with the pixel value of the first available pixel 512 encountered while moving from position A toward position B.
- For example, among the neighboring blocks 510, 530, and 532, the neighboring block 510 is available, and the neighboring blocks 530 and 532 are unavailable.
- Since the pixel 520 at the starting point is an unavailable pixel, it is replaced with the pixel value of the first available pixel 512 encountered while moving from position A toward position B; the pixels of the neighboring block 530 may then be replaced with the pixel value of the pixel 512, and the pixels of the neighboring block 532 may be replaced with the pixel value of the pixel 514, which is the immediately preceding available pixel. In this way, all unavailable pixels up to position B can be replaced with available pixel values, as in the sketch below.
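- A minimal C sketch of this substitution scan, assuming the reference samples are stored in a one-dimensional array ordered from position A to position B; the names ref, avail and DEFAULT_VALUE are illustrative assumptions, not identifiers from the patent:

```c
#include <stdbool.h>
#include <stddef.h>

#define DEFAULT_VALUE 128  /* assumed fallback, e.g. 1 << (bit_depth - 1) */

/* Replace every unavailable reference sample with the nearest previously
 * seen available sample while scanning from position A toward position B. */
static void substitute_reference_pixels(int *ref, const bool *avail, size_t n)
{
    if (n == 0)
        return;
    /* If the starting sample is unavailable, take the first available sample
     * found while scanning forward (or a default value if none exists). */
    if (!avail[0]) {
        size_t i = 1;
        while (i < n && !avail[i])
            i++;
        ref[0] = (i < n) ? ref[i] : DEFAULT_VALUE;
    }
    /* Every remaining unavailable sample copies its already fixed-up
     * left neighbour, i.e. the closest preceding available value. */
    for (size_t i = 1; i < n; i++) {
        if (!avail[i])
            ref[i] = ref[i - 1];
    }
}
```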
- If the neighboring pixels are available, the neighboring block pixel values can be used as the reference pixel values as they are (S422).
- the image encoding / decoding apparatus may perform smoothing filtering on the obtained reference pixel value (S430).
- smoothing filtering may be performed differently according to the size of the target block or the intra prediction mode.
- FIG. 6 is a flowchart schematically illustrating an embodiment of a process of determining whether to correct an intra prediction value according to encoding information of a current block and a position of a prediction target pixel.
- the image encoding / decoding apparatus may determine whether to perform correction based on the current block encoding information and the position of the block of the prediction target pixel.
- the encoding information used to determine whether to perform the correction may include at least one of intra prediction mode information, luminance signal information, color difference signal information, and block size.
- First, the image encoding/decoding apparatus determines whether the intra prediction mode of the current block is the vertical prediction mode (S610). In the vertical prediction mode, it is determined whether the prediction target pixel is a pixel located at the left boundary of the current block (S612). If the pixel is located at the left boundary, it is determined that correction is to be performed (S632). If the intra prediction mode is the vertical prediction mode and the pixel is not located at the left boundary, correction may not be performed (S630). The determination of the vertical prediction mode and the horizontal prediction mode may be made with reference to the following description.
- In determining the horizontal prediction mode, it may be strictly required that the prediction direction be exactly horizontal, or a more relaxed condition may be used that only requires the prediction direction to be close to horizontal. For example, under the relaxed condition, a prediction mode whose prediction direction lies within 30 degrees of the horizontal direction may be treated as the horizontal prediction mode. The angle used as the criterion is not limited to 30 degrees and may be set to another angle.
- Similarly, a relaxed condition may be used to determine whether the prediction direction is close to the vertical direction. The following embodiments assume that both the horizontal and vertical directions are judged strictly, but the present invention is not limited thereto; cases in which the horizontal and/or vertical directions are determined using the relaxed conditions described above also fall within the scope of the invention.
- If the intra prediction mode of the current block is not the vertical prediction mode, it is determined whether the mode is the horizontal prediction mode (S620).
- The determination steps for the vertical prediction mode and the horizontal prediction mode (S610 and S620) are independent of each other, and their order may be reversed.
- In the horizontal prediction mode, it is determined whether the prediction target pixel is a pixel located at the upper boundary of the current block (S622). If the pixel is located at the upper boundary, it is determined that correction is to be performed (S632). If the intra prediction mode is the horizontal prediction mode and the pixel is not located at the upper boundary, correction may not be performed (S630). If the intra prediction mode is neither the vertical nor the horizontal prediction mode, correction may not be performed on the prediction value for the current block (S630).
- The correction of the prediction value for the current block, which considers at least one of the intra prediction mode and the block size as described above, may be performed only for the luma signal and may not be performed for the chroma signal.
- For example, prediction value correction may be performed on blocks smaller than 32×32, that is, on blocks of size 4×4, 8×8, and 16×16.
- As another example, when the intra prediction mode is the DC mode, correction may be performed on the pixels at both the upper and left boundaries of the current block; the decision logic is sketched below.
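- A hedged C sketch of this decision logic, assuming illustrative mode constants and taking (x, y) as the position of the prediction target pixel inside the current block (x = 0 is the left boundary, y = 0 is the upper boundary):

```c
#include <stdbool.h>

enum intra_mode { MODE_VERTICAL, MODE_HORIZONTAL, MODE_DC, MODE_OTHER };

/* Decide whether the intra prediction value of the pixel at (x, y) should be
 * corrected: only for luma blocks smaller than 32x32, and only for the
 * boundary pixels named in the text for each mode. */
static bool correct_prediction(enum intra_mode mode, int x, int y,
                               bool is_luma, int block_size)
{
    if (!is_luma || block_size >= 32)
        return false;
    switch (mode) {
    case MODE_VERTICAL:   return x == 0;            /* left boundary pixels  */
    case MODE_HORIZONTAL: return y == 0;            /* upper boundary pixels */
    case MODE_DC:         return x == 0 || y == 0;  /* both boundaries       */
    default:              return false;
    }
}
```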
- FIG. 7A schematically illustrates an embodiment of using a first prediction value for a pixel in a current block as a final prediction value in a vertical prediction mode
- FIG. 7B schematically illustrates an embodiment of using a first prediction value for a pixel in a current block as a final prediction value in a horizontal prediction mode.
- If, in step S330, the image encoding/decoding apparatus determines not to perform correction of the prediction value according to at least one of the intra prediction mode, luminance signal information, chrominance signal information, and block size, the first prediction value is used as the final prediction value for the current block 710 without correction.
- the first prediction value pred1 [x, y] may be obtained based on the reference pixel value.
- BlockWidth means the width of the current block
- BlockHeight means the height of the current block.
- In the vertical prediction mode, the first prediction value pred1[x, y] may be determined from the upper reference pixel values 722, 724, 726, and 728 adjacent to the current block.
- That is, for the pixels in the leftmost column of the current block 710 the pixel 722 at position [0, -1] is used, for the pixels in the second column the pixel 724 at position [1, -1], for the third column the pixel 726 at position [2, -1], and for the fourth column the pixel 728 at position [3, -1]; the first prediction value pred1[x, y] is determined using these pixel values.
- In this case, the image encoding/decoding apparatus may use the first prediction value pred1[x, y] as the final prediction value predS[x, y], where predS[x, y] denotes the final prediction value.
- In the horizontal prediction mode, the first prediction value pred1[x, y] may be determined from the left reference pixel values 732, 734, 736, and 738 adjacent to the current block.
- That is, for the pixels in the topmost row of the current block 710 the pixel 732 at position [-1, 0] is used, for the pixels in the second row the pixel 734 at position [-1, 1], for the third row the pixel 736 at position [-1, 2], and for the fourth row the pixel 738 at position [-1, 3]; the first prediction value pred1[x, y] is determined using these pixel values. As in the vertical prediction case, the first prediction value pred1[x, y] may be used as the final prediction value predS[x, y]. A sketch of this first-prediction step follows below.
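- A minimal C sketch of this first-prediction step for a 4×4 block, assuming arrays ref_top (the samples at [x, -1]) and ref_left (the samples at [-1, y]); the array and function names are illustrative assumptions:

```c
#define BLOCK_SIZE 4

/* Vertical mode: every pixel in column x copies the reference sample above it. */
static void predict_vertical(int pred1[BLOCK_SIZE][BLOCK_SIZE],
                             const int ref_top[BLOCK_SIZE])
{
    for (int y = 0; y < BLOCK_SIZE; y++)
        for (int x = 0; x < BLOCK_SIZE; x++)
            pred1[y][x] = ref_top[x];
}

/* Horizontal mode: every pixel in row y copies the reference sample to its left. */
static void predict_horizontal(int pred1[BLOCK_SIZE][BLOCK_SIZE],
                               const int ref_left[BLOCK_SIZE])
{
    for (int y = 0; y < BLOCK_SIZE; y++)
        for (int x = 0; x < BLOCK_SIZE; x++)
            pred1[y][x] = ref_left[y];
}
```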
- FIG. 8 is a flowchart schematically illustrating an embodiment of calculating a final prediction value by correcting a first prediction value for a pixel in a current block.
- On the other hand, if it is determined in step S330 to perform correction on the prediction value according to at least one of the intra prediction mode, luminance signal information, chrominance signal information, and block size, the first prediction value pred1[x, y] for the prediction target pixel is obtained using the reference pixel values, as in FIGS. 7A and 7B.
- the initial correction value d [x, y] with respect to the first prediction value pred1 [x, y] of the prediction target pixel is determined (S820).
- the initial correction value d [x, y] may be determined according to the horizontal or vertical position in the block of the prediction target pixel. That is, in the case of vertical prediction, it may be determined according to the vertical position in the block of the prediction target pixel, and in the case of horizontal prediction, it may be determined according to the horizontal position in the block of the prediction target pixel.
- Alternatively, an equation with the direction of the difference reversed (Equation 4') may be used instead of Equation 4; a hedged sketch of this initial-correction step is given below.
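- A hedged C sketch of the initial correction value described above: for vertical prediction it is the difference between the left reference sample on the same row and the upper-left corner sample, and for horizontal prediction the difference between the upper reference sample in the same column and the upper-left corner sample. The sign convention (Equation 4 versus Equation 4') is an assumption, since the equations themselves are not reproduced in this text:

```c
/* d[x][y] for the vertical prediction mode depends only on the row y. */
static int initial_correction_vertical(const int ref_left[], int ref_topleft, int y)
{
    return ref_left[y] - ref_topleft;
}

/* d[x][y] for the horizontal prediction mode depends only on the column x. */
static int initial_correction_horizontal(const int ref_top[], int ref_topleft, int x)
{
    return ref_top[x] - ref_topleft;
}
```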
- the final correction value delta [x, y] is calculated based on the initial correction value d [x, y] (S830).
- In this case, computational efficiency may be improved by calculating the final correction value delta[x, y] through an arithmetic right shift, which has relatively low computational complexity, rather than through a division or multiplication operation with high computational complexity. That is, the final correction value delta[x, y] is calculated by arithmetically right-shifting the two's complement integer representation of the initial correction value d[x, y] by M binary digits.
- The most significant bit (MSB) of the arithmetically right-shifted final correction value delta[x, y] is the same as the MSB of the initial correction value d[x, y], and the final correction value delta[x, y] is rounded toward negative infinity.
- the preferable value of binary digit M is 1 or 2.
- predS[x, y] = Clip1Y(pred1[x, y] + delta[x, y])
- Clip1Y(x) = Clip3(0, (1 << BitDepthY) - 1, x)
- BitDepthY represents the bit depth of the luminance signal.
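- A minimal C sketch of this first embodiment with M = 1, combining the arithmetic right shift of the initial correction value with the Clip1Y clipping defined above (function names are illustrative):

```c
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

/* predS = Clip1Y(pred1 + (d >> 1)): the initial correction value d is halved
 * with an arithmetic right shift (M = 1) and the sum is clipped to the valid
 * luma sample range. */
static int final_prediction(int pred1, int d, int bit_depth_y)
{
    int delta = d >> 1;  /* arithmetic right shift by one binary digit */
    return clip3(0, (1 << bit_depth_y) - 1, pred1 + delta);
}
```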
- the final correction value delta [x, y] may be calculated through an arithmetic right shift operation.
- the correction value may be calculated such that the correction value is rounded to an integer close to zero using the following equation.
- delta[x, y] = (d[x, y] + (d[x, y] < 0 ? 2^x : 0)) >> (x + 1) (for the vertical prediction mode)
- delta[x, y] = (d[x, y] + (d[x, y] < 0 ? 2^y : 0)) >> (y + 1) (for the horizontal prediction mode)
- the correction value may be calculated such that the correction value is rounded to an integer far from zero using the following equation.
- delta[x, y] = (d[x, y] + (d[x, y] < 0 ? 1 + 2^x : 1)) >> (x + 1) (for the vertical prediction mode)
- delta[x, y] = (d[x, y] + (d[x, y] < 0 ? 1 + 2^y : 1)) >> (y + 1) (for the horizontal prediction mode)
- the correction value can be calculated such that the correction value is rounded to an integer close to negative infinity using the following equation.
- Equation 9 may be applied only to the left boundary of the current block in the vertical prediction mode and to the upper boundary of the current block in the horizontal prediction mode, as in the first embodiment.
- As yet another embodiment, after generating the initial correction value d[x, y], the final correction value may be calculated using a sign operation and an absolute value operation on the initial correction value.
- the final correction value may be calculated by multiplying the sign of the initial correction value by the value of performing an arithmetic right shift operation on the absolute value of the initial correction value.
- the final correction value may be calculated such that the final predicted value is rounded to an integer close to zero.
- delta[x, y] = Sign(d[x, y]) * ((Abs(d[x, y]) + 2^x) >> (x + 1)) (for the vertical prediction mode)
- delta[x, y] = Sign(d[x, y]) * ((Abs(d[x, y]) + 2^y) >> (y + 1)) (for the horizontal prediction mode)
- Alternatively, the final correction value of the fifth embodiment, based on the sign operation and the absolute value operation, may be calculated by performing an addition operation on the absolute value of the initial correction value, performing an arithmetic right shift operation on the result, and then multiplying by the sign of the initial correction value. In this case, the final correction value may be calculated such that the resulting final prediction value is rounded to an integer away from zero.
- delta[x, y] = Sign(d[x, y]) * ((Abs(d[x, y]) + 2^x) >> (x + 1)) (for the vertical prediction mode)
- delta[x, y] = Sign(d[x, y]) * ((Abs(d[x, y]) + 2^y) >> (y + 1)) (for the horizontal prediction mode)
- Based on the final correction value calculated through any of the first to fifth embodiments above, the final prediction value predS[x, y] can be calculated by adding the first prediction value pred1[x, y] and the final correction value delta[x, y].
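- A hedged C sketch of the fifth embodiment for the vertical prediction mode at column x, using the sign and absolute-value formulation above:

```c
#include <stdlib.h>

static int sign_of(int v) { return (v > 0) - (v < 0); }

/* delta = Sign(d) * ((Abs(d) + 2^x) >> (x + 1)) */
static int final_correction_sign_abs(int d, int x)
{
    return sign_of(d) * ((abs(d) + (1 << x)) >> (x + 1));
}
```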
- FIG. 9A is a diagram schematically illustrating an embodiment of calculating a final prediction value by correcting a first prediction value when using a vertical mode.
- FIG. 9B is a diagram schematically illustrating an embodiment of calculating a final prediction value by correcting a first prediction value when using a horizontal mode.
- FIG. 10 is a flowchart schematically illustrating a scaling process of an image encoding / decoding method according to another embodiment of the present invention.
- an image encoding / decoding apparatus may perform scaling to derive motion vector information of a prediction block when performing inter prediction or motion compensation of a current block.
- First, the image encoding/decoding apparatus determines whether the reference picture of the current block and the reference picture of the reference block are the same (S1010). In doing so, the apparatus may compare not only the reference pictures themselves but also the reference picture indices indicating the reference pictures in the reference picture list. It then decides, according to the result of this determination, whether to scale the motion vector of the reference block (S1020). If the reference pictures are the same, scaling of the motion vector of the reference block may not be performed; if they are not the same, scaling of the motion vector of the reference block is required.
- The scaled motion vector of the reference block may be used as the basis for the motion vector of the current block and may be used for inter prediction of the current block.
- Inter prediction methods that use this scaling process include advanced motion vector prediction (AMVP) and merge mode.
- Specifically, the scaling may be applied to the temporal merge candidate derivation process in merge mode, and to the temporal motion vector derivation process and the spatial motion vector derivation process in AMVP.
- FIG. 11A is a diagram illustrating a POC difference between a current picture and a reference picture of a spatial reference block and a POC difference between the current picture and a reference picture of the current block.
- A reference block 1110 used for deriving a spatial motion vector candidate from among the neighboring blocks of the current block 1100 may be at least one of: the lowermost block adjacent to the left side of the current block 1100, the block adjacent to the bottom of that left lowermost block, the upper-left corner block of the current block, the upper-right corner block of the current block, and the rightmost block among the blocks adjacent to the top of the current block.
- In this case, in order to use the motion vector of the reference block 1110 as it is, the reference picture 1140 of the reference block 1110 should be the same as the reference picture 1130 of the current block 1100.
- Otherwise, the motion vector of the reference block 1110 may be scaled and used for prediction of the current block 1100. That is, it is determined whether the tb value, indicating the picture order count (POC) difference between the current picture 1120 and the reference picture 1130 of the current block, and the td value, indicating the POC difference between the current picture 1120 and the reference picture 1140 of the reference block 1110, are the same; if they are the same, the scaling process is not performed, and if they are not the same, the scaling process may be performed.
- FIG. 11B is a diagram illustrating a POC difference between a reference picture of a corresponding position block and a corresponding position picture, and a POC difference between a current picture and a reference picture of the current block.
- The image encoding/decoding apparatus may perform prediction of the current block 1100 based on the motion vector of a co-located block 1150 located at the position corresponding to the current block 1100 within an already reconstructed co-located picture 1160.
- That is, a block located inside or outside the co-located block may be used as the reference block for temporal motion vector derivation or temporal merge candidate derivation.
- For example, the reference block may be determined as the block at the lower-right corner of the co-located block, or as the lower-right block among the four square blocks defined around the center of the co-located block.
- In this case, it is determined whether the td value, indicating the POC difference between the co-located picture 1160 and the reference picture 1170 referenced by the co-located block 1150, and the tb value, indicating the POC difference between the current picture 1120 and the reference picture 1130 referenced by the current block 1100, are the same. If the two values are the same, the scaling process is not performed; if they are not the same, it may be determined that the scaling process is to be performed.
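- A minimal sketch of this decision, covering both the reference picture comparison of steps S1010/S1020 and the POC distance comparison of FIGS. 11A and 11B, might look as follows; the structure and field names are assumptions made only for this illustration.

```c
/* Hypothetical picture/motion descriptors assumed only for this sketch. */
typedef struct { int poc; } Picture;

typedef struct {
    const Picture *ref_pic;  /* reference picture the motion vector points to */
    int mv_x, mv_y;          /* motion vector components                      */
} BlockMotion;

/* Returns 1 if the reference block's motion vector must be scaled before being
 * used for the current block, and 0 if it can be reused as-is.
 * origin_pic is the picture the reference block belongs to: the current picture
 * for a spatial neighbor, or the co-located picture for a temporal reference.  */
static int needs_scaling(const BlockMotion *cur_block, const BlockMotion *ref_block,
                         const Picture *cur_pic, const Picture *origin_pic)
{
    if (cur_block->ref_pic == ref_block->ref_pic)
        return 0;                                        /* same reference picture */

    int tb = cur_pic->poc    - cur_block->ref_pic->poc;  /* current-block POC distance   */
    int td = origin_pic->poc - ref_block->ref_pic->poc;  /* reference-block POC distance */
    return tb != td;                                     /* equal distances: no scaling  */
}
```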
- FIG. 12 is a flowchart schematically illustrating an embodiment of a process of calculating a scaling factor value for a motion vector based on POC difference values between pictures.
- the image encoding / decoding apparatus obtains td and tb values representing POC differences between pictures for calculating scaling factor values (S1210).
- the first value may mean a td value and the second value may mean a tb value.
- In the case of spatial motion vector derivation, the td value represents the difference between the POC of the current picture and the POC of the reference picture referenced by the spatially adjacent reference block, and the tb value represents the difference between the POC of the current picture and the POC of the reference picture referenced by the current block.
- In this case, the reference picture of the current block and the reference picture of the reference block may have different prediction directions, and thus the signs of the td value and the tb value may differ from each other.
- the td value or tb value may be adjusted to fall within a range between -128 and 127. At this time, if the td value or the tb value is less than -128, the td value or the tb value may be adjusted to -128, and if the td value or the tb value is greater than 127, the td value or the tb value may be adjusted to 127. If the td value or tb value is in the range between -128 and 127, the td value or tb value is not adjusted.
- td = Clip3(-128, 127, PicOrderCnt(currPic) - RefPicOrder(currPic, refIdxZ, ListZ))
- tb = Clip3(-128, 127, PicOrderCnt(currPic) - RefPicOrder(currPic, refIdxLX, LX))
- currPic may mean the current picture.
- X may have a value of 0 or 1.
- refIdxLX and LX may represent refIdxL0 and L0, which may mean variables related to L0 temporal motion information.
- refIdxLX may indicate an LX reference picture index indicating a reference picture in the LX reference picture list to which reference pictures are assigned.
- refIdxLX may indicate the first reference picture in the LX reference picture list
- When the refIdxLX value is -1, refIdxLX may indicate that no reference picture in the reference picture list is indicated.
- Z indicates the position of the reference block used for spatial motion vector derivation, and may be at least one of: the lowermost block adjacent to the left side of the current block, the block adjacent to the bottom of that left lowermost block, the upper-left corner block of the current block, the upper-right corner block of the current block, and the rightmost block among the blocks adjacent to the top of the current block.
- In the case of temporal motion vector derivation or temporal merge candidate derivation, the td value represents the difference between the POC of the co-located picture and the POC of the reference picture referenced by the co-located block, and the tb value represents the difference between the POC of the current picture and the POC of the reference picture referenced by the current block.
- the td value or the tb value may be adjusted to be included in a range between -128 and 127.
- td = Clip3(-128, 127, PicOrderCnt(colPic) - RefPicOrder(currPic, refIdxCol, ListCol))
- tb = Clip3(-128, 127, PicOrderCnt(currPic) - RefPicOrder(currPic, refIdxLX, LX))
- colPic may mean a corresponding position picture.
- refIdxCol and ListCol may refer to a reference picture index and a reference picture list of a corresponding position block, respectively.
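- The td/tb derivation of step S1210, including the clipping to the range [-128, 127], can be sketched as follows for both the spatial and the temporal cases. The function and parameter names are illustrative assumptions, and the POC values are passed in directly instead of through PicOrderCnt()/RefPicOrder().

```c
/* Clip x into the inclusive range [lo, hi] (the Clip3 operation used above). */
static int Clip3(int lo, int hi, int x) { return x < lo ? lo : (x > hi ? hi : x); }

/* Spatial motion vector derivation (step S1210):
 *   td = POC(current picture) - POC(reference picture of neighboring block Z)
 *   tb = POC(current picture) - POC(reference picture of the current block)    */
static void poc_distances_spatial(int poc_curr, int poc_ref_of_neighbour,
                                  int poc_ref_of_current, int *td, int *tb)
{
    *td = Clip3(-128, 127, poc_curr - poc_ref_of_neighbour);
    *tb = Clip3(-128, 127, poc_curr - poc_ref_of_current);
}

/* Temporal motion vector / temporal merge candidate derivation:
 *   td = POC(co-located picture) - POC(reference picture of the co-located block)
 *   tb = POC(current picture)    - POC(reference picture of the current block)    */
static void poc_distances_temporal(int poc_col_pic, int poc_ref_of_col_block,
                                   int poc_curr, int poc_ref_of_current,
                                   int *td, int *tb)
{
    *td = Clip3(-128, 127, poc_col_pic - poc_ref_of_col_block);
    *tb = Clip3(-128, 127, poc_curr - poc_ref_of_current);
}
```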
- The image encoding/decoding apparatus may calculate an offset value by performing an arithmetic right shift by one binary digit on the two's complement integer representation of the absolute value of the td value (S1220). That is, the offset value is a value proportional to the absolute value of td, and it may be obtained with an arithmetic right shift, which has relatively low computational complexity, rather than with a higher-complexity operation.
- Abs () represents an absolute value function
- the output value of the function becomes the absolute value of the input value
- the image encoding / decoding apparatus calculates an inverse value of the td value based on the offset value (S1230).
- the scaling factor value is calculated based on the inverse value tx of the tb value and the td value (S1240).
- FIG. 13 is a block diagram schematically illustrating a configuration of calculating a final scaling factor value based on an inverse value of a tb value and a td value.
- The configuration 1300 for calculating the final scaling factor value ScaleFactor may include a multiplier 1310, an addition operation unit 1320, an arithmetic shift unit 1330, and a factor value adjuster 1340.
- First, the first and second value acquirer 1302 obtains the td value and the tb value through the method described in step S1210. Then, the offset value calculator 1306 calculates the offset value based on the td value through the method described in step S1220, and the inverse value calculator 1308 calculates the inverse value (tx) of the td value based on the offset value through the method described in step S1230.
- The multiplier 1310 receives the tb value and the inverse value tx of the td value calculated by the inverse value calculator 1308 and performs a multiplication.
- The addition operation unit 1320 may perform an addition operation based on the product of the tb value and the inverse value tx of the td value. In this case, an operation of adding 32 may be performed.
- The arithmetic shift unit 1330 performs an arithmetic right shift by six binary digits on the two's complement integer representation of the result of the addition operation. The operation up to this point can be expressed as ScaleFactor = (tb * tx + 32) >> 6.
- the scaling factor value ScaleFactor is adjusted to be included in a range between -4096 and 4095.
- Here, adjusting the scaling factor value (ScaleFactor) to be included in a specific range [A, B] means clipping the value to A if it is smaller than A and to B if it is larger than B.
- the image encoding / decoding apparatus may calculate the scaled motion vector value scaledMV.
- The scaled motion vector value (scaledMV) may be calculated by multiplying the sign of the product of the scaling factor value (ScaleFactor) and the corresponding motion vector (which may mean a motion vector associated with at least one of spatial motion vector derivation, temporal motion vector derivation, and temporal merge candidate derivation) by the value obtained by applying an addition operation and an arithmetic right shift operation to the absolute value of that product.
- scaledMV = Sign(ScaleFactor * mv) * ((Abs(ScaleFactor * mv) + 127) >> 8)
- Here, Sign() outputs the sign information of a given value (for example, Sign(-1) outputs -), and mv represents the motion vector value before scaling.
- a scaling process may be performed on the x component and the y component, which are respective component values of the motion vector.
- the image encoding / decoding apparatus may generate the prediction block of the current block by using the scaled motion vector value as described above.
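- Putting steps S1220 through S1240 and the scaledMV expression together, one component of the motion vector could be scaled as in the sketch below. The inverse-value formula (16384 + offset) / td is not stated explicitly in the text above; it is the form used in HEVC-style motion vector scaling and is included here only as an assumption.

```c
#include <stdlib.h>

static int Clip3(int lo, int hi, int x) { return x < lo ? lo : (x > hi ? hi : x); }
static int Sign(int v) { return (v > 0) - (v < 0); }

/* Scale one motion vector component mv using the POC distances tb and td.
 * td is nonzero whenever scaling is actually performed.                   */
static int scale_mv_component(int mv, int tb, int td)
{
    /* S1220: offset proportional to |td|, obtained with one arithmetic right shift. */
    int offset = abs(td) >> 1;

    /* S1230: inverse value of td. The exact formula is not spelled out above;
     * the HEVC-style form (16384 + offset) / td is assumed here for illustration. */
    int tx = (16384 + offset) / td;

    /* S1240: multiply, add 32, shift right by 6, then clip to [-4096, 4095]. */
    int scale = Clip3(-4096, 4095, (tb * tx + 32) >> 6);

    /* Scaled motion vector: sign of the product times its rounded, shifted magnitude. */
    int prod = scale * mv;
    return Sign(prod) * ((abs(prod) + 127) >> 8);
}
```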
- the scaling factor value calculation method can also be used for the scaling factor value calculation process in implicit weighted prediction.
- the image encoding / decoding apparatus acquires td and tb values representing POC differences between pictures for calculating scaling factor values.
- In this case, the td value represents the difference between the POC of the reference picture referenced by the current picture in reference picture list 1 and the POC of the reference picture referenced by the current picture in reference picture list 0, and the tb value represents the difference between the POC of the current picture and the POC of the reference picture referenced by the current picture in reference picture list 0.
- Also, the td value or the tb value may be adjusted to be included in the range between -128 and 127. If the td value or the tb value is less than -128, it may be adjusted to -128, and if the td value or the tb value is greater than 127, it may be adjusted to 127. If the td value or the tb value is already within the range between -128 and 127, it is not adjusted.
- td = Clip3(-128, 127, RefPicOrder(currPic, refIdxL1, L1) - RefPicOrder(currPic, refIdxL0, L0))
- tb = Clip3(-128, 127, PicOrderCnt(currPic) - RefPicOrder(currPic, refIdxL0, L0))
- the image encoding apparatus may calculate an offset value by performing arithmetic right shift by one binary digit on the two's complement integer representation of the absolute value of the td value.
- the scaling factor value ScaleFactor may be adjusted to be included in a range between -1024 and 1023.
- Alternatively, the scaling factor value ScaleFactor may be adjusted to fall within the range between -4096 and 4095 instead of the range between -1024 and 1023.
- Using the scaling factor value as a weighting factor, the weight of the reference picture in reference picture list 0 may be determined as 64 - (ScaleFactor >> 2), and the weight of the reference picture in reference picture list 1 may be determined as ScaleFactor >> 2.
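- Under the relations just described, the implicit weights could be derived as in the following sketch; the inverse-value formula and the equal-weight fallback for td == 0 are assumptions carried over from HEVC-style scaling rather than statements from the text above.

```c
#include <stdlib.h>

static int Clip3(int lo, int hi, int x) { return x < lo ? lo : (x > hi ? hi : x); }

/* Derive the implicit bi-prediction weights: w0 for the list-0 reference picture,
 * w1 for the list-1 reference picture.                                            */
static void implicit_weights(int poc_curr, int poc_ref_l0, int poc_ref_l1,
                             int *w0, int *w1)
{
    int td = Clip3(-128, 127, poc_ref_l1 - poc_ref_l0);
    int tb = Clip3(-128, 127, poc_curr - poc_ref_l0);

    if (td == 0) {            /* both references have the same POC: assumed fallback */
        *w0 = *w1 = 32;       /* equal weights, summing to 64                         */
        return;
    }

    int offset = abs(td) >> 1;
    int tx     = (16384 + offset) / td;                /* HEVC-style inverse, assumed */
    int scale  = Clip3(-1024, 1023, (tb * tx + 32) >> 6);

    *w1 = scale >> 2;         /* weight of the list-1 reference picture */
    *w0 = 64 - (scale >> 2);  /* weight of the list-0 reference picture */
}
```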
- Although the methods above are described based on flowcharts as a series of steps or blocks, the present invention is not limited to the order of the steps, and some steps may occur in a different order from, or simultaneously with, other steps as described above. Also, one of ordinary skill in the art will understand that the steps shown in the flowcharts are not exclusive, that other steps may be included, or that one or more steps in the flowcharts may be deleted without affecting the scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Claims (20)
- A method of decoding an image, the method comprising: reconstructing a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block; generating a prediction block by performing intra prediction on a current block; and reconstructing the image by adding the reconstructed residual block to the prediction block, wherein the generating of the prediction block comprises generating a final predicted value of a prediction target pixel included in the current block based on a first predicted value of the prediction target pixel and a final correction value calculated by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an initial correction value of the prediction target pixel.
- The method of claim 1, wherein the generating of the prediction block comprises: determining whether to correct an intra prediction value according to encoding information of the current block and a position of the prediction target pixel within the current block; and generating the final predicted value of the prediction target pixel based on a result of the determination.
- The method of claim 2, wherein the determining of whether to perform correction comprises determining whether to correct the intra prediction value in consideration of at least one of an intra prediction mode of the current block, luminance signal information, chrominance signal information, and a block size.
- The method of claim 2, wherein the determining of whether to perform correction comprises determining to perform correction on pixels at a left boundary of the current block when the intra prediction mode of the current block is a vertical prediction mode.
- The method of claim 2, wherein the determining of whether to perform correction comprises determining to perform correction on pixels at an upper boundary of the current block when the intra prediction mode of the current block is a horizontal prediction mode.
- The method of claim 2, wherein the generating of the final predicted value comprises, when it is determined that correction is to be performed on the intra prediction value: obtaining a first predicted value by using a reference pixel value adjacent to the current block; determining an initial correction value according to a horizontal or vertical position of the prediction target pixel within the block; calculating a final correction value by performing an arithmetic right shift by one binary digit on a two's complement integer representation of the initial correction value; and calculating a final predicted value based on the first predicted value and the final correction value.
- The method of claim 6, wherein, when the intra prediction mode is the vertical prediction mode, correction is performed on pixels at the left boundary of the current block, the first predicted value is generated by using an upper reference pixel value adjacent to the current block, and the initial correction value is determined by using a difference between a left reference pixel value corresponding to the vertical position of the prediction target pixel within the block and a pixel value of the upper-left corner pixel of the current block; and wherein, when the intra prediction mode is the horizontal prediction mode, correction is performed on pixels at the upper boundary of the current block, the first predicted value is generated by using a left reference pixel value adjacent to the current block, and the initial correction value is determined by using a difference between an upper reference pixel value corresponding to the horizontal position of the prediction target pixel within the block and the pixel value of the upper-left corner pixel of the current block.
- The method of claim 2, wherein the generating of the final predicted value comprises, when it is determined that correction is not to be performed on the intra prediction value, generating the final predicted value of the prediction target pixel based on an upper reference pixel value adjacent to the current block in the vertical prediction mode, and generating the final predicted value of the prediction target pixel based on an upper reference pixel value adjacent to the current block in the horizontal prediction mode.
- The method of claim 2, further comprising determining a reference pixel to be used for the intra prediction of the prediction target pixel, wherein the determining of the reference pixel comprises: determining the reference pixel by using already-reconstructed pixels among pixels at positions adjacent to the current block; and applying smoothing filtering to a pixel value of the reference pixel.
- An apparatus for decoding an image, the apparatus comprising: a residual block reconstruction unit configured to reconstruct a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block; a prediction block generation unit configured to generate a prediction block by performing intra prediction on a current block; and an image reconstruction unit configured to reconstruct the image by adding the residual block to the prediction block, wherein the prediction block generation unit generates a final predicted value of a prediction target pixel included in the reconstructed current block based on a first predicted value of the prediction target pixel and a final correction value calculated by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an initial correction value of the prediction target pixel.
- A method of encoding an image, the method comprising: generating a prediction block by performing intra prediction on an input image; and transforming, quantizing, and entropy-encoding a residual block that is a difference between the prediction block predicted by the intra prediction and a current block, wherein the generating of the prediction block comprises generating a final predicted value of a prediction target pixel included in the current block based on a first predicted value of the prediction target pixel and a final correction value calculated by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an initial correction value of the prediction target pixel.
- An apparatus for encoding an image, the apparatus comprising: a prediction block generation unit configured to generate a prediction block by performing intra prediction on an input image; and an encoding unit configured to transform, quantize, and entropy-encode a residual block that is a difference between the prediction block predicted by the intra prediction and a current block, wherein the prediction block generation unit generates a final predicted value of a prediction target pixel included in the current block based on a first predicted value of the prediction target pixel and a final correction value calculated by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an initial correction value of the prediction target pixel.
- A method of decoding an image, the method comprising: reconstructing a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block; generating a prediction block by performing inter prediction on a current block; and reconstructing the image by adding the reconstructed residual block to the prediction block, wherein the generating of the prediction block comprises: determining whether a reference picture of the current block and a reference picture of a reference block are the same; and, when they are not the same as a result of the determination, scaling a motion vector of the reference block and using the scaled motion vector for prediction of the current block.
- The method of claim 13, wherein the reference block used in spatial or temporal motion vector derivation and in temporal merge candidate derivation comprises: i) in spatial motion vector derivation, at least one of a lowermost block adjacent to a left side of the current block, a block adjacent to a bottom of the left lowermost block, an upper-left corner block of the current block, an upper-right corner block of the current block, and a rightmost block among blocks adjacent to a top of the current block; and, ii) in temporal motion vector derivation and iii) in temporal merge candidate derivation, at least one of blocks located inside and outside a co-located block spatially corresponding to the current block in a co-located picture of the current picture.
- The method of claim 13, wherein the generating of the prediction block comprises: obtaining a first value and a second value based on picture order count (POC) difference values between the pictures; calculating an offset value by performing an arithmetic right shift by one binary digit on a two's complement integer representation of an absolute value of the first value, and calculating an inverse-proportional value of the first value; and calculating a scaling factor value based on the second value and the inverse-proportional value of the first value.
- The method of claim 13, wherein, i) in spatial motion vector derivation, the first value is a difference between the POC of the current picture and the POC of the reference picture referenced by the reference block, and the second value is a difference between the POC of the current picture and the POC of the reference picture referenced by the current block; and, ii) in temporal motion vector derivation or iii) temporal merge candidate derivation, the first value is a difference between the POC of a co-located picture and the POC of the reference picture referenced by the co-located block corresponding to the current block in the co-located picture, and the second value is a difference between the POC of the current picture and the POC of the reference picture referenced by the current block.
- The method of claim 15, wherein the calculating of the scaling factor value comprises: calculating the scaling factor value by performing an addition operation and an arithmetic right shift operation based on a product of the second value and the inverse-proportional value of the first value; and adjusting the scaling factor value to be included in a specific range.
- An apparatus for decoding an image, the apparatus comprising: a residual block reconstruction unit configured to reconstruct a residual block by inverse-quantizing and inverse-transforming an entropy-decoded residual block; a prediction block generation unit configured to generate a prediction block by performing inter prediction on a current block; and an image reconstruction unit configured to reconstruct the image by adding the reconstructed residual block to the prediction block, wherein the prediction block generation unit comprises: a determination unit configured to determine whether a reference picture of the current block and a reference picture of a reference block are the same; and a scaling unit configured to, when they are not the same as a result of the determination, scale a motion vector of the reference block and use the scaled motion vector for prediction of the current block.
- A method of encoding an image, the method comprising: generating a prediction block by performing inter prediction on an input image; and transforming, quantizing, and entropy-encoding a residual block that is a difference between a current input block and the prediction block predicted by the inter prediction, wherein the generating of the prediction block comprises: determining whether a reference picture of the current block and a reference picture of a reference block are the same; and, when they are not the same as a result of the determination, scaling a motion vector of the reference block and using the scaled motion vector for prediction of the current block.
- An apparatus for encoding an image, the apparatus comprising: a prediction block generation unit configured to generate a prediction block by performing inter prediction on an input image; and an encoding unit configured to transform, quantize, and entropy-encode a residual block that is a difference between a current input block and the prediction block predicted by the inter prediction, wherein the prediction block generation unit comprises: a determination unit configured to determine whether a reference picture of the current block and a reference picture of a reference block are the same; and a scaling unit configured to, when they are not the same as a result of the determination, scale a motion vector of the reference block and use the scaled motion vector for prediction of the current block.
Priority Applications (27)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911106293.0A CN110769251B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN201911106289.4A CN110855984B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN201911107028.4A CN110650336B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
EP19151111.2A EP3499895A1 (en) | 2012-01-18 | 2013-01-18 | Device for encoding and decoding video |
EP13738011.9A EP2806635A4 (en) | 2012-01-18 | 2013-01-18 | METHOD AND DEVICE FOR ENCODING AND DECODING AN IMAGE |
CN201911107047.7A CN110708542B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN201911106301.1A CN110830797B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN201380009877.0A CN104137549B9 (zh) | 2012-01-18 | 2013-01-18 | 对图像进行编码和解码的方法及设备 |
JP2014553257A JP6101709B2 (ja) | 2012-01-18 | 2013-01-18 | 映像復号化装置 |
CN201911107027.XA CN110868588B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和计算机可读记录介质 |
CN201911106307.9A CN110830798B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
CN201911106288.XA CN110769250B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
EP23218011.7A EP4366307A3 (en) | 2012-01-18 | 2013-01-18 | Method and device for encoding and decoding image |
CN201911107030.1A CN110809157B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
US14/372,278 US9807412B2 (en) | 2012-01-18 | 2013-01-18 | Method and device for encoding and decoding image |
CN201911107049.6A CN110830799B (zh) | 2012-01-18 | 2013-01-18 | 视频解码装置、视频编码装置和传输比特流的方法 |
US14/446,447 US9374595B2 (en) | 2012-01-18 | 2014-07-30 | Method and device for generating a prediction block to encode and decode an image |
US15/151,163 US9635380B2 (en) | 2012-01-18 | 2016-05-10 | Method and device for generating a prediction block to encode and decode an image |
US15/151,142 US9635379B2 (en) | 2012-01-18 | 2016-05-10 | Method and device for generating a prediction block to encode and decode an image |
US15/151,097 US9621912B2 (en) | 2012-01-18 | 2016-05-10 | Method and device for generating a prediction block to encode and decode an image |
US15/151,125 US9621913B2 (en) | 2012-01-18 | 2016-05-10 | Method and device for generating a prediction block to encode and decode an image |
US15/717,379 US10397598B2 (en) | 2012-01-18 | 2017-09-27 | Method and device for encoding and decoding image |
US16/430,348 US10834418B2 (en) | 2012-01-18 | 2019-06-03 | Method and device for encoding and decoding image |
US17/028,678 US11228778B2 (en) | 2012-01-18 | 2020-09-22 | Method and device for encoding and decoding image |
US17/544,709 US11706438B2 (en) | 2012-01-18 | 2021-12-07 | Method and device for encoding and decoding image |
US18/200,085 US12003752B2 (en) | 2012-01-18 | 2023-05-22 | Method and device for encoding and decoding image |
US18/651,227 US20240283963A1 (en) | 2012-01-18 | 2024-04-30 | Method and device for encoding and decoding image |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2012-0005950 | 2012-01-18 | ||
KR20120005950 | 2012-01-18 | ||
KR10-2013-0005653 | 2013-01-18 | ||
KR1020130005653A KR101366649B1 (ko) | 2012-01-18 | 2013-01-18 | 영상 부호화 및 복호화 방법 및 장치 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/372,278 A-371-Of-International US9807412B2 (en) | 2012-01-18 | 2013-01-18 | Method and device for encoding and decoding image |
US14/446,447 Division US9374595B2 (en) | 2012-01-18 | 2014-07-30 | Method and device for generating a prediction block to encode and decode an image |
US15/717,379 Continuation US10397598B2 (en) | 2012-01-18 | 2017-09-27 | Method and device for encoding and decoding image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013109092A1 true WO2013109092A1 (ko) | 2013-07-25 |
Family
ID=48995380
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/000417 WO2013109092A1 (ko) | 2012-01-18 | 2013-01-18 | 영상 부호화 및 복호화 방법 및 장치 |
Country Status (6)
Country | Link |
---|---|
US (12) | US9807412B2 (ko) |
EP (4) | EP2806635A4 (ko) |
JP (7) | JP6101709B2 (ko) |
KR (16) | KR101366649B1 (ko) |
CN (12) | CN110830799B (ko) |
WO (1) | WO2013109092A1 (ko) |
Families Citing this family (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120140181A (ko) * | 2011-06-20 | 2012-12-28 | 한국전자통신연구원 | 화면내 예측 블록 경계 필터링을 이용한 부호화/복호화 방법 및 그 장치 |
CN110830799B (zh) | 2012-01-18 | 2023-09-15 | 韩国电子通信研究院 | 视频解码装置、视频编码装置和传输比特流的方法 |
KR102312335B1 (ko) | 2013-12-20 | 2021-10-13 | 삼성전자주식회사 | 휘도 보상을 이용한 인터 레이어 비디오 부호화 방법 및 그 장치, 비디오 복호화 방법 및 그 장치 |
CN118400532A (zh) * | 2014-01-03 | 2024-07-26 | 庆熙大学校产学协力团 | 存储由编码方法生成的比特流的计算机可读记录介质 |
WO2015133320A1 (ja) * | 2014-03-05 | 2015-09-11 | ソニー株式会社 | 画像符号化装置および方法 |
CN112584140B (zh) * | 2014-11-27 | 2024-08-13 | 株式会社Kt | 对视频信号进行解码或编码的方法 |
KR102349788B1 (ko) * | 2015-01-13 | 2022-01-11 | 인텔렉추얼디스커버리 주식회사 | 영상의 부호화/복호화 방법 및 장치 |
WO2016195460A1 (ko) * | 2015-06-05 | 2016-12-08 | 한양대학교 산학협력단 | 화면 내 예측에 대한 부호화/복호화 방법 및 장치 |
CN108141604B (zh) * | 2015-06-05 | 2022-07-05 | 杜比实验室特许公司 | 图像编码和解码方法和图像解码设备 |
WO2017052118A1 (ko) * | 2015-09-23 | 2017-03-30 | 엘지전자 주식회사 | 영상 코딩 시스템에서 인트라 예측 방법 및 장치 |
KR20170058869A (ko) * | 2015-11-18 | 2017-05-29 | 한국전자통신연구원 | 인루프 필터를 이용한 비디오 복호화 방법 및 이를 위한 장치 |
TWI610292B (zh) * | 2015-11-19 | 2018-01-01 | 瑞鼎科技股份有限公司 | 驅動電路及其運作方法 |
US20180359494A1 (en) * | 2015-12-31 | 2018-12-13 | Mediatek Inc. | Method and apparatus of prediction binary tree structure for video and image coding |
KR102345475B1 (ko) * | 2016-01-05 | 2022-01-03 | 한국전자통신연구원 | 잔차 신호에 대한 예측 방법 및 장치 |
CN115460408A (zh) * | 2016-01-27 | 2022-12-09 | 韩国电子通信研究院 | 通过使用预测对视频进行编码和解码的方法和装置 |
CN113810713B (zh) * | 2016-04-29 | 2024-05-10 | 世宗大学校产学协力团 | 用于对图像信号进行编码和解码的方法和装置 |
EP4030754A1 (en) * | 2016-05-02 | 2022-07-20 | Industry-University Cooperation Foundation Hanyang University | Image encoding/decoding method and computer-readable medium |
KR20180040319A (ko) * | 2016-10-12 | 2018-04-20 | 가온미디어 주식회사 | 영상 처리 방법, 그를 이용한 영상 복호화 및 부호화 방법 |
CN109417642B (zh) * | 2016-07-01 | 2021-06-22 | Sk电信有限公司 | 用于高分辨率影像流的影像比特流生成方法和设备 |
CN116708775A (zh) * | 2016-07-18 | 2023-09-05 | 韩国电子通信研究院 | 图像编码/解码方法和装置以及存储比特流的记录介质 |
US10771791B2 (en) * | 2016-08-08 | 2020-09-08 | Mediatek Inc. | View-independent decoding for omnidirectional video |
CN109937571B (zh) * | 2016-09-05 | 2022-01-18 | Lg电子株式会社 | 图像编码/解码方法及其装置 |
CN116915985A (zh) | 2016-10-04 | 2023-10-20 | Lx 半导体科技有限公司 | 图像编码/解码方法和发送方法 |
CN117395408A (zh) * | 2016-10-04 | 2024-01-12 | 韩国电子通信研究院 | 对图像进行编码/解码的方法和设备以及记录介质 |
KR20230070062A (ko) * | 2016-10-04 | 2023-05-19 | 주식회사 비원영상기술연구소 | 영상 데이터 부호화/복호화 방법 및 장치 |
CN116647680A (zh) | 2016-10-28 | 2023-08-25 | 韩国电子通信研究院 | 视频编码/解码方法和设备以及存储比特流的记录介质 |
CN116781899A (zh) * | 2016-10-28 | 2023-09-19 | 韩国电子通信研究院 | 视频编码/解码方法和设备以及存储比特流的记录介质 |
WO2018097692A2 (ko) * | 2016-11-28 | 2018-05-31 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
JP7044778B2 (ja) * | 2016-11-28 | 2022-03-30 | エレクトロニクス アンド テレコミュニケーションズ リサーチ インスチチュート | 画像復号方法、画像符号化方法、及び非一時的なコンピュータ可読記録媒体 |
KR20180061042A (ko) * | 2016-11-29 | 2018-06-07 | 한국전자통신연구원 | 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 기록 매체 |
FR3066873A1 (fr) * | 2017-05-29 | 2018-11-30 | Orange | Procedes et dispositifs de codage et de decodage d'un flux de donnees representatif d'au moins une image |
WO2019009491A1 (ko) * | 2017-07-06 | 2019-01-10 | 삼성전자 주식회사 | 영상을 부호화 또는 복호화하는 방법 및 장치 |
CN117221523A (zh) * | 2017-08-29 | 2023-12-12 | 株式会社Kt | 视频解码方法、视频编码方法及装置 |
WO2019139393A1 (ko) | 2018-01-15 | 2019-07-18 | 김기백 | 색차 성분에 관한 화면내 예측 부호화/복호화 방법 및 장치 |
WO2019147067A1 (ko) | 2018-01-26 | 2019-08-01 | 한국전자통신연구원 | 시간적 움직임 정보를 이용하는 영상 부호화 및 영상 복호화를 위한 방법 및 장치 |
CN111919448A (zh) | 2018-01-26 | 2020-11-10 | 韩国电子通信研究院 | 用于使用时间运动信息的图像编码和图像解码的方法和设备 |
KR102570087B1 (ko) | 2018-03-27 | 2023-08-25 | (주)휴맥스 | 모션 보상을 이용하는 비디오 신호 처리 방법 및 장치 |
CN111971966B (zh) | 2018-03-30 | 2024-10-29 | 英迪股份有限公司 | 图像编码/解码方法和设备以及存储比特流的记录介质 |
KR102516233B1 (ko) * | 2018-04-02 | 2023-03-30 | 엘지전자 주식회사 | 움직임 벡터에 기반한 영상 코딩 방법 및 그 장치 |
CN116668678A (zh) | 2018-05-10 | 2023-08-29 | Lg电子株式会社 | 图像编解码方法、计算机可读存储介质和数据发送方法 |
WO2019234606A1 (en) | 2018-06-05 | 2019-12-12 | Beijing Bytedance Network Technology Co., Ltd. | Interaction between ibc and atmvp |
CN110636298B (zh) | 2018-06-21 | 2022-09-13 | 北京字节跳动网络技术有限公司 | 对于Merge仿射模式和非Merge仿射模式的统一约束 |
WO2019244118A1 (en) | 2018-06-21 | 2019-12-26 | Beijing Bytedance Network Technology Co., Ltd. | Component-dependent sub-block dividing |
PT3815377T (pt) * | 2018-07-16 | 2023-03-30 | Huawei Tech Co Ltd | Codificador de vídeo, decodificador de vídeo e métodos de codificação e decodificação correspondentes |
KR20200010113A (ko) | 2018-07-18 | 2020-01-30 | 한국전자통신연구원 | 지역 조명 보상을 통한 효과적인 비디오 부호화/복호화 방법 및 장치 |
CN117768651A (zh) | 2018-09-24 | 2024-03-26 | 北京字节跳动网络技术有限公司 | 处理视频数据的方法、装置、介质、以及比特流存储方法 |
WO2020094150A1 (en) | 2018-11-10 | 2020-05-14 | Beijing Bytedance Network Technology Co., Ltd. | Rounding in current picture referencing |
CN114885158B (zh) * | 2018-12-21 | 2023-10-20 | 华为技术有限公司 | 位置相关预测组合的模式相关和大小相关块级限制的方法和装置 |
CN118612456A (zh) * | 2019-02-14 | 2024-09-06 | Lg 电子株式会社 | 基于dmvr的帧间预测方法和设备 |
US20220337842A1 (en) * | 2019-09-10 | 2022-10-20 | Lg Electronics Inc. | Image encoding/decoding method and device for performing bdof, and method for transmitting bitstream |
CN112135127B (zh) * | 2019-11-05 | 2021-09-21 | 杭州海康威视数字技术股份有限公司 | 一种编解码方法、装置、设备及机器可读存储介质 |
KR20220105169A (ko) * | 2019-12-06 | 2022-07-26 | 엘지전자 주식회사 | 동일 위치 픽처에 관한 정보를 포함하는 픽처 헤더에 기반한 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 |
CN116724553A (zh) * | 2020-12-29 | 2023-09-08 | 创峰科技 | 用于vvc高比特深度编码的扩展精度加权预测的方法和装置 |
CN113691810B (zh) * | 2021-07-26 | 2022-10-04 | 浙江大华技术股份有限公司 | 帧内帧间联合预测方法、编解码方法及相关设备、存储介质 |
CN113822276B (zh) * | 2021-09-30 | 2024-06-14 | 中国平安人寿保险股份有限公司 | 基于神经网络的图片矫正方法、装置、设备及介质 |
CN114501029B (zh) * | 2022-01-12 | 2023-06-06 | 深圳市洲明科技股份有限公司 | 图像编码、图像解码方法、装置、计算机设备和存储介质 |
WO2024205296A1 (ko) * | 2023-03-30 | 2024-10-03 | 현대자동차주식회사 | 비 지역적 샘플을 이용하여 인루프필터링을 수행하는 비디오 부호화/복호화 방법 및 장치 |
Family Cites Families (56)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0244660A3 (de) * | 1986-04-10 | 1988-07-06 | Waldemar Dipl.-Ing. Kehler | Relevanz- und irrelevanz-analytisch bereichsprädiktives Code-Modulations-Verfahren zur Bestimmung, Prädiktion und Übertragung (ir-)relevanter (Bild-)Signal-Gebiete durch lokale Differenzenbildung |
AU2008201158C1 (en) * | 2002-07-15 | 2011-11-17 | Interdigital Vc Holdings, Inc. | Adaptive weighting of reference pictures in video encoding |
US6728315B2 (en) * | 2002-07-24 | 2004-04-27 | Apple Computer, Inc. | Method and apparatus for variable accuracy inter-picture timing specification for digital video encoding with reduced requirements for division operations |
ES2650068T3 (es) * | 2002-08-08 | 2018-01-16 | Godo Kaisha Ip Bridge 1 | Procedimiento de codificación y procedimiento de descodificación de imágenes en movimiento |
RU2375839C2 (ru) * | 2003-02-18 | 2009-12-10 | Нокиа Корпорейшн | Способ кодирования изображений |
EP1636998A2 (en) * | 2003-06-25 | 2006-03-22 | Thomson Licensing | Method and apparatus for weighted prediction estimation using a displaced frame differential |
JP4500213B2 (ja) * | 2005-05-20 | 2010-07-14 | オリンパスイメージング株式会社 | データ符号化装置、データ復号化装置、データ符号化方法、データ復号化方法、プログラム |
KR100703788B1 (ko) * | 2005-06-10 | 2007-04-06 | 삼성전자주식회사 | 스무딩 예측을 이용한 다계층 기반의 비디오 인코딩 방법,디코딩 방법, 비디오 인코더 및 비디오 디코더 |
WO2007029914A1 (en) * | 2005-07-19 | 2007-03-15 | Samsung Eletronics Co., Ltd. | Video encoding/decoding method and apparatus in temporal direct mode in hierarchica structure |
WO2007081116A1 (en) * | 2006-01-07 | 2007-07-19 | Electronics And Telecommunications Research Institute | Method and apparatus for video data encoding and decoding |
KR101406156B1 (ko) * | 2006-02-02 | 2014-06-13 | 톰슨 라이센싱 | 움직임 보상 예측을 위한 적응 가중 선택 방법 및 장치 |
FR2897213A1 (fr) * | 2006-02-08 | 2007-08-10 | Thomson Licensing Sas | Procede de codage par blocs d'images d'une sequence d'images video |
CN100588259C (zh) * | 2006-03-22 | 2010-02-03 | 弗劳恩霍夫应用研究促进协会 | 具有精度可缩放性的编码方案 |
JP2007300380A (ja) * | 2006-04-28 | 2007-11-15 | Ntt Docomo Inc | 画像予測符号化装置、画像予測符号化方法、画像予測符号化プログラム、画像予測復号装置、画像予測復号方法及び画像予測復号プログラム |
JP5155157B2 (ja) * | 2006-05-12 | 2013-02-27 | パナソニック株式会社 | 動画像復号化装置 |
JP4561688B2 (ja) * | 2006-05-18 | 2010-10-13 | 日本ビクター株式会社 | 動画像符号化装置、動画像符号化プログラム、動画像復号化装置、及び動画像復号化プログラム |
WO2007139266A1 (en) * | 2006-06-01 | 2007-12-06 | Electronics And Telecommunications Research Institute | Method and apparatus for video coding on pixel-wise prediction |
ZA200900857B (en) * | 2006-07-06 | 2010-05-26 | Thomson Licensing | Method and apparatus for decoupling frame number and/or picture order count (POC) for multi-view video encoding and decoding |
US20080165860A1 (en) * | 2006-08-31 | 2008-07-10 | Zohair Sahraoui | H.264 Data processing |
KR101365567B1 (ko) | 2007-01-04 | 2014-02-20 | 삼성전자주식회사 | 영상의 예측 부호화 방법 및 장치, 그 복호화 방법 및 장치 |
CN101573982B (zh) * | 2006-11-03 | 2011-08-03 | 三星电子株式会社 | 利用运动矢量跟踪编码/解码图像的方法和装置 |
KR20080065355A (ko) | 2007-01-09 | 2008-07-14 | 삼성전자주식회사 | 반도체 소자의 형성 방법 |
KR101365575B1 (ko) * | 2007-02-05 | 2014-02-25 | 삼성전자주식회사 | 인터 예측 부호화, 복호화 방법 및 장치 |
JP5417342B2 (ja) * | 2007-12-21 | 2014-02-12 | テレフオンアクチーボラゲット エル エム エリクソン(パブル) | ビデオ符号化に対する改善された画素予測 |
KR101596829B1 (ko) * | 2008-05-07 | 2016-02-23 | 엘지전자 주식회사 | 비디오 신호의 디코딩 방법 및 장치 |
KR20100027384A (ko) * | 2008-09-02 | 2010-03-11 | 삼성전자주식회사 | 예측 모드 결정 방법 및 장치 |
US8750378B2 (en) * | 2008-09-23 | 2014-06-10 | Qualcomm Incorporated | Offset calculation in switched interpolation filters |
JP2010251953A (ja) * | 2009-04-14 | 2010-11-04 | Sony Corp | 画像符号化装置と画像符号化方法およびコンピュータ・プログラム |
JP5183573B2 (ja) * | 2009-06-08 | 2013-04-17 | 三菱電機株式会社 | 動画像符号化装置 |
US8301231B2 (en) | 2009-08-27 | 2012-10-30 | Angel Medical, Inc. | Alarm testing and backup for implanted medical devices with vibration alerts |
KR101623124B1 (ko) | 2009-12-03 | 2016-05-24 | 에스케이 텔레콤주식회사 | 비디오 인코딩 장치 및 그 인코딩 방법, 비디오 디코딩 장치 및 그 디코딩 방법, 및 거기에 이용되는 방향적 인트라 예측방법 |
WO2011070730A1 (ja) * | 2009-12-07 | 2011-06-16 | 日本電気株式会社 | 映像符号化装置および映像復号装置 |
US9036692B2 (en) * | 2010-01-18 | 2015-05-19 | Mediatek Inc. | Motion prediction method |
KR101503269B1 (ko) * | 2010-04-05 | 2015-03-17 | 삼성전자주식회사 | 영상 부호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치, 및 영상 복호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치 |
KR101752418B1 (ko) * | 2010-04-09 | 2017-06-29 | 엘지전자 주식회사 | 비디오 신호 처리 방법 및 장치 |
ES2936323T3 (es) * | 2010-05-04 | 2023-03-16 | Lg Electronics Inc | Método y aparato para codificar y descodificar una señal de vídeo |
CN101883286B (zh) * | 2010-06-25 | 2012-12-05 | 无锡中星微电子有限公司 | 运动估计中的校准方法及装置、运动估计方法及装置 |
KR20120008299A (ko) | 2010-07-16 | 2012-01-30 | 광운대학교 산학협력단 | 화면 내 예측 블록의 영역별 특성을 고려한 적응적 필터 적용 장치 및 그 방법 |
US20120163457A1 (en) | 2010-12-28 | 2012-06-28 | Viktor Wahadaniah | Moving picture decoding method, moving picture coding method, moving picture decoding apparatus, moving picture coding apparatus, and moving picture coding and decoding apparatus |
KR101532665B1 (ko) * | 2011-03-14 | 2015-07-09 | 미디어텍 인크. | 시간적 움직임 백터 예측을 도출하기 위한 방법 및 장치 |
KR101645787B1 (ko) * | 2011-04-25 | 2016-08-04 | 엘지전자 주식회사 | 인트라 예측 방법과 이를 이용한 부호화기 및 복호화기 |
KR200466278Y1 (ko) | 2011-05-25 | 2013-04-08 | 소나테크 주식회사 | 멀티빔 사이드스캔소나의 트랜스듀서 |
JP5786478B2 (ja) * | 2011-06-15 | 2015-09-30 | 富士通株式会社 | 動画像復号装置、動画像復号方法、及び動画像復号プログラム |
CN106658013B (zh) * | 2011-06-24 | 2019-07-19 | 三菱电机株式会社 | 图像编码装置和方法、图像解码装置和方法以及记录介质 |
CN103891290B (zh) * | 2011-08-19 | 2018-02-02 | 瑞典爱立信有限公司 | 运动矢量处理 |
US20130070855A1 (en) * | 2011-09-17 | 2013-03-21 | Qualcomm Incorporated | Hybrid motion vector coding modes for video coding |
US9762904B2 (en) * | 2011-12-22 | 2017-09-12 | Qualcomm Incorporated | Performing motion vector prediction for video coding |
CN110830799B (zh) | 2012-01-18 | 2023-09-15 | 韩国电子通信研究院 | 视频解码装置、视频编码装置和传输比特流的方法 |
ES2703628T3 (es) * | 2012-02-29 | 2019-03-11 | Lg Electronics Inc | Método de predicción intercapa y aparato que hace uso del mismo |
US20130343459A1 (en) * | 2012-06-22 | 2013-12-26 | Nokia Corporation | Method and apparatus for video coding |
US9277200B2 (en) * | 2013-01-17 | 2016-03-01 | Qualcomm Incorporated | Disabling inter-view prediction for reference picture list in video coding |
US10070125B2 (en) * | 2013-07-31 | 2018-09-04 | Nokia Technologies Oy | Method and apparatus for video coding and decoding |
US10244253B2 (en) * | 2013-09-13 | 2019-03-26 | Qualcomm Incorporated | Video coding techniques using asymmetric motion partitioning |
WO2015056182A2 (en) * | 2013-10-15 | 2015-04-23 | Nokia Technologies Oy | Video encoding and decoding |
CN106416250B (zh) * | 2013-12-02 | 2020-12-04 | 诺基亚技术有限公司 | 视频编码和解码 |
KR102570087B1 (ko) * | 2018-03-27 | 2023-08-25 | (주)휴맥스 | 모션 보상을 이용하는 비디오 신호 처리 방법 및 장치 |
-
2013
- 2013-01-18 CN CN201911107049.6A patent/CN110830799B/zh active Active
- 2013-01-18 CN CN201911107027.XA patent/CN110868588B/zh active Active
- 2013-01-18 CN CN201911106301.1A patent/CN110830797B/zh active Active
- 2013-01-18 EP EP13738011.9A patent/EP2806635A4/en not_active Ceased
- 2013-01-18 KR KR1020130005653A patent/KR101366649B1/ko active IP Right Grant
- 2013-01-18 EP EP23218011.7A patent/EP4366307A3/en active Pending
- 2013-01-18 CN CN201911107047.7A patent/CN110708542B/zh active Active
- 2013-01-18 EP EP14181064.8A patent/EP2806649A1/en not_active Ceased
- 2013-01-18 CN CN201410471753.0A patent/CN104202608B/zh active Active
- 2013-01-18 CN CN201911106288.XA patent/CN110769250B/zh active Active
- 2013-01-18 EP EP19151111.2A patent/EP3499895A1/en not_active Ceased
- 2013-01-18 CN CN201911106307.9A patent/CN110830798B/zh active Active
- 2013-01-18 CN CN201911107030.1A patent/CN110809157B/zh active Active
- 2013-01-18 CN CN201911106293.0A patent/CN110769251B/zh active Active
- 2013-01-18 CN CN201911106289.4A patent/CN110855984B/zh active Active
- 2013-01-18 WO PCT/KR2013/000417 patent/WO2013109092A1/ko active Application Filing
- 2013-01-18 US US14/372,278 patent/US9807412B2/en active Active
- 2013-01-18 JP JP2014553257A patent/JP6101709B2/ja active Active
- 2013-01-18 CN CN201380009877.0A patent/CN104137549B9/zh active Active
- 2013-01-18 CN CN201911107028.4A patent/CN110650336B/zh active Active
- 2013-08-06 KR KR1020130093308A patent/KR101911242B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093272A patent/KR101435558B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093273A patent/KR101442785B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093307A patent/KR101918079B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093306A patent/KR101853635B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093274A patent/KR101461500B1/ko active IP Right Grant
- 2013-08-06 KR KR1020130093305A patent/KR101435557B1/ko active IP Right Grant
-
2014
- 2014-05-07 KR KR1020140054303A patent/KR101461501B1/ko active IP Right Grant
- 2014-05-07 KR KR1020140054306A patent/KR101918080B1/ko active IP Right Grant
- 2014-07-30 US US14/446,447 patent/US9374595B2/en active Active
- 2014-12-01 JP JP2014243486A patent/JP2015080240A/ja active Pending
-
2016
- 2016-05-10 US US15/151,163 patent/US9635380B2/en active Active
- 2016-05-10 US US15/151,125 patent/US9621913B2/en active Active
- 2016-05-10 US US15/151,097 patent/US9621912B2/en active Active
- 2016-05-10 US US15/151,142 patent/US9635379B2/en active Active
-
2017
- 2017-02-27 JP JP2017035446A patent/JP6495359B2/ja active Active
- 2017-09-27 US US15/717,379 patent/US10397598B2/en active Active
- 2017-09-28 KR KR1020170126207A patent/KR102013638B1/ko active IP Right Grant
-
2019
- 2019-03-05 JP JP2019039837A patent/JP2019126073A/ja active Pending
- 2019-06-03 US US16/430,348 patent/US10834418B2/en active Active
- 2019-08-19 KR KR1020190100771A patent/KR102100700B1/ko active IP Right Grant
-
2020
- 2020-04-08 KR KR1020200042573A patent/KR102205472B1/ko active Application Filing
- 2020-09-22 US US17/028,678 patent/US11228778B2/en active Active
-
2021
- 2021-01-14 KR KR1020210005377A patent/KR102315438B1/ko active IP Right Grant
- 2021-02-16 JP JP2021022496A patent/JP2021106384A/ja active Pending
- 2021-10-14 KR KR1020210136956A patent/KR102404417B1/ko active IP Right Grant
- 2021-12-07 US US17/544,709 patent/US11706438B2/en active Active
-
2022
- 2022-05-27 KR KR1020220065091A patent/KR102509543B1/ko active IP Right Grant
-
2023
- 2023-01-19 JP JP2023006858A patent/JP2023052531A/ja active Pending
- 2023-05-22 US US18/200,085 patent/US12003752B2/en active Active
- 2023-06-28 JP JP2023106424A patent/JP2023123736A/ja active Pending
-
2024
- 2024-04-30 US US18/651,227 patent/US20240283963A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20070115551A (ko) * | 2006-06-02 | 2007-12-06 | 삼성전자주식회사 | N-bit 영상 데이터 부호화/복호화 장치 및 방법 |
KR20090043353A (ko) * | 2007-10-29 | 2009-05-06 | 삼성전자주식회사 | 영상의 디퓨전 특성을 이용한 영상 부호화/복호화 방법 및장치 |
KR20100092268A (ko) * | 2009-02-12 | 2010-08-20 | 에스케이 텔레콤주식회사 | 영상 부호화/복호화 장치 및 방법 |
KR20110138098A (ko) * | 2010-06-18 | 2011-12-26 | 삼성전자주식회사 | 영상 인트라 예측 방법 및 장치 및 그를 적용한 영상 디코딩 방법 및 장치 |
KR20120005934A (ko) * | 2010-07-09 | 2012-01-17 | 삼성전자주식회사 | 움직임 벡터의 부호화 방법 및 장치, 그 복호화 방법 및 장치 |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013109092A1 (ko) | 영상 부호화 및 복호화 방법 및 장치 | |
WO2020166897A1 (ko) | Dmvr 기반의 인터 예측 방법 및 장치 | |
WO2018221817A1 (ko) | 영상 코딩 시스템에서 인트라 예측에 따른 영상 디코딩 방법 및 장치 | |
WO2016052977A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2020071830A1 (ko) | 히스토리 기반 움직임 정보를 이용한 영상 코딩 방법 및 그 장치 | |
WO2017052081A1 (ko) | 영상 코딩 시스템에서 인터 예측 방법 및 장치 | |
WO2016085231A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2017179835A1 (ko) | 인트라 예측 기반의 비디오 신호 처리 방법 및 장치 | |
WO2016143991A1 (ko) | 저 복잡도 변환에 기반한 영상 부호화 및 복호화 방법 및 이를 이용하는 장치 | |
WO2017082443A1 (ko) | 영상 코딩 시스템에서 임계값을 이용한 적응적 영상 예측 방법 및 장치 | |
WO2014171713A1 (ko) | 인트라 예측을 이용한 비디오 부호화/복호화 방법 및 장치 | |
WO2013032073A1 (ko) | Amvp 모드에서의 예측 블록 생성 방법 | |
WO2012173415A2 (ko) | 움직임 정보의 부호화 방법 및 장치, 그 복호화 방법 및 장치 | |
WO2019117639A1 (ko) | 변환에 기반한 영상 코딩 방법 및 그 장치 | |
WO2019194568A1 (ko) | 어파인 모델 기반의 영상 부호화/복호화 방법 및 장치 | |
WO2016159610A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016048092A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016114583A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2013109123A1 (ko) | 인트라 예측 처리 속도 향상을 위한 비디오의 부호화 방법 및 장치, 비디오의 복호화 방법 및 장치 | |
WO2015020504A1 (ko) | 병합 모드 결정 방법 및 장치 | |
WO2016064123A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016122251A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2016122253A1 (ko) | 비디오 신호 처리 방법 및 장치 | |
WO2020141831A2 (ko) | 인트라 블록 카피 예측을 이용한 영상 코딩 방법 및 장치 | |
WO2019013363A1 (ko) | 영상 코딩 시스템에서 주파수 도메인 잡음 감소 방법 및 장치 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13738011 Country of ref document: EP Kind code of ref document: A1 |
|
DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 14372278 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2014553257 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REEP | Request for entry into the european phase |
Ref document number: 2013738011 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013738011 Country of ref document: EP |