WO2017131473A1 - Method and device for encoding and decoding video by using prediction - Google Patents

Info

Publication number
WO2017131473A1
Authority
WO
Grant status
Application
Patent type
Prior art keywords
prediction
block
unit
reference
reconstructed
Prior art date
Application number
PCT/KR2017/000965
Other languages
French (fr)
Korean (ko)
Inventor
임성창
고현석
강정원
이진호
전동산
김휘용
전병우
김남욱
전승수
최진수
Original Assignee
한국전자통신연구원
성균관대학교산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/567 Motion estimation based on rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution

Abstract

A method and a device for performing encoding and decoding by using first prediction and second prediction are disclosed. In the encoding of a current block, a residual signal of the current block is generated on the basis of the current block, the first prediction, and the second prediction. In addition, information on the residual signal encoded by encoding the residual signal is generated. In the decoding of the current block, a restored residual signal of the current block is generated. A restored block of the current block is generated on the basis of the restored residual signal, the second prediction, and the first prediction.

Description

Method and apparatus for encoding and decoding video using prediction

The embodiments described herein relate to a video decoding method, a decoding apparatus, an encoding method, and an encoding apparatus, and more particularly to methods and devices that use prediction for a block when performing encoding and decoding of video.

Broadcast services with High Definition (HD) resolution have spread worldwide through the sustained development of the broadcasting and telecommunications industries. With this spread, many users have become accustomed to high-resolution, high-quality images and/or video.

In order to meet user demand for higher image quality, many organizations are spurring the development of next-generation imaging equipment. Beyond High Definition TV (HDTV) and Full HD (FHD) TV, user interest in Ultra High Definition (UHD) TV, which has at least four times the resolution of FHD TV, has increased. With this increased interest, image encoding/decoding techniques for images of higher resolution and higher quality are required.

An image encoding/decoding apparatus and method may use inter prediction, intra prediction, entropy coding, and similar techniques in order to perform encoding/decoding of a high-resolution, high-quality image. Inter prediction may be a technique for predicting the values of pixels in the current picture using a temporally preceding picture and/or a temporally following picture. Intra prediction may be a technique for predicting the values of pixels in the current picture using information about pixels in the current picture. Entropy coding may be a technique that assigns short codes to symbols that appear frequently and long codes to symbols that appear rarely.
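The entropy-coding idea above, shorter codes for more frequent symbols, can be sketched with a toy Huffman code-length computation. This is illustrative only; real codecs such as HEVC use context-adaptive arithmetic coding, not this scheme.

```python
import heapq
from collections import Counter

def code_lengths(symbols):
    """Build Huffman code lengths: frequent symbols receive shorter codes."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tie_breaker, {symbol: length_so_far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)
        f2, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

lengths = code_lengths("aaaabbc")
# 'a' appears most often, so it receives the shortest code length.
```

Here the most frequent symbol 'a' ends up with a 1-bit code while the rarer 'b' and 'c' get 2-bit codes, which is exactly the frequency-to-length assignment the paragraph describes.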

In the encoding and decoding of an image, prediction may mean generating a prediction signal similar to the original signal. Prediction may be classified into prediction that refers to a spatially reconstructed image, prediction that refers to a temporally reconstructed image, and prediction for other symbols. In other words, a temporal reference may mean referring to a temporally reconstructed image, and a spatial reference may mean referring to a spatially reconstructed image.

The current block may be the target block of the current encoding or decoding. The current block may be referred to as the target block or target unit. In encoding, the current block may be referred to as the encoding target block or encoding target unit. In decoding, the current block may be referred to as the decoding target block or decoding target unit.

Inter prediction may be a technique for predicting the current block using a temporal reference and a spatial reference. Intra prediction may be a technique for predicting the current block using only a spatial reference.

When performing intra prediction to encode the current block, image encoding/decoding techniques select only one prediction mode from among a plurality of prediction modes in order to reduce spatial redundancy. For example, High Efficiency Video Coding (HEVC) uses 35 intra prediction modes.

The encoder generates a prediction block for the current block using each of the available prediction modes, and selects the one prediction mode that yields the best result of encoding the current block. However, even with the prediction mode that yields the best result, an error remains between the original block and the prediction block. This error is represented as a residual block.
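The mode search described above can be sketched as follows: try every candidate prediction for the block and keep the one with the smallest error (here, sum of absolute differences). This is a minimal illustration; real encoders minimize a rate-distortion cost, and the modes and block values below are invented, not HEVC's actual 35 intra modes.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def best_mode(original, candidates):
    """Return (mode, prediction, residual) minimizing SAD.

    `candidates` maps a mode id to its prediction block.
    """
    mode = min(candidates, key=lambda m: sad(original, candidates[m]))
    pred = candidates[mode]
    residual = [[o - p for o, p in zip(ro, rp)]
                for ro, rp in zip(original, pred)]
    return mode, pred, residual

orig = [[10, 12], [11, 13]]
cands = {"DC": [[11, 11], [11, 11]], "VERT": [[10, 12], [10, 12]]}
mode, pred, res = best_mode(orig, cands)
```

Even for the winning mode, the residual is nonzero, which is the remaining error the paragraph says must be encoded as a residual block.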

For some pixels, the error between the original block and the prediction block is large. With such a large error, a sufficient reduction of spatial redundancy may not be achieved even after transform and/or quantization is applied to the residual block.

One embodiment may provide a method and apparatus for reducing the prediction error that occurs when generating the prediction of a prediction unit.

One embodiment may provide a method and apparatus for reducing the prediction error caused by the prediction directions of prediction modes.

One embodiment may provide a method and apparatus for performing a selective second prediction, in the direction of the prediction mode of the first prediction, for the prediction error according to the first prediction.

One embodiment may provide a method and apparatus for performing a selective second prediction for part of the prediction error according to the first prediction.

One embodiment may provide a method and apparatus for performing a selective second prediction for a severely distorted part of the prediction error according to the first prediction.

One embodiment may provide a method and apparatus for preventing the additional prediction error that may occur when a second prediction is performed for the whole of the prediction error, by performing the selective second prediction only for part of the prediction error according to the first prediction.

One embodiment may provide a method and apparatus for achieving a sufficient reduction of spatial redundancy through transform and quantization by reducing the large residual error generated along the prediction direction.

In one aspect, there is provided an encoding method comprising: generating a residual signal of a current block based on the current block, a first prediction, and a second prediction; and generating information on the encoded residual signal by performing encoding on the residual signal, wherein the second prediction is a prediction for the residual of the first prediction.

In another aspect, there is provided a decoding apparatus comprising: a reconstructed-residual-signal generation unit that generates a reconstructed residual signal of a current block; and a reconstructed-block generation unit that generates a reconstructed block of the current block based on the reconstructed residual signal, the second prediction, and the first prediction.

According to another aspect, there is provided a decoding method comprising: generating a reconstructed residual signal of a current block; and generating a reconstructed block of the current block based on the reconstructed residual signal, the second prediction, and the first prediction.

The reconstructed block may correspond to the sum of the reconstructed residual signal, the first prediction signal generated by the first prediction, and the second prediction signal generated by the second prediction.
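The reconstruction rule above is an element-wise sum of three signals. A minimal sketch, with purely illustrative block values:

```python
def reconstruct(residual, pred1, pred2):
    """Element-wise sum of the reconstructed residual, the first
    prediction signal, and the second prediction signal."""
    return [[r + p1 + p2 for r, p1, p2 in zip(rr, rp1, rp2)]
            for rr, rp1, rp2 in zip(residual, pred1, pred2)]

rec = reconstruct([[1, -1], [0, 2]],        # reconstructed residual
                  [[100, 100], [100, 100]], # first prediction signal
                  [[2, 0], [0, 2]])         # second prediction signal
```

When the second prediction is not used, its signal can be taken as all zeros, and the sum reduces to the usual residual-plus-prediction reconstruction.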

The first prediction and the second prediction may each be an intra prediction.

The prediction direction of the second prediction may be the same as the prediction direction of the first prediction.

At least part of the plurality of first reference blocks of the first prediction and the plurality of second reference blocks of the second prediction may differ from each other.

The type of the first reference block of the first prediction and the type of the second reference block of the second prediction may differ from each other.

The first reference block of the first prediction may be a reconstructed neighboring block adjacent to the current block.

The second reference block of the second prediction may be a reconstructed neighboring residual block adjacent to the current block.

The reconstructed neighboring residual block may be the difference between a reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.

The region of the reference samples of the second prediction may be part of the region of the reference samples of the first prediction.

The region to which the second prediction is applied may be specified based on the range of reference samples used for the second prediction.

The region to which the second prediction is applied may be specified based on the prediction direction of the second prediction.

The range of the reference samples may be specified by information indicating the number of reference samples used for the second prediction and information pointing to the first reference sample used for the second prediction.

The region to which the second prediction is applied may be the region generated as the reference samples of the above range proceed along the prediction direction of the second prediction.

The value of a reference sample outside the above range may be regarded as null or zero.
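For a purely vertical prediction direction, the region rules above can be sketched as follows: only the columns covered by the limited range of reference samples receive the second prediction, and samples outside the range are treated as zero, so the remaining columns stay zero. The start offset and sample count are illustrative parameters, not values defined by this document.

```python
def second_prediction_vertical(width, height, ref_row, start, count):
    """Propagate `count` reference samples from `ref_row`, beginning at
    column `start`, straight down through the block; columns outside
    the range (reference value treated as zero) remain zero."""
    block = [[0] * width for _ in range(height)]
    for x in range(start, min(start + count, width)):
        for y in range(height):
            block[y][x] = ref_row[x]
    return block

blk = second_prediction_vertical(4, 2, ref_row=[5, 6, 7, 8], start=1, count=2)
```

The region to which the second prediction applies is thus exactly the strip swept out by the in-range reference samples along the prediction direction.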

When the current block is encoded, if the information indicating whether to use the second prediction indicates that the second prediction is not used, the second prediction may not be used.

When the prediction mode of the first prediction is a non-directional mode, the second prediction may not be used.

When the boundary of the current block is close to the top or left boundary, the second prediction may not be used.

The boundary may be a boundary of a picture, a boundary of a tile, or a boundary of a slice.

Whether the second prediction is used may be determined based on the number of reconstructed blocks adjacent to the current block.

When the second prediction is not used, the reconstructed block may be generated based on the reconstructed residual signal and the first prediction signal according to the first prediction.

A method and apparatus for reducing the prediction error that occurs when generating the prediction of a prediction unit are provided.

A method and apparatus for reducing the prediction error caused by the prediction directions of prediction modes are provided.

A method and apparatus for performing a selective second prediction, in the direction of the prediction mode of the first prediction, for the prediction error according to the first prediction are provided.

A method and apparatus for performing a selective second prediction for part of the prediction error according to the first prediction are provided.

A method and apparatus for performing a selective second prediction for a severely distorted part of the prediction error according to the first prediction are provided.

A method and apparatus for preventing the additional prediction error that may occur when a second prediction is performed for the whole of the prediction error, by performing the selective second prediction only for part of the prediction error according to the first prediction, are provided.

A method and apparatus for achieving a sufficient reduction of spatial redundancy through transform and quantization by reducing the prediction error generated along the prediction direction are provided.

1 is a block diagram showing a configuration according to an embodiment of a coding apparatus to which the present invention is applied.

2 is a block diagram showing a configuration according to an embodiment of a decoding apparatus to which the present invention is applied.

3 is a view schematically showing the partition structure of an image in the encoding and decoding of the image.

4 is a diagram showing forms of a prediction unit (PU) that a coding unit (CU) may include.

5 is a diagram showing forms of a transform unit (TU) that may be included in a coding unit (CU).

6 is a view for explaining an embodiment of an intra-prediction process.

7 is a view for explaining the position of a reference sample to be used in intra-prediction process.

8 is a view for explaining an embodiment of an inter-prediction process.

9 shows prediction errors in a case where vertical-direction prediction is applied to a source image, according to an example.

10 is a structural diagram of an encoding apparatus according to an embodiment.

11 is a flowchart of an encoding method according to an embodiment.

12 shows a first prediction and a second prediction using the intra-prediction in accordance with one embodiment.

13 is a flow diagram of a method for generating a residual block according to one embodiment.

14 is a flowchart of a method of generating a reconstructed block, according to an example.

Figure 15 illustrates an encoding process according to an embodiment.

16 is a structural diagram of a decoding apparatus according to an embodiment.

17 is a flow chart of a decoding method according to an embodiment.

18 is a flowchart of a method of generating a reconstructed block, according to one embodiment.

Figure 19 illustrates a decoding process according to an embodiment.

The following detailed description of exemplary embodiments refers to the accompanying drawings, which illustrate specific embodiments. These embodiments are described in sufficient detail to enable those skilled in the art to practice them. It should be understood that the various embodiments, although different from one another, are not necessarily mutually exclusive. For example, a particular feature, structure, or characteristic described herein in connection with one embodiment may be implemented in other embodiments without departing from the spirit and scope of the invention. In addition, it should be understood that the location or arrangement of individual components within each disclosed embodiment may be changed without departing from the spirit and scope of the embodiment. Therefore, the following description is not to be taken in a limiting sense, and the scope of the exemplary embodiments is limited only by the appended claims, along with the full range of equivalents to which the claims are entitled, if properly described.

Like reference numerals in the drawings refer to the same or similar functions throughout the several views. The shapes and sizes of elements in the drawings may be exaggerated for clearer explanation.

When a component is referred to as being "connected" or "coupled" to another component, the two components may be directly connected or coupled to each other, but it should also be understood that another component may be present between the two components. Further, the description that an exemplary embodiment "comprises" a specific configuration does not exclude configurations other than the specified one; additional configurations may be included within the scope of the technical idea of the exemplary embodiments.

Terms such as "first" and "second" may be used to describe various components, but the components shall not be restricted by these terms. These terms are used only to distinguish one component from another. For example, without departing from the scope, a first component may be named a second component, and similarly, a second component may be named a first component.

In addition, the components shown in the embodiments are illustrated independently in order to represent different characteristic functions; this does not mean that each component is composed of a separate hardware or software unit. In other words, the components are listed separately for convenience of description. For example, at least two of the components may be combined into a single component, or one component may be divided into a plurality of components. Embodiments in which components are integrated, and embodiments in which a component is separated, are also included in the scope without departing from the spirit.

In addition, some components may not be essential components that perform essential functions, but optional components that merely improve performance. The embodiments may be implemented to include only the essential components needed to realize the essence of the embodiments; a structure that excludes the optional components used only to improve performance is also included in the scope.

Hereinafter, embodiments will be described in detail with reference to the accompanying drawings so that those of ordinary skill in the art can easily practice them. In the following description of the embodiments, detailed descriptions of known functions and configurations will be omitted when they are determined to obscure the gist of the present specification.

Hereinafter, "image" may mean a single picture constituting a video, or may mean the video itself. For example, "encoding and/or decoding of an image" may mean "encoding and/or decoding of a video", and may also mean "encoding and/or decoding of one of the images constituting the video".

In the following, "video" and "motion picture" may be used with the same meaning and may be substituted for each other.

Hereinafter, "image", "picture", "frame", and "screen" may be used with the same meaning and may be substituted for each other.

In the embodiments, specific information, data, flags, elements, and attributes may each have a value. The value "0" of such information, data, flags, elements, and attributes may represent logical false or a first predefined value. In other words, the value "0", logical false, and the first predefined value may be substituted for one another. The value "1" of such information, data, flags, elements, and attributes may represent logical true or a second predefined value. In other words, the value "1", logical true, and the second predefined value may be substituted for one another.

When a variable such as i or j is used to indicate a row, a column, or an index, the value of i may be an integer greater than or equal to 0, or an integer greater than or equal to 1. In other words, in the embodiments, rows, columns, indices, and the like may be counted from 0 or counted from 1.

In the following, the terms used in the embodiments will be described.

Unit: "unit" may represent a unit of encoding and decoding of an image. The meanings of "unit" and "block" may be the same. Further, the terms "unit" and "block" may be substituted for each other.

- A unit (or block) may be an MxN array of samples. M and N may each be a positive integer. A unit may commonly mean a two-dimensional array of samples. A sample may be a pixel or a pixel value.

- The terms "pixel" and "sample" may be used with the same meaning and may be substituted for each other.

- In the encoding and decoding of an image, a unit may be an area generated by the partitioning of a single image. A single image may be partitioned into a plurality of units. In the encoding and decoding of an image, predefined processing may be carried out on a unit according to the type of the unit. According to function, unit types may be classified into a macro unit, a coding unit (CU), a prediction unit (PU), and a transform unit (TU). A single unit may be further partitioned into sub-units each having a smaller size than the unit.

- Unit partition information may include information about the depth of the unit. The depth information may indicate the number of times and/or the degree to which the unit is partitioned.

- A single unit may be hierarchically partitioned into a plurality of sub-units with depth information based on a tree structure. In other words, the unit and the sub-units generated by partitioning the unit may correspond to a node and the child nodes of the node, respectively. Each partitioned sub-unit may have depth information. Since the depth information of a unit indicates the number of times and/or the degree to which the unit is partitioned, the partition information of a sub-unit may include information about the size of the sub-unit.

- In the tree structure, the root node may correspond to the first unit before partitioning. The topmost node may be referred to as the root node. In addition, the root node may have the minimum depth value. At this time, the root node may have a depth of level 0.

A node having a depth of level 1 may represent a unit generated when the first unit is partitioned once. A node having a depth of level 2 may represent a unit generated when the first unit is partitioned twice.

A node having a depth of level n may represent a unit generated when the first unit is partitioned n times.

- A leaf node may be the lowest node, which cannot be partitioned further. The depth of a leaf node may be the maximum level. For example, a default value for the maximum level may be 3.
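For a quadtree-style partition, the depth hierarchy above has a simple arithmetic consequence: a node at depth n corresponds to the first unit partitioned n times, so a WxW root yields 4^n sub-units of size (W/2^n) x (W/2^n) at depth n. A small sketch (the 64x64 root size is illustrative; this document does not fix a root size):

```python
def units_at_depth(root_size, depth):
    """Return (sub_unit_size, sub_unit_count) after `depth` quad splits."""
    size = root_size >> depth  # each split halves the width and height
    count = 4 ** depth         # each split turns one unit into four
    return size, count

size, count = units_at_depth(64, 3)  # depth 3 matches the example max level
```

At the example maximum level of 3, a 64x64 root would decompose into 64 leaf units of 8x8 each.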

Transform Unit: a transform unit may be a basic unit of residual-signal encoding and/or decoding, such as transform, inverse transform, quantization, inverse quantization, transform-coefficient encoding, and transform-coefficient decoding. A single transform unit may be partitioned into a plurality of transform units having smaller sizes.

Prediction Unit: a prediction unit may be a basic unit in the performing of prediction or compensation. A prediction unit may be divided into a number of partitions. The plurality of partitions may also be basic units in the performing of prediction or compensation. A partition created by the division of a prediction unit may itself also be a prediction unit.

Reconstructed Neighboring Unit: a reconstructed neighboring unit may be a unit that has already been encoded or decoded and reconstructed in the vicinity of the encoding target unit or decoding target unit. A reconstructed neighboring unit may be a unit spatially adjacent or temporally adjacent to the target unit.

Prediction unit partition: a prediction unit partition may mean the form in which a prediction unit is divided.

Parameter Set: a parameter set may correspond to header information in the structure of a bitstream. For example, parameter sets may include a sequence parameter set, a picture parameter set, and an adaptation parameter set.

Rate-distortion optimization: the encoder may use rate-distortion optimization to provide high coding efficiency using a combination of the size of a coding unit, a prediction mode, the size of a prediction unit, motion information, and the size of a transform unit.

- The rate-distortion optimization scheme may calculate the rate-distortion cost of each combination in order to select the optimal combination from among the combinations. The rate-distortion cost may be calculated using Equation 1 below. In general, the combination whose rate-distortion cost is minimal may be selected as the optimal combination by the rate-distortion optimization scheme.

[Equation 1] rate-distortion cost = D + λ × R

D may represent the distortion. D may be the mean of the squares of the differences between the original transform coefficients and the reconstructed transform coefficients in a transform block (mean square error).

R may represent the rate. R may represent a bit rate using related context information.

λ may represent a Lagrangian multiplier. R may include not only bits of coding parameters, such as prediction mode information, motion information, and a coded block flag, but also bits generated by encoding transform coefficients.

To calculate accurate D and R, the encoder performs processes such as inter prediction and/or intra prediction, transform, quantization, entropy encoding, inverse quantization, and inverse transform. These processes can greatly increase the complexity of the encoding apparatus.
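Equation 1 can be sketched directly: for each candidate combination, compute the cost D + λ·R and keep the minimum. The candidate (D, R) pairs below are illustrative numbers, not measured values from any codec.

```python
def rd_cost(d, r, lam):
    """Rate-distortion cost of Equation 1: J = D + lambda * R."""
    return d + lam * r

def pick_best(candidates, lam):
    """Return the candidate (name, D, R) with the smallest RD cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

cands = [("mode_a", 120.0, 10.0),  # low rate, higher distortion
         ("mode_b", 90.0, 25.0)]   # higher rate, lower distortion

best_lo = pick_best(cands, lam=1.0)  # cheap bits: distortion dominates
best_hi = pick_best(cands, lam=4.0)  # expensive bits: rate dominates
```

Note how the winner flips with λ: a small multiplier favors the low-distortion candidate, while a large multiplier penalizes its extra bits, which is the trade-off rate-distortion optimization balances.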

Reference picture: a reference picture may be a picture used for inter prediction or motion compensation. A reference picture may be a picture containing the reference unit referred to by the target unit for inter prediction or motion compensation. The meanings of "picture" and "image" may be the same, and the terms "picture" and "image" may be substituted for each other.

Reference picture list: a reference picture list may be a list containing reference pictures used for inter prediction or motion compensation. Types of reference picture lists may include List Combined (LC), List 0 (L0), and List 1 (L1).

Motion Vector (MV): a motion vector may be a two-dimensional vector used in inter prediction. For example, an MV may be expressed in a form such as (mv_x, mv_y). mv_x may represent the horizontal component, and mv_y may represent the vertical component.

- An MV may indicate an offset between the target picture and a reference picture.
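The offset interpretation above can be sketched as follows: a motion vector (mv_x, mv_y) shifts the block position in the reference picture relative to its position in the current picture. This sketch is integer-pel only; real codecs also interpolate fractional positions, and the picture contents are invented values.

```python
def fetch_reference_block(ref, x, y, w, h, mv_x, mv_y):
    """Copy a w x h block from `ref` at position (x + mv_x, y + mv_y),
    i.e. the current-picture position offset by the motion vector."""
    return [row[x + mv_x : x + mv_x + w]
            for row in ref[y + mv_y : y + mv_y + h]]

# Illustrative 6x6 reference picture: sample value encodes its position.
ref_pic = [[10 * r + c for c in range(6)] for r in range(6)]
blk = fetch_reference_block(ref_pic, x=2, y=2, w=2, h=2, mv_x=1, mv_y=-1)
```

The fetched block comes from one column to the right and one row up relative to the block's own position, exactly the displacement the MV encodes.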

Search range: the search range may be a two-dimensional area in which the search for the MV is performed during inter prediction. For example, the size of the search range may be MxN. M and N may each be a positive integer.

1 is a block diagram showing a configuration according to an embodiment of a coding apparatus to which the present invention is applied.

The encoding apparatus 100 may be a video encoding apparatus or an image encoding apparatus. A video may include one or more images. The encoding apparatus 100 may sequentially encode one or more images of the video over time.

Referring to FIG. 1, the encoding apparatus 100 may include an inter prediction unit 110, an intra prediction unit 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference picture buffer 190.

The encoding apparatus 100 may perform encoding on an input image in intra mode and/or inter mode. The input image may be referred to as the current image, which is the target of current encoding.

In addition, the encoding apparatus 100 may generate a bitstream containing encoding information through the encoding of the input image, and may output the generated bitstream.

When the intra mode is used, the switch 115 may be switched to intra. When the inter mode is used, the switch 115 may be switched to inter.

The encoding apparatus 100 may generate a prediction block for an input block of the input image. In addition, after the prediction block has been generated, the encoding apparatus 100 may encode the residual between the input block and the prediction block. The input block may be referred to as the current block, which is the target of current encoding.

When the prediction mode is the intra mode, the intra prediction unit 120 may use the pixel values of previously encoded blocks around the current block as reference pixels. The intra prediction unit 120 may perform spatial prediction for the current block using the reference pixels, and may generate prediction samples for the current block via the spatial prediction.

The inter prediction unit 110 may include a motion prediction unit and a motion compensation unit.

When the prediction mode is the inter mode, the motion prediction unit may search a reference image for the area that best matches the current block during the motion prediction process, and may derive a motion vector for the current block and the matched area. The reference image may be stored in the reference picture buffer 190; a reference image for which encoding and/or decoding has been processed may be stored in the reference picture buffer 190.

The motion compensation unit may generate a prediction block by performing motion compensation using the motion vector. Here, the motion vector may be a two-dimensional vector used for inter prediction. In addition, the motion vector may indicate an offset between the current image and the reference image.
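The motion estimation and motion compensation steps above can be sketched as a brute-force full search: scan every candidate offset in a small search range, keep the motion vector with the lowest SAD, and form the prediction block at that vector. The search range and picture contents are illustrative; real encoders use fast search patterns and rate-distortion costs.

```python
def full_search(cur_blk, ref, x, y, rng):
    """Return (mv, prediction) minimizing SAD over offsets in [-rng, rng]^2."""
    h, w = len(cur_blk), len(cur_blk[0])
    best = None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            cand = [row[x + dx : x + dx + w]
                    for row in ref[y + dy : y + dy + h]]
            cost = sum(abs(a - b) for ra, rb in zip(cur_blk, cand)
                       for a, b in zip(ra, rb))
            if best is None or cost < best[0]:
                best = (cost, (dx, dy), cand)
    return best[1], best[2]

# Reference picture with unique sample values; the current block equals
# the reference content at an offset of (+1, +1) from the search anchor,
# so the search should recover exactly that motion vector.
ref_pic = [[r + 10 * c for c in range(8)] for r in range(8)]
cur = [row[4:6] for row in ref_pic[4:6]]
mv, pred = full_search(cur, ref_pic, x=3, y=3, rng=2)
```

The returned prediction block matches the current block exactly, i.e. motion compensation at the found vector yields a zero residual in this contrived case.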

The subtractor 125 may generate a residual block, which is the difference between the input block and the prediction block. The residual block may also be referred to as a residual signal.

The transform unit 130 may generate transform coefficients by performing a transform on the residual block, and may output the generated transform coefficients. Here, a transform coefficient may be a coefficient value generated by transforming the residual block. When a transform skip mode is applied, the transform unit 130 may omit the transform of the residual block.

Quantized transform coefficient levels may be generated by applying quantization to the transform coefficients. Hereinafter, in the embodiments, the quantized transform coefficient levels may also be referred to as transform coefficients.

The quantization unit 140 may generate quantized transform coefficient levels by quantizing the transform coefficients according to a quantization parameter, and may output the generated quantized transform coefficient levels. At this time, the quantization unit 140 may quantize the transform coefficients using a quantization matrix.
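
The quantization step above can be sketched as follows. This is an illustrative simplification under the assumption of a single scalar quantization step derived from the quantization parameter; the names `quantize`/`dequantize` and the rounding rule are hypothetical, not the embodiment's exact arithmetic:

```python
def quantize(coeffs, qstep):
    """Map transform coefficients to quantized levels by dividing by the
    quantization step (truncation toward zero, for simplicity)."""
    return [int(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization: scale the levels back by the step size."""
    return [lvl * qstep for lvl in levels]

levels = quantize([100, -37, 8, 0], 10)
recon = dequantize(levels, 10)  # lossy: small coefficients collapse to zero
```

The loss introduced here (e.g. 8 quantized to 0) is the source of the coding error that the in-loop filters later try to compensate for.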

The entropy encoding unit 150 may generate a bitstream by performing entropy encoding, based on a probability distribution, on the values calculated by the quantization unit 140 and/or on coding parameter values calculated in the encoding procedure, and may output the generated bitstream.

The entropy encoding unit 150 may perform entropy encoding on information for decoding an image, in addition to information about the pixels of the image. For example, the information for decoding an image may include syntax elements.

Coding parameters may be information required for encoding and/or decoding. Coding parameters may include information that is encoded by the encoding apparatus and transferred to a decoding apparatus, and may also include information that can be derived in the encoding or decoding procedure. For example, the information transferred to the decoding apparatus includes syntax elements.

For example, the coding parameters may include values or statistics such as a prediction mode, a motion vector, a reference picture index, a coded block pattern, presence or absence of a residual signal, transform coefficients, quantized transform coefficients, a quantization parameter, a block size, and block partition information. The prediction mode may indicate an intra prediction mode or an inter prediction mode.

The residual signal may denote the difference between the original signal and a prediction signal. Alternatively, the residual signal may be a signal generated by transforming the difference between the original signal and the prediction signal. Alternatively, the residual signal may be a signal generated by transforming and quantizing the difference between the original signal and the prediction signal. The residual block may be a block-level residual signal.

When entropy encoding is applied, fewer bits may be assigned to symbols having a higher probability of occurrence, and more bits may be assigned to symbols having a lower probability of occurrence. As symbols are represented via this assignment, the size of the bit string for the symbols to be encoded may be reduced. Therefore, the compression efficiency of video encoding may be improved through entropy encoding.

Further, coding methods such as exponential-Golomb coding, Context-Adaptive Variable-Length Coding (CAVLC), and Context-Adaptive Binary Arithmetic Coding (CABAC) may be used for entropy encoding. For example, the entropy encoding unit 150 may perform entropy encoding using a Variable-Length Coding/Code (VLC) table. Further, the entropy encoding unit 150 may derive a binarization method for a target symbol, and may derive a probability model for a target symbol/bin. The entropy encoding unit 150 may then perform entropy encoding using the derived binarization method or probability model.
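
As an illustration of how such codes give shorter codewords to more probable (smaller) symbols, the following sketch implements zeroth-order exponential-Golomb binarization for unsigned integers. This is the standard textbook construction, shown here for illustration only:

```python
def exp_golomb_encode(n):
    """Zeroth-order exp-Golomb code for an unsigned integer n:
    write (n + 1) in binary, then prefix it with one '0' per bit
    after the leading '1'."""
    bits = bin(n + 1)[2:]          # binary string of n + 1
    prefix = '0' * (len(bits) - 1) # length prefix in unary
    return prefix + bits

codes = [exp_golomb_encode(n) for n in range(5)]
# codeword length grows roughly logarithmically with the symbol value,
# so frequently occurring small values cost fewer bits
```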

Since the encoding apparatus 100 performs encoding via inter prediction, the encoded current image may be used as a reference image for other image(s) to be processed subsequently. Therefore, the encoding apparatus 100 may decode the encoded current image again, and may store the decoded image as a reference image. For this decoding, inverse quantization and an inverse transform may be performed on the encoded current image.

The quantized coefficients may be inversely quantized by the inverse quantization unit 160, and may be inversely transformed by the inverse transform unit 170. The inversely quantized and inversely transformed coefficients may be added to the prediction block by the adder 175, whereby a reconstructed block may be generated.
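
The reconstruction step can be sketched as below: the inversely transformed residual is added to the prediction block, with the result clipped to the valid sample range. The clipping to the bit-depth range is an assumption typical of such codecs, and the names are hypothetical:

```python
def reconstruct(residual, prediction, bit_depth=8):
    """Reconstructed block = inverse-transformed residual + prediction,
    clipped sample-by-sample to the valid range for the bit depth."""
    hi = (1 << bit_depth) - 1
    return [[max(0, min(hi, r + p)) for r, p in zip(rrow, prow)]
            for rrow, prow in zip(residual, prediction)]

# a 2x2 example; the 300 and -10 residuals show the clipping at work
rec = reconstruct([[5, -4], [300, -10]], [[100, 100], [100, 5]])
```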

The reconstructed block may pass through the filter unit 180. The filter unit 180 may apply at least one of a deblocking filter, a Sample Adaptive Offset (SAO) filter, and an Adaptive Loop Filter (ALF) to the reconstructed block or the reconstructed picture. The filter unit 180 may also be referred to as an adaptive in-loop filter.

The deblocking filter may remove block distortion occurring at the boundaries between blocks. The SAO filter may add a suitable offset value to a pixel value so as to compensate for a coding error. The ALF may perform filtering based on a comparison between the reconstructed image and the original image. The reconstructed block that has passed through the filter unit 180 may be stored in the reference picture buffer 190.
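
The idea of SAO adding an offset to compensate for coding error can be sketched with a band-offset variant: samples are classified into bands by intensity, and a signaled offset is added per band. This is a simplified illustration under assumed parameters (32 bands for 8-bit samples), not the embodiment's filter:

```python
def sao_band_offset(samples, offsets, bit_depth=8):
    """Band-offset sketch: split the sample range into 32 equal bands and
    add that band's offset to each reconstructed sample, with clipping."""
    shift = bit_depth - 5          # for 8-bit samples: band = sample >> 3
    hi = (1 << bit_depth) - 1
    return [max(0, min(hi, s + offsets.get(s >> shift, 0))) for s in samples]

# offsets signaled for bands 1 and 2; other bands are left unchanged
out = sao_band_offset([10, 17, 200], {1: 2, 2: -1})
```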

FIG. 2 is a block diagram illustrating the configuration of a decoding apparatus according to an embodiment to which the present invention is applied.

The decoding apparatus 200 may be a video decoding apparatus or an image decoding apparatus.

Referring to FIG. 2, the decoding apparatus 200 may include an entropy decoding unit 210, an inverse quantization unit 220, an inverse transform unit 230, an intra prediction unit 240, an inter prediction unit 250, an adder 255, a filter unit 260, and a reference picture buffer 270.

The decoding apparatus 200 may receive the bitstream output from the encoding apparatus 100. The decoding apparatus 200 may perform decoding on the bitstream in an intra mode and/or an inter mode. Further, the decoding apparatus 200 may generate a reconstructed image via decoding, and may output the reconstructed image.

For example, switching to the intra mode or the inter mode, depending on the prediction mode used for decoding, may be performed by a switch. When the prediction mode used for decoding is the intra mode, the switch may be switched to intra. When the prediction mode used for decoding is the inter mode, the switch may be switched to inter.

The decoding apparatus 200 may acquire a reconstructed residual block from the input bitstream, and may generate a prediction block. When the reconstructed residual block and the prediction block are acquired, the decoding apparatus 200 may generate a reconstructed block by adding the reconstructed residual block to the prediction block.

The entropy decoding unit 210 may generate symbols by performing entropy decoding on the bitstream based on a probability distribution. The generated symbols may include quantized-coefficient-format symbols. Here, the entropy decoding method may be similar to the above-described entropy encoding method. For example, the entropy decoding method may be the inverse procedure of the above-described entropy encoding method.

The quantized coefficients may be inversely quantized by the inverse quantization unit 220, and may then be inversely transformed by the inverse transform unit 230. As a result of inversely quantizing and inversely transforming the quantized coefficients, a reconstructed residual block may be generated. At this time, the inverse quantization unit 220 may apply a quantization matrix to the quantized coefficients.

When the intra mode is used, the intra prediction unit 240 may generate a prediction block by performing spatial prediction using the pixel values of previously decoded blocks neighboring the current block.

The inter prediction unit 250 may include a motion compensation unit. When the inter mode is used, the motion compensation unit may generate a prediction block by performing motion compensation using a motion vector and a reference image. The reference image may be stored in the reference picture buffer 270.

The reconstructed residual block may be added to the prediction block by the adder 255. The adder 255 may generate a reconstructed block by adding the reconstructed residual block to the prediction block.

The reconstructed block may pass through the filter unit 260. The filter unit 260 may apply at least one of a deblocking filter, an SAO filter, and an ALF to the reconstructed block or the reconstructed picture. The filter unit 260 may output the reconstructed image. The reconstructed image may be stored in the reference picture buffer 270 and may be used for inter prediction.

FIG. 3 is a diagram schematically illustrating the partition structure of an image when the image is encoded and decoded.

In order to efficiently partition an image, a Coding Unit (CU) may be used in encoding and decoding. The term "unit" may refer to the combination of 1) a block including image samples and 2) syntax elements. For example, the "partitioning of a unit" may mean the "partitioning of a block corresponding to the unit".

Referring to FIG. 3, an image 300 may be sequentially partitioned in units of a Largest Coding Unit (LCU), and the partition structure of the image 300 may be determined according to the LCUs. Here, the term "LCU" may be used to have the same meaning as a Coding Tree Unit (CTU).

A partition structure may denote the distribution of Coding Units (CUs) in the LCU 310 for efficiently encoding the image. Such a distribution may be determined depending on whether a single CU is to be partitioned into four CUs. The horizontal size and vertical size of each CU generated by the partitioning may be half the horizontal size and half the vertical size of the CU before being partitioned. Each partitioned CU may be recursively partitioned into four CUs, the horizontal and vertical sizes of which are halved in the same way.

At this time, the partitioning of a CU may be recursively performed up to a predefined depth. Depth information may be information indicating the size of a CU, and may be stored for each CU. For example, the depth of the LCU may be 0, and the depth of a Smallest Coding Unit (SCU) may be a predefined maximum depth. Here, the LCU may be the CU having the maximum size, as described above, and the SCU may be the CU having the minimum size.

Partitioning may start at the LCU 310, and the depth of a CU may be increased by 1 whenever the horizontal and vertical sizes of the CU are halved by partitioning. At each depth, a CU that is not partitioned may have a size of 2Nx2N. When a CU is partitioned, the CU having a size of 2Nx2N may be partitioned into four CUs, each having a size of NxN. The value of N may be halved each time the depth is increased by 1.

Referring to FIG. 3, an LCU having a depth of 0 may have a size of 64x64 pixels, where 0 may be the minimum depth. An SCU having a depth of 3 may have a size of 8x8 pixels, where 3 may be the maximum depth. Here, the CU of 64x64 pixels, which is the LCU, may be represented by a depth of 0; a CU of 32x32 pixels may be represented by a depth of 1; a CU of 16x16 pixels may be represented by a depth of 2; and the CU of 8x8 pixels, which is the SCU, may be represented by a depth of 3.
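
The depth-to-size relationship above can be sketched in one line: each increment of the depth halves both dimensions, so the CU size is the LCU size shifted right by the depth (illustrative helper, names hypothetical):

```python
def cu_size_at_depth(lcu_size, depth):
    """Each quad-tree split halves the width and height of a CU and
    increases its depth by one, so size = lcu_size / 2**depth."""
    return lcu_size >> depth

# depths 0..3 for a 64x64 LCU, ending at the 8x8 SCU
sizes = [cu_size_at_depth(64, d) for d in range(4)]
```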

Further, information about whether a CU is to be partitioned may be represented via the partition information of the CU. The partition information may be one-bit information. All CUs except the SCU may include partition information. For example, when a CU is not to be partitioned, the value of the partition information of the CU may be 0; when a CU is to be partitioned, the value of the partition information of the CU may be 1.

FIG. 4 is a diagram illustrating the forms of a Prediction Unit (PU) that a Coding Unit (CU) can include.

Among CUs partitioned from the LCU, a CU that is no longer partitioned may be divided into one or more Prediction Units (PUs). Such division may also be referred to as partitioning.

A PU may be the basic unit for prediction. A PU may be encoded and decoded in any one of a skip mode, an inter mode, and an intra mode. A PU may be partitioned into various forms depending on the mode.

In the skip mode, no partition may be present in the CU. In the skip mode, a 2Nx2N mode 410, in which the size of the PU is the same as the size of the CU without partitioning, may be supported.

In the inter mode, eight partitioned forms may be supported in the CU. For example, in the inter mode, the 2Nx2N mode 410, a 2NxN mode 415, an Nx2N mode 420, an NxN mode 425, a 2NxnU mode 430, a 2NxnD mode 435, an nLx2N mode 440, and an nRx2N mode 445 may be supported.
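
The eight partitioning modes can be illustrated by listing the (height, width) of each resulting PU. This sketch assumes the conventional interpretation of these mode names, with the asymmetric modes (2NxnU, 2NxnD, nLx2N, nRx2N) splitting one dimension at the quarter point; the helper name and table form are hypothetical:

```python
def pu_partitions(mode, n):
    """Return the list of (height, width) PUs for a 2Nx2N CU under the
    named partitioning mode (asymmetric modes use an N/2-sized part)."""
    two_n, half = 2 * n, n // 2
    table = {
        '2Nx2N': [(two_n, two_n)],
        '2NxN':  [(n, two_n)] * 2,
        'Nx2N':  [(two_n, n)] * 2,
        'NxN':   [(n, n)] * 4,
        '2NxnU': [(half, two_n), (two_n - half, two_n)],
        '2NxnD': [(two_n - half, two_n), (half, two_n)],
        'nLx2N': [(two_n, half), (two_n, two_n - half)],
        'nRx2N': [(two_n, two_n - half), (two_n, half)],
    }
    return table[mode]

# a 32x32 CU (N = 16) in 2NxnU mode: an 8-row PU over a 24-row PU
parts = pu_partitions('2NxnU', 16)
```

In every mode the PUs tile the CU exactly, so their areas always sum to the CU area.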

In the intra mode, the 2Nx2N mode 410 and the NxN mode 425 may be supported.

In the 2Nx2N mode 410, a PU having a size of 2Nx2N may be encoded. The PU having a size of 2Nx2N may denote a PU having the same size as the CU. For example, the PU having a size of 2Nx2N may have a size of 64x64, 32x32, 16x16, or 8x8.

In the NxN mode 425, a PU having a size of NxN may be encoded.

For example, in intra prediction, when the size of a PU is 8x8, four partitioned PUs may be encoded. The size of each partitioned PU may be 4x4.

When a PU is encoded in the intra mode, the PU may be encoded using any one of a plurality of intra prediction modes. For example, High Efficiency Video Coding (HEVC) technology may provide 35 intra prediction modes, and the PU may be encoded in one of the 35 intra prediction modes.

Which one of the 2Nx2N mode 410 and the NxN mode 425 is to be used to encode the PU may be determined based on rate-distortion cost.

The encoding apparatus 100 may perform an encoding operation on a PU having a size of 2Nx2N. Here, the encoding operation may be the operation of encoding the PU in each of the plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the optimal intra prediction mode for the PU having a size of 2Nx2N may be derived. The optimal intra prediction mode may be the intra prediction mode that incurs the minimum rate-distortion cost when the PU having a size of 2Nx2N is encoded, among the plurality of intra prediction modes that can be used by the encoding apparatus 100.

Further, the encoding apparatus 100 may sequentially perform an encoding operation on each of the PUs obtained from NxN partitioning. Here, the encoding operation may be the operation of encoding the PU in each of the plurality of intra prediction modes that can be used by the encoding apparatus 100. Through the encoding operation, the optimal intra prediction mode for the PU having a size of NxN may be derived. The optimal intra prediction mode may be the intra prediction mode that incurs the minimum rate-distortion cost when the PU having a size of NxN is encoded, among the plurality of intra prediction modes that can be used by the encoding apparatus 100.

The encoding apparatus 100 may determine which of the 2Nx2N-sized PU and the NxN-sized PUs is to be encoded, based on a comparison of the rate-distortion cost of the 2Nx2N-sized PU with the rate-distortion costs of the NxN-sized PUs.

FIG. 5 is a diagram illustrating the forms of a Transform Unit (TU) that can be included in a Coding Unit (CU).

A Transform Unit (TU) may be the basic unit used in a CU for procedures such as transform, quantization, inverse transform, inverse quantization, entropy encoding, and entropy decoding. A TU may have a square or rectangular shape.

Among CUs partitioned from the LCU, a CU that is no longer partitioned into CUs may be partitioned into one or more TUs. Here, the partition structure of the TUs may be a quad-tree structure. For example, as shown in FIG. 5, a single CU 510 may be partitioned one or more times according to the quad-tree structure. Through such partitioning, a single CU 510 may be composed of TUs having various sizes.

FIG. 6 is a diagram for explaining an embodiment of an intra prediction procedure.

In the graph of FIG. 6, the arrows radiating from the center may indicate the prediction directions of intra prediction modes. Further, the numbers appearing near the arrows may indicate examples of the mode values assigned to the intra prediction modes or to the prediction directions of the intra prediction modes.

Intra encoding and/or decoding may be performed using reference samples of units neighboring a target unit. The neighboring units may be neighboring reconstructed units. For example, intra encoding and/or decoding may be performed using the values of reference samples included in the neighboring reconstructed units, or using the coding parameters of those units.

The encoding apparatus 100 and/or the decoding apparatus 200 may generate a prediction block by performing intra prediction on the target unit based on information about samples in the current picture. When performing intra prediction, the encoding apparatus 100 and/or the decoding apparatus 200 may perform directional prediction and/or non-directional prediction based on at least one reconstructed reference sample.

A prediction block may denote a block generated as a result of performing intra prediction. A prediction block may correspond to at least one of a CU, a PU, and a TU.

The unit of a prediction block may have a size corresponding to at least one of a CU, a PU, and a TU. A prediction block may have a square shape with a size of 2Nx2N or NxN. The NxN sizes may include 4x4, 8x8, 16x16, 32x32, 64x64, etc.

Alternatively, a prediction block may be a square block having a size of 2x2, 4x4, 16x16, 32x32, 64x64, or the like, or a rectangular block having a size of 2x8, 4x8, 2x16, 4x16, 8x16, or the like.

Intra prediction may be performed in accordance with the intra prediction mode for the target unit. The number of intra prediction modes that the target unit can have may be a fixed, predefined value, or may be a value determined differently according to the properties of the prediction block. For example, the properties of the prediction block may include the size of the prediction block, the type of the prediction block, etc.

For example, the number of intra prediction modes may be fixed at 35 regardless of the size of the prediction unit. Alternatively, the number of intra prediction modes may be, for example, 3, 5, 9, 17, 34, 35, or 36.

The intra prediction modes may include two non-directional modes and 33 directional modes, as illustrated in FIG. 6. The two non-directional modes may include a DC mode and a planar mode.

For example, in a vertical mode having a mode value of 26, prediction may be performed in the vertical direction based on the pixel values of reference samples. For example, in a horizontal mode having a mode value of 10, prediction may be performed in the horizontal direction based on the pixel values of reference samples.

Even in directional modes other than the above-described modes, the encoding apparatus 100 and the decoding apparatus 200 may perform intra prediction on the target unit using reference samples according to the angle corresponding to the directional mode.

Intra prediction modes located on the right side of the vertical mode may be referred to as vertical-right modes. Intra prediction modes located below the horizontal mode may be referred to as below-horizontal modes. For example, in FIG. 6, the intra prediction modes having mode values of 27, 28, 29, 30, 31, 32, 33, and 34 may be vertical-right modes 613. The intra prediction modes having mode values of 2, 3, 4, 5, 6, 7, 8, and 9 may be below-horizontal modes 616.

The non-directional modes may include a DC mode and a planar mode. For example, the mode value of the DC mode may be 1, and the mode value of the planar mode may be 0.

The directional modes may include angular modes. Among the plurality of intra prediction modes, the modes other than the DC mode and the planar mode may be directional modes.

In the DC mode, a prediction block may be generated based on the average of the pixel values of a plurality of reference samples. For example, the pixel values of the prediction block may be determined based on the average of the pixel values of the plurality of reference samples.
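
DC prediction as described above can be sketched in a few lines: average the reconstructed left and top reference samples and fill the whole prediction block with that value. This assumes, for illustration, a rounded integer mean over the left and top neighbors; boundary filtering refinements are omitted:

```python
def dc_prediction(left_refs, top_refs, size):
    """DC intra prediction: every sample of the prediction block is the
    (rounded) average of the left and top reference samples."""
    refs = left_refs + top_refs
    dc = (sum(refs) + len(refs) // 2) // len(refs)  # rounded integer mean
    return [[dc] * size for _ in range(size)]

pred = dc_prediction([10, 12, 14, 16], [20, 22, 24, 26], 4)
```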

The number of intra prediction modes described above and the mode value of each intra prediction mode are merely examples, and may be defined differently depending on implementation and/or requirements.

The number of intra prediction modes may differ depending on the type of color component. For example, the number of prediction modes may differ depending on whether the color component is a luminance (luma) signal or a chrominance (chroma) signal.

FIG. 7 is a diagram for explaining the locations of reference samples used in an intra prediction procedure.

FIG. 7 illustrates the locations of the reference samples used for intra prediction of a target unit. Referring to FIG. 7, the reconstructed reference samples used for intra prediction of the current block may include, for example, below-left reference samples 731, left reference samples 733, an above-left corner reference sample 735, above reference samples 737, and above-right reference samples 739.

For example, the left reference samples 733 may denote reconstructed reference pixels neighboring the left side of the target unit. The above reference samples 737 may denote reconstructed reference pixels neighboring the top of the target unit. The above-left corner reference sample 735 may denote a reconstructed reference pixel located at the above-left corner of the target unit. Further, the below-left reference samples 731 may denote reference samples located below the left sample line, among the samples located on the same line as the left sample line composed of the left reference samples 733. The above-right reference samples 739 may denote reference samples located to the right of the above sample line, among the samples located on the same line as the above sample line composed of the above reference samples 737.

When the size of the target unit is NxN, the numbers of below-left reference samples 731, left reference samples 733, above reference samples 737, and above-right reference samples 739 may each be N.

Through intra prediction for the target unit, a prediction block may be generated. Generating the prediction block may involve determining the values of the pixels of the prediction block. The target unit and the prediction block may have the same size.

The reference samples used for intra prediction of the target unit may vary according to the intra prediction mode of the target unit. The direction of the intra prediction mode may indicate a dependence relationship between the reference samples and the pixels of the prediction block. For example, the value of a specified reference sample may be used as the value of one or more specified pixels of the prediction block. In this case, the specified reference sample and the specified one or more pixels of the prediction block may be a sample and pixels located on a straight line in the direction of the intra prediction mode. In other words, the value of the specified reference sample may be copied as the value of a pixel located in a direction opposite the direction of the intra prediction mode. Alternatively, the value of a pixel of the prediction block may be the value of the reference sample located in the direction of the intra prediction mode with respect to the location of that pixel.

For example, when the intra prediction mode of the target unit is the vertical mode having a mode value of 26, the above reference samples 737 may be used for intra prediction. When the intra prediction mode is the vertical mode, the value of a pixel of the prediction block may be the value of the reference pixel located vertically above the location of that pixel. Therefore, the above reference samples 737, which neighbor the top of the target unit, may be used for intra prediction. Further, the values of the pixels of each row of the prediction block may be identical to the values of the above reference samples 737.
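
The vertical-mode copy described above can be sketched directly: every row of the prediction block is a copy of the row of above reference samples (illustrative helper, names hypothetical):

```python
def vertical_prediction(top_refs, height):
    """Vertical intra prediction: each column of the prediction block
    copies the above reference sample directly over it, so every row
    is identical to the above reference row."""
    return [list(top_refs) for _ in range(height)]

pred = vertical_prediction([50, 60, 70, 80], 4)
```

The horizontal mode discussed next is the transpose of this: each row copies the left reference sample on its line.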

For example, when the intra prediction mode of the current block is the horizontal mode having a mode value of 10, the left reference samples 733 may be used for intra prediction. When the intra prediction mode is the horizontal mode, the value of a pixel of the prediction block may be the value of the reference pixel located horizontally to the left of that pixel. Therefore, the left reference samples 733, which neighbor the left side of the target unit, may be used for intra prediction. Further, the values of the pixels of each column of the prediction block may be identical to the values of the left reference samples 733.

For example, when the mode value of the intra prediction mode of the current block is 18, at least some of the left reference samples 733, the above-left corner reference sample 735, and at least some of the above reference samples 737 may be used for intra prediction. When the mode value of the intra prediction mode is 18, the value of a pixel of the prediction block may be the value of the reference pixel located diagonally above and to the left of that pixel.

Further, when an intra prediction mode having a mode value of 27, 28, 29, 30, 31, 32, 33, or 34 is used, at least some of the above-right reference samples 739 may be used for intra prediction.

When an intra prediction mode having a mode value of 2, 3, 4, 5, 6, 7, 8, or 9 is used, at least some of the below-left reference samples 731 may be used for intra prediction.

Further, when an intra prediction mode having a mode value ranging from 11 to 25 is used, the above-left corner reference sample 735 may be used for intra prediction.

The number of reference samples used to determine the pixel value of a single pixel of the prediction block may be one, or may be two or more.

As described above, the pixel value of a pixel of the prediction block may be determined according to the location of that pixel and the location of the reference sample indicated by the direction of the intra prediction mode. When the location of the reference sample indicated by the location of the pixel and the direction of the intra prediction mode is an integer position, the value of the reference sample at that integer position may be used to determine the pixel value of the pixel of the prediction block.

When the location of the reference sample indicated by the location of the pixel and the direction of the intra prediction mode is not an integer position, an interpolated reference sample may be generated based on the two reference samples closest to that location. The value of the interpolated reference sample may be used to determine the pixel value of the pixel of the prediction block. In other words, when the location of the reference sample indicated by the location of the pixel of the prediction block and the direction of the intra prediction mode falls between two reference samples, an interpolated value based on the values of the two samples may be generated.
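
The two-sample interpolation above can be sketched with linear weighting between the two nearest reference samples. The 1/32-sample position accuracy and the rounding offset are assumptions chosen for illustration (they are common in angular intra prediction), not values taken from the embodiment:

```python
def interpolated_reference(refs, pos_32nds):
    """Linearly interpolate a reference value at a fractional position
    given in 1/32-sample units: weight the two nearest samples by
    their distance to the target position, with rounding."""
    idx, frac = pos_32nds >> 5, pos_32nds & 31
    return ((32 - frac) * refs[idx] + frac * refs[idx + 1] + 16) >> 5

# position 48/32 = 1.5 lies halfway between refs[1] and refs[2]
val = interpolated_reference([100, 132, 164], 48)
```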

The prediction block generated via prediction may not be identical to the original target unit. In other words, there may be a prediction error, which is the difference between the target unit and the prediction block, and prediction errors may also be present between the pixels of the target unit and the pixels of the prediction block. For example, in the case of intra prediction, the longer the distance between a pixel of the prediction block and the reference samples, the larger the prediction error may be. Such prediction errors may result in discontinuities between the generated prediction block and neighboring blocks.

In order to reduce the prediction error, filtering of the prediction block may be used. The filtering may be configured to adaptively apply a filter to an area of the prediction block that is considered to have a large prediction error. For example, the area considered to have a large prediction error may be the boundary of the prediction block. Further, the area of the prediction block that is considered to have a large prediction error may differ according to the intra prediction mode, and the characteristics of the filter may also differ accordingly.

FIG. 8 is a diagram for explaining an embodiment of an inter prediction procedure.

The rectangles shown in FIG. 8 may represent images (or pictures), and the arrows in FIG. 8 may indicate prediction directions. That is, each image may be encoded and/or decoded according to the prediction direction.

Images (or pictures) may be classified into an I picture (Intra Picture), a P picture (Uni-prediction Picture), and a B picture (Bi-prediction Picture) according to the encoding type. Each picture may be encoded according to its encoding type.

When the target image to be encoded is an I picture, the image itself may be encoded without inter prediction. When the target image to be encoded is a P picture, the image may be encoded via inter prediction that uses a reference picture only in the forward direction. When the target image to be encoded is a B picture, the image may be encoded via inter prediction that uses reference pictures in both the forward and backward directions, or via inter prediction that uses a reference picture in either the forward or backward direction.

P pictures and B pictures, which are encoded and/or decoded using reference pictures, may be regarded as images for which inter prediction is used.

Hereinafter, inter prediction in the inter mode according to an embodiment will be described in detail.

In the inter mode, the encoding apparatus 100 and the decoding apparatus 200 may perform prediction and/or motion compensation on an encoding target unit or a decoding target unit. For example, the encoding apparatus 100 or the decoding apparatus 200 may perform prediction and/or motion compensation by using the motion information of reconstructed neighboring units as the motion information of the encoding target unit or the decoding target unit. Here, the encoding target unit or the decoding target unit may denote a prediction unit and/or a prediction unit partition.

Inter prediction may be performed using a reference picture and motion information. In addition, inter prediction may be used for the above-described skip mode.

A reference picture may be at least one of the pictures preceding and following the current picture. At this time, inter prediction may predict a block of the current picture based on the reference picture. Here, the reference picture may refer to an image used for the prediction of the block.

In this case, an area in the reference picture may be specified using a reference picture index (refIdx) indicating the reference picture, a motion vector, which will be described later, and the like.

Inter prediction may select a reference picture and a reference block corresponding to the current block within the reference picture, and may generate a prediction block for the current block using the selected reference block. The current block may be the block that is the current encoding or decoding target among the blocks of the current picture.
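The selection of a reference block and the generation of a prediction block from it can be sketched as follows. This is a minimal illustration of translational motion compensation with an integer motion vector; sub-pel interpolation and boundary padding, which a real codec needs, are omitted, and the picture contents are hypothetical.

```python
import numpy as np

def motion_compensate(ref_picture, x, y, mv, block_size):
    """Copy the reference block pointed to by an integer motion vector.

    The prediction block for the current block at (x, y) is the block in
    the reference picture displaced by the motion vector (mv_x, mv_y).
    """
    mv_x, mv_y = mv
    ry, rx = y + mv_y, x + mv_x
    return ref_picture[ry:ry + block_size, rx:rx + block_size].copy()

# Hypothetical 8x8 reference picture with a distinctive 2x2 patch.
ref = np.zeros((8, 8), dtype=np.int16)
ref[2:4, 3:5] = 7

# Predict the 2x2 current block at (x=1, y=1) with motion vector (+2, +1).
pred = motion_compensate(ref, x=1, y=1, mv=(2, 1), block_size=2)
```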

The motion information may be derived during inter prediction by each of the encoding apparatus 100 and the decoding apparatus 200. In addition, the derived motion information may be used to perform inter prediction.

In this case, the encoding apparatus 100 and the decoding apparatus 200 may improve coding efficiency and/or decoding efficiency by using the motion information of a reconstructed neighboring block and/or the motion information of a collocated block (col block). The col block may be a block corresponding to the current block within a previously reconstructed collocated picture (col picture).

The reconstructed neighboring block may be a block that is in the current picture and has already been encoded and/or decoded and reconstructed. The reconstructed block may be a block neighboring the current block and/or a block located at an outer corner of the current block. Here, a block located at an outer corner of the current block may be a block vertically adjacent to a block horizontally adjacent to the current block, or a block horizontally adjacent to a block vertically adjacent to the current block.

For example, the reconstructed neighboring unit may be a unit on the left of the target unit, a unit above the target unit, a unit at the lower-left corner of the target unit, a unit at the upper-right corner of the target unit, or a unit at the upper-left corner of the target unit.

Each of the encoding apparatus 100 and the decoding apparatus 200 may determine the block existing at the position in the col picture that spatially corresponds to the current block, and may determine a predefined relative position based on the determined block. The predefined relative position may be a position inside and/or outside the block existing at the position spatially corresponding to the current block. In addition, each of the encoding apparatus 100 and the decoding apparatus 200 may derive the col block based on the determined predefined relative position. Here, the col picture may be one picture from among the at least one reference picture included in the reference picture list.

The block in the reference picture may exist at a position in the reconstructed reference picture that spatially corresponds to the position of the current block. That is, the position of the current block in the current picture and the position of the block in the reference picture may correspond to each other. Below, the motion information of a block included in a reference picture may be referred to as temporal motion information.

The method of deriving the motion information may vary depending on the prediction mode of the current block. For instance, prediction modes applied for inter prediction may include Advanced Motion Vector Predictor (AMVP) and merge.

For instance, when AMVP is applied as the prediction mode, each of the encoding apparatus 100 and the decoding apparatus 200 may generate a predicted motion vector candidate list using the motion vector of a reconstructed neighboring block and/or the motion vector of the col block. The motion vector of the reconstructed neighboring block and/or the motion vector of the col block may be used as predicted motion vector candidates.

The bitstream generated by the encoding apparatus 100 may include a predicted motion vector index. The predicted motion vector index may indicate the optimal predicted motion vector selected from among the predicted motion vector candidates included in the predicted motion vector candidate list. The predicted motion vector index may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 via the bitstream.

The decoding apparatus 200 may select the predicted motion vector of the current block from among the predicted motion vector candidates included in the predicted motion vector candidate list using the predicted motion vector index.

The encoding apparatus 100 may calculate a motion vector difference (MVD) between the motion vector of the current block and the predicted motion vector, and may encode the MVD. The bitstream may include the encoded MVD. The MVD may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 via the bitstream. At this time, the decoding apparatus 200 may decode the received MVD. The decoding apparatus 200 may derive the motion vector of the current block from the sum of the decoded MVD and the predicted motion vector.

The bitstream may include a reference picture index indicating the reference picture. The reference picture index may be transmitted from the encoding apparatus 100 to the decoding apparatus 200 via the bitstream. The decoding apparatus 200 may predict the motion vector of the current block using the motion information of the neighboring blocks, and may derive the motion vector of the current block using the predicted motion vector and the motion vector difference. The decoding apparatus 200 may generate the prediction block for the current block based on the derived motion vector and the reference picture index information.
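The AMVP signaling described above can be sketched in a few lines: the encoder picks the closest candidate predictor and signals its index together with the MVD; the decoder recovers the motion vector as the sum of the predictor and the MVD. The candidate values and the cost measure below are hypothetical stand-ins.

```python
def amvp_encode(mv, candidates):
    """Encoder side: pick the candidate predictor closest to the actual
    motion vector, then signal the candidate index and the MVD."""
    costs = [abs(mv[0] - c[0]) + abs(mv[1] - c[1]) for c in candidates]
    idx = costs.index(min(costs))
    mvp = candidates[idx]
    mvd = (mv[0] - mvp[0], mv[1] - mvp[1])
    return idx, mvd

def amvp_decode(idx, mvd, candidates):
    """Decoder side: mv = predicted motion vector + MVD."""
    mvp = candidates[idx]
    return (mvp[0] + mvd[0], mvp[1] + mvd[1])

# Candidates from a reconstructed neighboring block and a col block.
cands = [(4, -2), (1, 3)]
idx, mvd = amvp_encode((5, -1), cands)
mv = amvp_decode(idx, mvd, cands)
```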

Since the motion information of reconstructed neighboring units can be used for the encoding target unit or decoding target unit, in certain inter-prediction modes the encoding apparatus 100 may not separately encode the motion information of the target unit. If the motion information of the target unit is not encoded, the amount of bits transmitted to the decoding apparatus 200 can be reduced, and coding efficiency can be improved. For example, inter-prediction modes in which the motion information of the target unit is not encoded may include the skip mode and/or the merge mode. In this case, the encoding apparatus 100 and the decoding apparatus 200 may use an identifier and/or an index indicating which reconstructed neighboring unit's motion information is used as the motion information of the target unit.

Another example of a method of deriving motion information is merge. Merge may denote merging of the motion of a plurality of blocks. Merge may mean applying the motion information of one block to another block as well. If merge is applied, each of the encoding apparatus 100 and the decoding apparatus 200 may generate a merge candidate list using the motion information of a reconstructed neighboring block and/or the motion information of the col block. The motion information may include at least one of 1) a motion vector, 2) a reference image index, and 3) a prediction direction. The prediction direction may be unidirectional or bidirectional.

In this case, merge may be applied in units of CUs or PUs. If merge is performed in units of CUs or PUs, the encoding apparatus 100 may transmit predefined information to the decoding apparatus 200 via the bitstream. The bitstream may include the predefined information. The predefined information may include 1) block partition information indicating whether merge is performed for each block partition, and 2) information about which of the neighboring blocks adjacent to the current block the current block is merged with. For example, the neighboring blocks of the current block may include the left adjacent block of the current block, the upper adjacent block of the current block, and a temporally adjacent block of the current block.

The merge candidate list may denote a list in which pieces of motion information are stored. In addition, the merge candidate list may be generated before merge is performed. The motion information stored in the merge candidate list may be 1) the motion information of a neighboring block adjacent to the current block, or 2) the motion information of the block corresponding to the current block in a reference image (the collocated block). In addition, the motion information stored in the merge candidate list may be new motion information generated by combining pieces of motion information already present in the merge candidate list.
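The construction of the merge candidate list can be sketched as follows. This is a simplified illustration: each candidate is a (motion vector, reference index, prediction direction) tuple taken from spatial neighbors first and then the collocated block, with duplicates pruned; real codecs also append combined and zero-motion candidates, which are omitted here, and the candidate values are hypothetical.

```python
def build_merge_candidates(spatial, temporal, max_candidates=5):
    """Build a merge candidate list from spatial neighbors and the
    temporal (collocated) block, pruning duplicate motion information."""
    candidates = []
    for cand in spatial + temporal:
        if cand not in candidates:
            candidates.append(cand)
        if len(candidates) == max_candidates:
            break
    return candidates

# (motion vector, reference image index, prediction direction)
spatial = [((4, -2), 0, "uni"), ((4, -2), 0, "uni"), ((1, 3), 1, "bi")]
temporal = [((0, 0), 0, "uni")]
merge_list = build_merge_candidates(spatial, temporal)
```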

The skip mode may be a mode in which the motion information of a neighboring block is applied to the current block as-is. The skip mode may be one of the modes used for inter prediction. When the skip mode is used, the encoding apparatus 100 may transmit to the decoding apparatus 200 via the bitstream only the information about which block's motion information is used as the motion information of the current block. The encoding apparatus 100 may not transmit other information to the decoding apparatus 200. For example, the other information may be syntax information. The syntax information may include motion vector difference information.

FIG. 9 shows the prediction error for an original image according to an example of prediction in the vertical direction.

As described above, when encoding of an original image is performed by intra prediction, operations for all possible PUs and prediction modes may be carried out, and then one PU and one prediction mode may be selected. When the prediction of the original image is performed according to the selected PU and prediction mode, a tendency is observed in which the prediction error is distorted along the direction of the selected prediction mode.

Thus, after a first intra prediction is performed on the PU using one prediction mode selected from among the plurality of prediction modes, a second intra prediction may be carried out. Here, the second intra prediction may be an intra prediction limited to an area of particular pixels along the direction of the prediction mode of the first intra prediction. That is, the prediction mode of the second intra prediction may be the same as the prediction mode of the first intra prediction. On the other hand, while the first intra prediction is applied to the whole of the PU, the second intra prediction may be applied only to a part of the PU.

The encoding apparatus may determine the part of the PU to which the second intra prediction is applied. For example, the encoding apparatus may calculate a rate-distortion cost for each of a plurality of candidate regions, assuming that the second intra prediction is applied to the candidate region. The encoding apparatus may determine the candidate region with the smallest rate-distortion cost as the part of the PU to which the second intra prediction is applied.

A PU may include a plurality of pixels. The plurality of candidate regions may be parts of the plurality of pixels in the PU, determined in a particular way. For example, the plurality of candidate regions may be determined by the prediction direction of the first intra prediction.
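The encoder's choice among candidate regions reduces to a minimum-cost selection. The sketch below illustrates this with hypothetical candidate regions (column ranges of an 8x8 PU, matching a vertical first-prediction direction) and made-up rate-distortion costs; the actual cost computation involves encoding each candidate and measuring rate and distortion.

```python
def select_region(candidate_regions, rd_cost):
    """Pick the candidate region with the smallest rate-distortion cost,
    as the encoding apparatus described above does."""
    return min(candidate_regions, key=rd_cost)

# Hypothetical candidate regions and rate-distortion costs.
regions = ["cols0-3", "cols2-5", "cols4-7"]
costs = {"cols0-3": 12.5, "cols2-5": 9.0, "cols4-7": 10.75}
best = select_region(regions, costs.get)
```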

By applying both the first intra prediction and the second intra prediction to the PU, the prediction error may be further reduced compared to the case where only the first intra prediction is applied.

The following embodiments present a method and an apparatus for improving the performance of intra prediction and reducing the prediction error by using the above-described tendency more efficiently.

FIG. 10 is a structural diagram of an encoding apparatus according to an embodiment.

The encoding apparatus 1000 may be a general-purpose computer system that performs encoding.

As shown in FIG. 10, the encoding apparatus 1000 may include at least one processor 1010, a memory 1030, a user interface (UI) input device 1050, a UI output device 1060, and storage 1040, which communicate with each other via a bus 1090. The encoding apparatus 1000 may further include a communication unit 1020 connected to a network 1099. The processor 1010 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1030 or the storage 1040. The memory 1030 and the storage 1040 may be any of various volatile or non-volatile storage media. For example, the memory may include at least one of ROM 1031 and RAM 1032.

The processor 1010 may include the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 of the encoding apparatus 100.

The storage medium may store at least one module for the operation of the encoding apparatus 1000. The memory 1030 may store at least one module, and the at least one module may be configured to be executed by the at least one processor 1010.

According to one embodiment, at least some of the inter-prediction unit 110, the intra-prediction unit 120, the switch 115, the subtractor 125, the transform unit 130, the quantization unit 140, the entropy encoding unit 150, the inverse quantization unit 160, the inverse transform unit 170, the adder 175, the filter unit 180, and the reference picture buffer 190 of the encoding apparatus 100 may be program modules, and may communicate with an external device or system. The program modules may be included in the encoding apparatus 1000 in the form of an operating system, application program modules, and other program modules.

The program modules may be physically stored on various well-known storage devices. In addition, at least some of the program modules may be stored on a remote storage device capable of communicating with the encoding apparatus 1000.

The program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations according to an embodiment, or for implementing abstract data types according to an embodiment.

The program modules may be composed of instructions or code to be executed by the processor 1010.

Functions related to communication of data or information of the encoding apparatus 1000 may be performed via the communication unit 1020.

FIG. 11 is a flowchart of an encoding method according to an embodiment.

The encoding method of the embodiment may be carried out by the encoding apparatus 100 or the encoding apparatus 1000.

In step 1110, the prediction unit may generate a residual signal of the current block based on the current block, a first prediction, and a second prediction.

The prediction unit may include the inter-prediction unit 110, the intra-prediction unit 120, and the subtractor 125.

For example, the residual signal may denote a residual block.

For example, the residual block may be the difference between the current block and the prediction block. Alternatively, the residual block may be generated based on the difference between the current block and the prediction block.

For example, the prediction block may be the sum of the first prediction block and the second prediction block. The residual block may be the result of subtracting the sum of the first prediction block and the second prediction block from the current block.

For example, the second prediction may be a prediction for the prediction error of the first prediction.

The first prediction block may be a block produced by the first prediction. The second prediction block may be a block produced by the second prediction. When the second prediction is not used, the second prediction block may be a null block. A null block may be a block in which the values of all the pixels are "0".

The prediction unit may generate the residual block of the current block by performing inter prediction or intra prediction.

In one embodiment, the prediction unit may generate the second prediction block based on the current block and/or the first prediction block.

In one embodiment, the prediction unit may perform the first prediction for the current block, and may generate the residual signal of the current block by performing the second prediction on the first prediction error that results from the first prediction. Here, the first prediction error may be the difference between the current block and the first prediction signal. The residual signal may be the difference between the first prediction error and the second prediction signal.
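The relationship between the two prediction stages and the residual can be written as residual = (current - pred1) - pred2, which equals current - (pred1 + pred2). The following sketch illustrates this with hypothetical 2x2 blocks:

```python
import numpy as np

def two_stage_residual(current, pred1, pred2):
    """Residual of the two-stage prediction described above: the second
    prediction predicts the first prediction error, so
    residual = (current - pred1) - pred2 = current - (pred1 + pred2)."""
    first_error = current - pred1          # first prediction error
    return first_error - pred2             # residual signal

cur = np.array([[10, 12], [14, 16]], dtype=np.int16)
p1 = np.array([[9, 11], [13, 15]], dtype=np.int16)   # first prediction
p2 = np.array([[1, 0], [1, 0]], dtype=np.int16)      # predicts the error
res = two_stage_residual(cur, p1, p2)
```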

In one embodiment, the first prediction and the second prediction may be different types of prediction.

In one embodiment, the first prediction and the second prediction may be predictions of the same type. For example, each of the first prediction and the second prediction may be an intra prediction. If the first prediction is an intra prediction, the second prediction may also be set to intra prediction.

In one embodiment, the prediction direction of the first prediction and the prediction direction of the second prediction may be the same. The prediction direction of the second prediction may be set equal to the prediction direction of the first prediction.

For example, in one embodiment, each of the first prediction and the second prediction may be an inter prediction.

In one embodiment, the block that is the target of the first prediction and the block that is the target of the second prediction may be different from each other. Here, "the block that is the target of prediction" may denote the block for which the prediction generates a prediction block.

For example, the type of the block that is the target of the first prediction and the type of the block that is the target of the second prediction may be different from each other. The type of a block may be an original block, a luma block, a chroma block, a depth block, or a residual block.

For example, the target block of the first prediction may be the current block or the original block.

For example, the first prediction error produced by the first prediction may denote a first prediction error block. The first prediction error signal may be a first residual signal. In addition, the first residual signal may denote a first residual block. The block that is the target of the second prediction may be the first prediction error block or the first residual block. In this case, the target of the second prediction is the first residual signal produced by the first prediction, and the result of the second prediction may be a second prediction error generated by the second prediction. The second prediction error signal may be a second residual signal. The second residual signal may denote a second residual block.

A reference block may be used for prediction. In one embodiment, the first reference block of the first prediction and the second reference block of the second prediction may be different from each other. The reference blocks may differ from each other in type and/or location.

In one embodiment, the position of the first reference block of the first prediction and the position of the second reference block of the second prediction may be different from each other. Here, the position of the first reference block may be a position relative to the block that is the target of the first prediction. The position of the second reference block may be a position relative to the block that is the target of the second prediction.

In one embodiment, there may be a plurality of first reference blocks for the first prediction, and there may be a plurality of second reference blocks for the second prediction. At least some of the plurality of first reference blocks of the first prediction and at least some of the plurality of second reference blocks of the second prediction may be different from each other.

Among the positions of the plurality of first reference blocks of the first prediction and the positions of the plurality of second reference blocks of the second prediction, only the position of at least one block may be shared.

In one embodiment, the type of the first reference block of the first prediction and the type of the second reference block of the second prediction may be different from each other. The type of a block may be a reconstructed block, a reconstructed luma block, a reconstructed chroma block, a reconstructed depth block, a reconstructed first residual block, or a reconstructed second residual block.

For example, the first reference block of the first prediction may be a reconstructed block, and the second reference block of the second prediction may be a reconstructed first residual block. The first reference sample of the first prediction may be a pixel of the reconstructed block. The second reference sample of the second prediction may be a pixel of the reconstructed first residual block.

For example, the first reference block of the first prediction may be a reconstructed neighboring block adjacent to the current block. The second reference block of the second prediction may be a reconstructed first residual block neighboring the current block.

The reconstructed first residual block neighboring the current block may be obtained by adding the second prediction block of a reconstructed neighboring block to the reconstructed residual block of that reconstructed neighboring block. Alternatively, the reconstructed first residual block neighboring the current block may be the difference between the reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.

For example, when the target of the first prediction is the current block, a reconstructed block in the vicinity of the current block may be used as a reference block for the prediction of the current block. When the target of the second prediction is the first residual block, a reconstructed first residual block surrounding the current block or the first residual block may be used as a second reference block for the prediction of the first residual block.

In one embodiment, the region of the first prediction and the region of the second prediction may be different from each other. Here, the "region of the prediction" may denote the region for which predicted values are generated within the block that is the target of the prediction. Alternatively, the "region of the prediction" may denote the region, within the prediction block corresponding to the block that is the target of the prediction, to which the predicted values generated by the prediction are assigned.

For example, the first prediction may be a prediction for the whole area of the current block, while the second prediction may be a prediction for only a portion of that area. At least a portion of the area of the current block may be excluded from the second prediction. The block that is the target of the first prediction may be a rectangular region. The second prediction may be performed for only some of the area of the rectangle, and at least a portion of the area of the rectangle may not be the target of the second prediction.

Even if the block that is the target of the first prediction and the block that is the target of the second prediction are the same, the region of the first prediction may be the whole of the block that is the target of the first prediction, while the region of the second prediction may be a part of the block that is the target of the second prediction. Alternatively, the second prediction may be skipped for at least part of the block that is the target of the second prediction. That is, the first prediction may be performed for all of the block that is the target of the first prediction, whereas the second prediction may be performed only for a selected part of the block that is the target of the second prediction.

For example, a prediction value determined by the first prediction may be assigned only to the pixels within the region of the first prediction in the block that is the target of the first prediction. The pixels outside the region of the first prediction in that block may not be assigned a prediction value by the first prediction. Alternatively, the pixels outside the region of the first prediction may be assigned a predefined value. The predefined value may be "0".

For example, a prediction value determined by the second prediction may be assigned only to the pixels within the region of the second prediction in the block that is the target of the second prediction. The pixels outside the region of the second prediction in that block may not be assigned a prediction value by the second prediction. Alternatively, the pixels outside the region of the second prediction may be assigned a predefined value. For example, the predefined value may be "0".

For example, when the size of the block that is the target of the first prediction is NxN, a prediction value determined by the first prediction may be assigned to each of the NxN pixels. That is, a prediction value determined by the first prediction may be assigned to all of the pixels in the block that is the target of the first prediction.

For example, when the size of the block that is the target of the second prediction is NxN, one or more of the NxN pixels may not be assigned a prediction value determined by the second prediction. Alternatively, when the size of the block that is the target of the second prediction is NxN, one or more of the NxN pixels may be assigned a predefined value. For example, the predefined value may be "0".
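The region-restricted second prediction can be sketched as a masked assignment: pixels inside the region receive the second prediction's values, and pixels outside it receive the predefined value "0". The region choice and the prediction values below are hypothetical.

```python
import numpy as np

def masked_second_prediction(pred2_values, region_mask, default=0):
    """Build the second prediction block for only a selected region.

    Pixels inside the region (mask == True) get the values produced by
    the second prediction; pixels outside it get a predefined value.
    """
    block = np.full(pred2_values.shape, default, dtype=pred2_values.dtype)
    block[region_mask] = pred2_values[region_mask]
    return block

# 4x4 second-prediction values; only the left two columns are in the
# region of the second prediction (e.g. chosen by the first prediction's
# direction).
vals = np.arange(16, dtype=np.int16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
p2_block = masked_second_prediction(vals, mask)
```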

In one embodiment, the region of the first prediction may be determined based on the type of the first prediction. For example, the region of the first prediction may be determined based on whether or not the first prediction is an inter prediction. Alternatively, the region of the first prediction may be determined based on whether or not the first prediction is an intra prediction. Alternatively, the region of the first prediction may be determined based on the prediction direction of the first prediction.

In one embodiment, the region of the second prediction may be determined based on the type of the second prediction. For example, the region of the second prediction may be determined based on whether or not the second prediction is an inter prediction. Alternatively, the region of the second prediction may be determined based on whether or not the second prediction is an intra prediction. Alternatively, the region of the second prediction may be determined based on the prediction direction of the second prediction.

In one embodiment, the second prediction may be performed using second reference samples within a selected range.

In one embodiment, the area of the first reference samples used for the first prediction and the area of the second reference samples used for the second prediction may be different from each other.

As described above, the region of the first prediction and the region of the second prediction may be different from each other. The area of the first reference samples may be the area of the reference samples required for generating the prediction values for the region of the first prediction in the first prediction. The area of the second reference samples may be the area of the reference samples required for generating the prediction values for the region of the second prediction in the second prediction.

As described above, the region of the first prediction may be the whole of the block that is the target of the first prediction, and the region of the second prediction may be a part of the block that is the target of the second prediction. In this case, the area of the second reference samples may be a part of the area of the first reference samples.

In step 1120, the encoding unit may perform encoding on the residual signal. The encoding unit may generate information on the encoded residual signal by performing encoding on the residual signal.

The encoding unit may include the transform unit 130, the quantization unit 140, and the entropy encoding unit 150.

Step 1120 may include steps 1121, 1122, and 1123.

In step 1121, the transform unit 130 may generate transform coefficients of the residual signal.

In step 1122, the quantization unit 140 may generate quantized transform coefficients by performing quantization on the transform coefficients.

In step 1123, the entropy encoding unit 150 may generate information on the encoded residual signal by performing entropy encoding on the quantized transform coefficients.
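The quantization in step 1122 (and its inverse, used later in step 1151) can be sketched with a plain uniform quantizer. This is a simplified stand-in: a real codec uses an integer transform, dead-zone quantizers driven by a quantization parameter, and an arithmetic or variable-length entropy coder, none of which are shown; the coefficient values are hypothetical.

```python
def quantize(coeffs, qstep):
    """Uniform quantization of transform coefficients (step 1122)."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization (step 1151): scale the levels back up."""
    return [l * qstep for l in levels]

coeffs = [100.0, -43.0, 7.0, 1.5]   # hypothetical transform coefficients
levels = quantize(coeffs, qstep=8)
recon = dequantize(levels, qstep=8)
```

Note that the reconstructed coefficients only approximate the originals; this quantization loss is the source of the distortion that the rate-distortion cost discussed earlier trades off against rate.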

In step 1130, the entropy encoding unit 150 may generate a bitstream including the information on the encoded residual signal.

The bitstream may include the information on the encoded residual signal, and may also include information related to the prediction.

The entropy encoding unit 150 may add the information related to the prediction to the bitstream. The information related to the prediction may be entropy-encoded.

For example, the information related to the prediction may include prediction method information indicating in which way the current block is encoded.

For example, the prediction method information may indicate by which of intra prediction and inter prediction the current block is encoded. Alternatively, the prediction method information may indicate whether the current block is encoded by intra prediction. Alternatively, the prediction method information may indicate whether the current block is encoded by inter prediction.

For example, the information related to the prediction may include intra-prediction mode information indicating the prediction mode of the intra prediction.

For example, the information related to the prediction may include second prediction information indicating whether the second prediction was used in the encoding of the current block.

For example, the information related to the prediction may include current block type information indicating the type of the current block, first prediction block type information indicating the type of the first prediction block, second prediction block type information indicating the type of the second prediction block, first reference block type information indicating the type of the first reference block, second reference block type information indicating the type of the second reference block, first reference sample type information indicating the type of the first reference sample, and second reference sample type information indicating the type of the second reference sample.

For example, the information related to the prediction may include first prediction region information indicating the region of the first prediction and/or second prediction region information indicating the region of the second prediction.

The information related to the prediction may include the information used for the prediction described in the embodiments. The entropy encoding unit 150 may add the information related to the prediction described in the embodiments to the bitstream according to the procedure of the embodiment or the order of creation.

In step 1140, the communication unit 1020 may transmit the bitstream to the decoding apparatus 200 or the decoding apparatus 1700.

In step 1150, the reconstructed residual signal generation unit may generate a reconstructed residual signal based on the information about the residual signal.

The reconstructed residual signal generation unit may include the inverse quantization unit 160 and the inverse transform unit 170.

Step 1150 may include steps 1151 and 1152.

In step 1151, the inverse quantization unit 160 may generate reconstructed transform coefficients by performing inverse quantization on the quantized transform coefficients.

In step 1152, the inverse transform unit 170 may generate a reconstructed residual signal by performing an inverse transform on the reconstructed transform coefficients.

In step 1160, the reconstructed block generation unit may generate a reconstructed block based on the reconstructed residual signal, the first prediction, and the second prediction.

The reconstructed block generation unit may include the addition unit 175.

For example, the sum of the reconstructed residual signal and the prediction signal may represent the reconstructed block. Alternatively, the reconstructed block may be generated based on the sum of the reconstructed residual signal and the prediction signal.

For example, the reconstructed block may be the sum of the reconstructed residual block and the prediction block. The prediction block may be the sum of the first prediction block and the second prediction block. Alternatively, the reconstructed block may be generated based on the sum of the reconstructed residual block and the prediction block.

For example, the reconstructed block may be generated from the reconstructed residual block, the second prediction block, and the first prediction block. The reconstructed block generation unit may generate a reconstructed first residual signal by adding the reconstructed residual signal and the second prediction signal. The reconstructed block generation unit may generate the reconstructed block by adding the reconstructed first residual signal and the first prediction signal.

Alternatively, the reconstructed block generation unit may generate a reconstructed first residual block by adding the reconstructed residual block and the second prediction block. The reconstructed block generation unit may generate the reconstructed block by adding the reconstructed first residual block and the first prediction block.

The reconstructed block and the reconstructed first residual signal may be used as reference blocks for the encoding of other blocks.
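The addition order described above can be sketched as follows. This is an illustrative sketch only; the function name, the NumPy array types, and the example shapes are assumptions and are not part of the disclosure:

```python
import numpy as np

def reconstruct_block(recon_residual, first_pred, second_pred):
    """Rebuild a block from its reconstructed residual and the two predictions.

    recon_residual : reconstructed (second) residual block
    second_pred    : second prediction block (zero outside the predicted area)
    first_pred     : first prediction block
    """
    # Adding the second prediction block restores the first residual block.
    recon_first_residual = recon_residual + second_pred
    # Adding the first prediction block then yields the reconstructed block.
    recon_block = recon_first_residual + first_pred
    return recon_first_residual, recon_block
```

Both returned arrays matter: the reconstructed block and the reconstructed first residual may each serve as a reference for encoding other blocks, as stated above.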

FIG. 12 shows a first prediction and a second prediction using intra prediction according to an embodiment.

In step 1110 described above with reference to FIG. 11, the first prediction and the second prediction may be intra predictions. FIG. 12 illustrates an example of the first prediction and the second prediction in the case where the current block is encoded by intra prediction.

The current block may be a PU. In FIG. 12, the current block is shown as a PU having a size of 2Nx2N.

First, the prediction unit may determine, using the first prediction, which one of a plurality of prediction modes is to be used for the encoding of the current block.

The prediction unit may perform the first prediction using each of the plurality of prediction modes. For example, the prediction directions of the plurality of prediction modes may be different from each other.

In the first prediction, reconstructed blocks neighboring the current block may be used as reference blocks. Further, the pixels of the reconstructed blocks neighboring the current block may be used as reference samples.

The prediction unit may generate first prediction blocks of the plurality of prediction modes via the first prediction.

Further, the prediction unit may generate first prediction errors of the plurality of prediction modes via the first prediction. For one prediction mode, a first prediction block and a first prediction error may be computed.

The first prediction error may indicate the difference (residual) between the current block and the first prediction block. The first prediction error may represent a first residual block.

When the first prediction error is generated, the prediction unit may select, with reference to the first prediction error, the area in the first residual block to which the second prediction is applied. This area may be a part of the first residual block.

Next, the prediction unit may determine to which part of the first residual block the second prediction is to be applied for the encoding of the current block.

The second prediction may be applied to a selected partial area of the first residual block. Various methods may be used to select the area to which the second prediction is applied.

The prediction area of the second prediction may be specified by the range of the reference samples used in the second prediction.

In the second prediction, reconstructed residual blocks neighboring the current block may be used as reference blocks. Further, the pixels of the reconstructed residual blocks neighboring the current block may be used as reference samples.

The prediction direction of the first prediction and the prediction direction of the second prediction may be the same. That is, the intra-prediction mode of the first prediction and the intra-prediction mode of the second prediction may be the same.

In FIG. 12, the intra-prediction mode whose mode value is 30 is shown by way of example.

In general, if the prediction direction of the first prediction and the prediction direction of the second prediction are the same, the range of the reference samples of the first prediction and the range of the reference samples of the second prediction may be the same.

The second prediction may be applied to a part of the first residual block. The range of the reference samples of the second prediction may be a part of the range of the reference samples of the first prediction.

In FIG. 12, the range of the reference samples of the second prediction is shown as the range between α and α+β.

α may indicate the first reference sample, among the reference samples for the prediction direction, to be used in the second prediction. Alternatively, α may indicate the start position of the second prediction.

α+β may indicate the last reference sample, among the reference samples for the prediction direction, to be used in the second prediction. Alternatively, α+β may indicate the end position of the second prediction.

β may represent the number of the reference samples used in the second prediction. Alternatively, β may represent the length of the range of the second prediction.

When the range of the reference samples is determined, the area of the second prediction may be specified according to the determined range of the reference samples and the prediction direction. The specified area is shown with dashed lines in the residual block in FIG. 12. The area of the second prediction may be the area generated as the reference samples in the range between α and α+β progress along the prediction direction.

The prediction unit may select, among the reference samples for the prediction direction, the range of the reference samples to be used in the second prediction. The prediction unit may perform the second prediction using the selected part of the full range of the reference samples for the prediction direction. Here, the full range of the reference samples may correspond to the reference samples of the first prediction.

In the second prediction, the values of the reference samples outside the selected range may be regarded as null (NULL) or "0". For example, the values of the reference samples before α may be regarded as 0, and the values of the reference samples after α+β may be regarded as 0.
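As an illustrative sketch of the masking just described, assuming a vertical prediction direction for simplicity (the function, its arguments, and the use of NumPy are hypothetical, not taken from the disclosure), reference samples outside [α, α+β) are treated as 0 and the remaining residual reference samples are propagated along the prediction direction:

```python
import numpy as np

def second_prediction_vertical(above_residual_refs, alpha, beta, block_size):
    """Sketch of the second prediction for a vertical prediction direction.

    above_residual_refs : residual reference samples above the block
                          (pixels of neighbouring reconstructed residual blocks)
    alpha, beta         : samples outside [alpha, alpha + beta) are treated
                          as 0, so only part of the block is predicted
    """
    refs = np.asarray(above_residual_refs, dtype=float).copy()
    mask = np.zeros_like(refs)
    mask[alpha:alpha + beta] = 1.0
    masked = refs * mask          # samples outside the range become 0
    # In the vertical mode each column repeats its reference sample downward.
    return np.tile(masked, (block_size, 1))
```

Columns outside the selected range thus receive a second prediction of 0, which matches the statement that the second prediction applies only to a part of the first residual block.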

The prediction unit may generate second prediction blocks for a plurality of candidate areas via the second prediction.

The plurality of candidate areas may be different from each other. The plurality of candidate areas may be different parts of the first residual block.

For example, a partial area of the first residual block may be specified by the reference samples used in the second prediction. In this case, the partial area of the first residual block, i.e., the area of the second prediction, may be represented by the ordered pair (α, β). The ordered pairs (α, β) corresponding to the plurality of candidate areas may be different from each other. That is, the ordered pair (α, β) may represent the range of the reference samples used in the second prediction.

The plurality of candidate areas may include all the different areas that can be selected in a particular manner with respect to the first residual block. For example, the particular manner may be the prediction direction.

For example, a partial area of the first residual block may be specified by the reference samples used in the second prediction. When the number of the reference samples that can be used for the whole of the first residual block in the prediction mode of the second prediction is n, α may be greater than or equal to 0 and less than or equal to n-1. β may be greater than or equal to 0 and less than or equal to n-1-α. Further, when the number of the reference samples that can be used for the whole of the first residual block in the prediction mode of the second prediction is n, the number of the ordered pairs (α, β) may be (n+1)n/2. That is, when the number of the reference samples that can be used for the whole of the first residual block in the prediction mode of the second prediction is n, the number of the plurality of candidate areas may be (n+1)n/2.
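The count of (n+1)n/2 candidate areas can be checked by direct enumeration. The following sketch (the function name is an assumption) lists the ordered pairs (α, β) under the constraints above:

```python
def candidate_regions(n):
    """List the ordered pairs (alpha, beta) for n usable reference samples.

    alpha ranges over 0..n-1 and beta over 0..n-1-alpha, as described above.
    """
    return [(a, b) for a in range(n) for b in range(n - a)]

# For example, n = 4 yields (4 + 1) * 4 / 2 = 10 candidate areas.
```

The sum n + (n-1) + ... + 1 over the admissible β values for each α gives exactly n(n+1)/2 pairs.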

For example, the plurality of candidate areas may respectively correspond to ranges of the reference samples used in the second prediction.

α and β may each be an index in the list of the reference samples specified by the prediction direction.

In order to indicate positions in the list of the reference samples using α and β, an order of the reference samples may be defined.

In an embodiment, first, the smaller the value of the X coordinate of a reference sample, the earlier the order of the reference sample. Next, among reference samples having the same value of the X coordinate, the larger the value of the Y coordinate of a reference sample, the earlier the order of the reference sample. For example, the order of the reference samples may be given in the order of the below-left reference samples 731, the left reference samples 733, the above-left corner reference sample 735, the above reference samples 737, and the above-right reference samples 739. Among the below-left reference samples 731, a reference sample located lower may have an earlier order than a reference sample located higher. Among the left reference samples 733, a reference sample located lower may have an earlier order than a reference sample located higher. Among the above reference samples 737, a reference sample located on the left may have an earlier order than a reference sample located on the right. Among the above-right reference samples 739, a reference sample located on the left may have an earlier order than a reference sample located on the right.

The above order of the reference samples is only one example; other orders may be used. For example, the order of the reference samples may be assigned in the reverse of the above order.
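The ordering of the embodiment (smaller X coordinate first; among equal X, larger Y first) corresponds to a simple sort key, as the following sketch shows. The coordinate convention (Y grows downward, as in image coordinates) and the function name are illustrative assumptions:

```python
def reference_sample_order(samples):
    """Sort (x, y) reference-sample coordinates: smaller X first, and among
    samples with the same X, larger Y (i.e., the lower sample) first."""
    return sorted(samples, key=lambda p: (p[0], -p[1]))
```

With this key, the left column of reference samples is ordered bottom-up before the above row is ordered left-to-right, matching the ordering given for samples 731 through 739.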

In addition, the order of the reference samples may be given with respect to the reference samples determined in accordance with the prediction direction. That is, among the above-described below-left reference samples 731, left reference samples 733, above-left corner reference sample 735, above reference samples 737, and above-right reference samples 739, only the part of the reference samples determined in accordance with the prediction direction of the second prediction may be used for the prediction of the whole of the first residual block. The order may be given with respect to the determined part of the reference samples.

For example, when the prediction direction of the second prediction is the vertical direction, the reference samples that can be used for the whole of the first residual block may be the above reference samples 737. When the size of the current block is 2Nx2N, the order of the left-most sample of the above reference samples 737 may be 0, and the order of the right-most sample of the above reference samples 737 may be 2N-1.

Alternatively, for example, when the prediction direction of the second prediction is the horizontal direction, the reference samples that can be used for the whole of the first residual block may be the left reference samples 733. When the size of the current block is 2Nx2N, the order of the lowest sample of the left reference samples 733 may be 0, and the order of the top sample of the left reference samples 733 may be 2N-1.

The prediction unit may generate second prediction errors for the plurality of candidate areas via the second prediction. For one area, a second prediction block and a second prediction error may be computed.

The second prediction error may indicate the difference (residual) between the first residual block and the second prediction block. The second prediction error may represent a second residual block.

As described above, the prediction unit may generate a plurality of first prediction errors for the plurality of prediction modes. In addition, for each first prediction error of the plurality of first prediction errors, the prediction unit may generate, via the second prediction, second prediction errors for the plurality of candidate areas of the first residual block indicated by the first prediction error.

Through the process described above, the prediction unit may generate second prediction errors for all the candidate areas of the plurality of prediction modes.

The prediction unit may select, among the generated second prediction errors, the second prediction error having the minimum rate-distortion cost. The candidate area corresponding to the selected second prediction error among the plurality of candidate areas may be the part of the current block subject to the second prediction. Further, the prediction mode corresponding to the selected second prediction error among the plurality of prediction modes may be the prediction mode to be used by the first prediction. That is, depending on which second prediction error for which area of which prediction mode has the minimum rate-distortion cost, the prediction unit may determine the prediction mode of the first prediction and the area of the second prediction to be used for the encoding of the current block.

Alternatively, first, the prediction unit may select, among the plurality of prediction modes, the prediction mode having the minimum rate-distortion cost. In this case, the second prediction may be performed with respect to the first prediction error, or first residual block, produced by the selected prediction mode. Next, the prediction unit may select, among the second prediction errors, the second prediction error having the minimum rate-distortion cost. That is, the prediction unit may determine the prediction mode of the first prediction to be used for the encoding of the current block depending on which first prediction error has the minimum rate-distortion cost, and may determine the area of the second prediction to be used for the encoding of the current block depending on which second prediction error has the minimum rate-distortion cost.
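The second, sequential strategy can be sketched as follows. The helper cost functions are assumptions standing in for the encoder's actual rate-distortion computation; only the selection structure reflects the text:

```python
def sequential_search(modes, regions, rd_cost_first, rd_cost_second):
    """Sequential search: best first-prediction mode, then best region.

    rd_cost_first(mode)          -> RD cost of the first prediction for a mode
    rd_cost_second(mode, region) -> RD cost when the second prediction is also
                                    applied to the given candidate region
    """
    # Step 1: mode with the minimum rate-distortion cost of the first prediction.
    best_mode = min(modes, key=rd_cost_first)
    # Step 2: candidate region minimising the RD cost of the second prediction
    # on the first residual block produced by that mode.
    best_region = min(regions, key=lambda r: rd_cost_second(best_mode, r))
    return best_mode, best_region
```

The joint strategy described first would instead minimise over all (mode, region) pairs; the sequential variant trades a possibly suboptimal pair for far fewer second-prediction evaluations.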

Referring back to FIG. 11, the above-described information related to the prediction may include information about the selective secondary intra prediction described above.

For example, the information related to the prediction may include SSIP_flag, a flag indicating whether the selective secondary intra prediction (Selective Secondary Intra-Prediction; SSIP) is used.

For example, the information related to the prediction may include information representing α and information representing β described above. That is, the information related to the prediction may include information indicating the first reference sample used in the second prediction and information indicating the number of the reference samples used in the second prediction. Alternatively, the information related to the prediction may include information indicating the first reference sample used in the second prediction and information indicating the last reference sample used in the second prediction.

Alternatively, the information related to the prediction may include information indicating the position at which the second prediction starts and information indicating the position to which the second prediction proceeds. In other words, the information related to the prediction may include information indicating the start position of the second prediction and the end position of the second prediction.

For example, the information representing α, or the information indicating the position at which the second prediction starts, may be SSIP_start_point, and the information representing β, or the information indicating the position to which the second prediction proceeds, may be SSIP_length.
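Gathered together, the syntax elements named above could be collected as in the following sketch. The dictionary container and the function name are purely illustrative; this is not a bitstream format, only a grouping of the elements SSIP_flag, SSIP_start_point, and SSIP_length:

```python
def ssip_syntax(use_second_prediction, alpha=None, beta=None):
    """Collect the SSIP-related syntax elements named in the text.

    SSIP_flag        : whether selective secondary intra prediction is used
    SSIP_start_point : alpha, the start position of the second prediction
    SSIP_length      : beta, the number of reference samples / range length
    """
    syntax = {"SSIP_flag": 1 if use_second_prediction else 0}
    if use_second_prediction:
        # alpha and beta are signalled only when the flag indicates use.
        syntax["SSIP_start_point"] = alpha
        syntax["SSIP_length"] = beta
    return syntax
```

Signalling α and β only when SSIP_flag is set mirrors the conditional presence of the range information described above.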

Through the above-described determination, the prediction error of the current block can be further reduced compared to performing only the conventional first prediction. The second prediction may effectively reduce the prediction error with respect to the distortion tendency of the prediction error remaining in the direction of the prediction mode, described above with reference to FIG. That is, by using the second prediction for the distortion tendency of the prediction error remaining in the direction of the prediction mode, the performance of the prediction may be efficiently improved, and the prediction error may be reduced.

FIG. 13 is a flowchart of a method for generating a residual block according to an embodiment.

Step 1110 described above with reference to FIG. 11 may include steps 1310, 1320, 1330, 1340, 1350, 1360, 1370, and 1380.

In an embodiment, the target block of the first prediction may be the current block. The first prediction may be an intra prediction.

In step 1310, the prediction unit may generate a first prediction signal by performing the first prediction on the current block.

The first prediction signal may represent the first prediction block.

As described above with reference to FIG. 12, the prediction unit may perform the first prediction for each prediction mode of the plurality of prediction modes in order to find the optimal prediction mode of the first prediction for the encoding of the current block. That is, steps 1310, 1320, and 1325 may be performed for each prediction mode of the plurality of prediction modes.

The plurality of prediction modes may correspond to different prediction directions.

In step 1320, the prediction unit may generate a first prediction error based on the current block and the first prediction signal.

The first prediction error may represent a first prediction error block or the first residual block.

The first residual block may represent the difference (residual) between the current block and the first prediction block. That is, the prediction unit may generate the first prediction error by subtracting the first prediction block, represented by the first prediction signal, from the current block.

In step 1325, the prediction unit may calculate the rate-distortion cost of the first prediction by using the first prediction error.

The prediction unit may store the calculated first prediction error and/or the rate-distortion cost of the first prediction in the memory 1030, the storage 1040, or a buffer.

As described above with reference to FIG. 12, the prediction unit may perform the first prediction for each prediction mode of the plurality of prediction modes in order to find the optimal prediction mode of the first prediction for the encoding of the current block. The prediction unit may calculate the rate-distortion costs of the plurality of prediction modes using the first prediction errors of the plurality of prediction modes. The prediction unit may store the minimum of the calculated rate-distortion costs as the first minimum rate-distortion cost RDcost_1. For example, RDcost_1 may be the rate-distortion cost of the mode having the minimum rate-distortion cost in the first intra prediction.

The prediction unit may store RDcost_1 and the first prediction error corresponding to RDcost_1.

The second prediction may be performed selectively based on predefined criteria.

In step 1330, the prediction unit may determine whether to perform the second prediction.

In an embodiment, the determination of step 1330 may not be the final determination that the current block is to be encoded using the second prediction. In step 1330, the prediction unit may judge whether the second prediction is enabled for the current block or the first residual block.

For example, in the case where it is determined not to perform the second prediction, the current block may be encoded without using the second prediction. In the case where it is determined to perform the second prediction, a second prediction signal and a second prediction error may be generated by the second prediction, but whether the second prediction is used in the encoding of the current block may be determined through the comparison of rate-distortion costs in step 1360.

As described above with reference to FIG. 12, the prediction unit may determine whether to perform the second prediction after performing the first prediction for each prediction mode of the plurality of prediction modes in order to find the optimal prediction mode of the first prediction for the encoding of the current block. That is, step 1330 may be performed for each prediction mode of the plurality of prediction modes.

When it is determined not to perform the second prediction, the residual block of the current block may be the block represented by the first prediction error. That is, the first prediction error may be provided as the residual block.

In the case where it is determined to perform the second prediction, the residual block of the current block may be the block represented by the second prediction error signal, which will be described later. That is, the second prediction error signal may be provided as the residual block.

The prediction unit may determine whether to perform the second prediction based on a predefined condition.

In an embodiment, the prediction unit may determine whether to perform the second prediction based on the prediction mode of the first prediction.

For example, the prediction unit may not perform the second prediction if the prediction mode of the first prediction is a non-directional mode. Alternatively, the prediction unit may perform the second prediction if the prediction mode of the first prediction is a directional mode.

If the prediction mode of the first prediction is a non-directional mode among the plurality of prediction modes, the prediction for a selected area or range may not be possible because the prediction mode has no direction. Therefore, the second prediction may be performed only when the prediction mode of the first prediction is a directional mode.

In an embodiment, the prediction unit may determine whether to perform the second prediction based on the position of the current block.

For example, the prediction unit may not perform the second prediction if the current block is adjacent to a boundary. For example, the prediction unit may not perform the second prediction if the current block is adjacent to the top boundary and the left boundary. The boundary may be a boundary of a picture, a boundary of a tile, or a boundary of a slice.

If the current block is adjacent to the top boundary and the left boundary, the first residual blocks for the second prediction may not be sufficient. If the first residual blocks are insufficient, the second reference samples for performing the second prediction cannot be produced intact.

In an embodiment, the prediction unit may determine whether to perform the second prediction based on the number of reconstructed blocks adjacent to the current block.

For example, the prediction unit may not perform the second prediction if the number of reconstructed blocks adjacent to the current block is less than two.
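Combined, the example conditions above amount to a simple gate. The following sketch assumes all three example conditions are checked together; the function name is hypothetical, and the exact rule set may vary per embodiment:

```python
def second_prediction_enabled(mode_is_directional, at_top_left_boundary,
                              num_reconstructed_neighbours):
    """Return whether the second prediction may be performed, combining the
    example conditions given in the text."""
    if not mode_is_directional:          # non-directional mode: no direction
        return False
    if at_top_left_boundary:             # residual reference samples unavailable
        return False
    if num_reconstructed_neighbours < 2: # too few reconstructed neighbour blocks
        return False
    return True
```

Note that this gate only enables the second prediction; whether it is actually used is still decided by the rate-distortion comparison of step 1360.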

In the case where it is determined to perform the second prediction, step 1340 may be performed.

When it is determined not to perform the second prediction, step 1370 or step 1120 may be performed. If step 1120 is performed, the residual signal of step 1120 may be the first prediction error. That is, when only the first prediction is performed without the second prediction, the first prediction error produced by the first prediction may be used as the residual signal of the current block.

In step 1340, the prediction unit may generate a second prediction signal by performing the second prediction on the first prediction error.

The second prediction signal may represent the second prediction block.

The second prediction may be an intra prediction, and the prediction direction of the second prediction may be identical to the prediction direction of the first prediction.

The prediction unit may perform the second prediction with respect to a partial area of the first residual block. The partial area may be specified by the reference samples to be used in the second prediction.

The area of the reference samples of the second prediction may be a part of the area of the reference samples of the first prediction.

As described above with reference to FIG. 12, the prediction unit may perform the second prediction for each candidate area of the plurality of candidate areas in order to find the optimal area of the second prediction for the encoding of the current block.

Further, as described above with reference to FIG. 12, the prediction unit may perform the second prediction for each candidate area of each prediction mode in order to find the optimal prediction mode of the first prediction and the optimal area of the second prediction for the encoding of the current block. That is, step 1340 may be performed for each candidate area of each prediction mode.

Here, the optimal prediction mode of the first prediction and the optimal area of the second prediction may be the prediction mode and the area corresponding to the candidate area having the minimum rate-distortion cost among the plurality of candidate areas of the plurality of prediction modes. That is, based on whether the minimum rate-distortion cost is generated when the first prediction and the second prediction are performed for a certain prediction mode and a certain area among the plurality of candidate areas of the plurality of prediction modes, the prediction unit may determine the prediction mode of the first prediction and the area of the second prediction for the encoding of the current block.

The plurality of candidate areas may respectively correspond to ranges of the reference samples used in the second prediction.

The prediction unit may select, among the reference samples for the prediction direction, the range of the reference samples to be used in the second prediction. The prediction unit may perform the second prediction using the selected part of the full range of the reference samples for the prediction direction. In the second prediction, the values of the reference samples outside the selected range may be regarded as "0".

In step 1350, the prediction unit may generate a second prediction error based on the first prediction error and the second prediction signal.

The second prediction error may represent a second prediction error block or the second residual block.

The second residual block may represent the difference (residual) between the first residual block and the second prediction block. That is, the prediction unit may generate the second prediction error signal by subtracting the second prediction block, represented by the second prediction signal, from the first residual block, represented by the first prediction error.

In step 1355, the prediction unit may calculate the rate-distortion cost of the second prediction by using the second prediction error.

The prediction unit may store the calculated second prediction error and/or the rate-distortion cost of the second prediction in the memory 1030, the storage 1040, or a buffer.

As described above with reference to FIG. 12, the prediction unit may perform the second prediction for each area of the plurality of areas in order to find the optimal area of the second prediction for the encoding of the current block. The prediction unit may calculate the rate-distortion costs of the plurality of areas using the second prediction errors of the plurality of areas. The prediction unit may store the minimum of the calculated rate-distortion costs as the second minimum rate-distortion cost RDcost_2. For example, RDcost_2 may be the rate-distortion cost of the area having the minimum rate-distortion cost in the selective secondary intra prediction.

The prediction unit may store RDcost_2 and the second prediction error corresponding to RDcost_2.

The second prediction may be selectively used based on predefined criteria.

In step 1360, the prediction unit may determine whether to use the second prediction in the encoding of the current block.

The prediction unit may determine whether to use the second prediction in the encoding of the current block based on predefined criteria.

For example, the prediction unit may decide to use the second prediction if the rate-distortion cost is further reduced by using the second prediction. The prediction unit may decide not to use the second prediction if the rate-distortion cost is not further reduced even when the second prediction is used.

For example, the prediction unit may determine to use the second prediction if the minimum rate-distortion cost in the case of using the second prediction is smaller than the minimum rate-distortion cost in the case where the second prediction is not used.

In an embodiment, the prediction unit may determine whether to use the second prediction in the encoding of the current block based on a comparison of RDcost_1 and RDcost_2.

RDcost_2 being smaller than RDcost_1 may indicate that the rate-distortion cost of the encoding of the current block can be further reduced by using the second prediction, compared to when only the first prediction is used.

For example, if RDcost_1 is less than RDcost_2, the prediction unit may decide not to use the second prediction for the encoding of the current block. If RDcost_1 is greater than RDcost_2, the prediction unit may decide to use the second prediction for the encoding of the current block.
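The step-1360 decision and the resulting choice of residual signal reduce to a comparison of RDcost_1 and RDcost_2, as in the following sketch (the function name and the return convention are assumptions):

```python
def choose_residual(rdcost_1, rdcost_2, first_error, second_error):
    """Compare RDcost_1 and RDcost_2 and pick the residual signal accordingly.

    Returns (second_prediction_usage, residual): usage is 1 when the second
    prediction wins (RDcost_2 < RDcost_1) and the second prediction error is
    used as the residual; otherwise usage is 0 and the first prediction error
    is used.
    """
    if rdcost_2 < rdcost_1:
        return 1, second_error  # step 1380: usage = "1", residual = 2nd error
    return 0, first_error       # step 1370: usage = "0", residual = 1st error
```

The usage value corresponds to the second prediction usage information set in steps 1370 and 1380, and the chosen residual is the signal carried forward to step 1120.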

When the second prediction is not used for the current block can be encoded in the step 1370 is performed.

When the second prediction to be used for coding the current block can be step 1380 is performed.

In step 1370, the prediction unit can perform the setting indicating that the claim does not use the second prediction.

To indicate that prediction unit using the second prediction information is not using the second prediction may set the value of the second prediction information used. For example, if the second prediction value of the usage information "0" and using the second prediction information may indicate that it does not use the second prediction.

When step 1370 is performed, the residual signal of step 1120 may be the first prediction error. That is to say, when the second prediction is not performed for the current block, the first prediction error generated based on the first prediction may be used as the residual signal.

In step 1380, the prediction unit may perform a setting indicating that the second prediction is used.

The prediction unit may set the value of the second prediction usage information so as to indicate that the second prediction is used. For example, when the value of the second prediction usage information is "1", the second prediction usage information may indicate that the second prediction is used.

When step 1380 is performed, the residual signal of step 1120 may be the second prediction error. That is to say, when the second prediction is performed for the current block, the second prediction error generated based on the first prediction error and the second prediction may be used as the residual signal.
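The selection of the residual signal described above can be sketched as follows. This is an illustrative sketch under assumptions: the names e1 and p2 are not taken from the embodiment, and the second prediction error is formed as the first prediction error minus the second prediction signal:

```python
import numpy as np

def select_residual(use_second_prediction, e1, p2):
    """Select the residual signal to be transformed and encoded.

    e1: first prediction error (current block minus first prediction).
    p2: second prediction signal, predicted from residuals of neighbors.
    """
    if use_second_prediction:
        # Second prediction error e2 = e1 - p2 becomes the residual.
        return e1 - p2
    # Otherwise the first prediction error itself is the residual.
    return e1

e1 = np.array([[4, 4], [4, 4]])
p2 = np.array([[1, 0], [1, 0]])
res = select_residual(True, e1, p2)  # e2 = e1 - p2
```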

FIG. 14 is a flowchart of a method of generating a reconstructed block, according to an example.

Step 1160, described above with reference to FIG. 11, may include the following steps 1410, 1420, and 1430.

In step 1410, the reconstructed block generation unit may determine whether the second prediction has been used for the encoding of the current block.

When the second prediction has been used for the encoding of the current block, step 1420 may be performed.

When the second prediction has not been used for the encoding of the current block, step 1430 may be performed. When the second prediction is not used, the reconstructed residual signal of the current block may be the reconstructed first prediction error.

In step 1420, the reconstructed block generation unit may generate a reconstructed first prediction error based on the reconstructed residual signal and the second prediction signal.

The reconstructed block generation unit may generate the reconstructed first prediction error of the current block based on the reconstructed residual signal of the current block and the second prediction signal for the partial region of the current block.

The reconstructed first prediction error may be the sum of the reconstructed residual signal and the second prediction signal. Alternatively, the reconstructed first prediction error may be generated based on the reconstructed residual signal and the second prediction signal.

In step 1430, the reconstructed block generation unit may generate the reconstructed block based on the reconstructed first prediction error and the first prediction signal.

The reconstructed block generation unit may generate the signal of the reconstructed block based on the reconstructed first prediction error of the current block and the first prediction signal for the current block.

That is to say, when the second prediction is used, the signal indicating the reconstructed block may be the sum of the reconstructed residual signal, the second prediction signal, and the first prediction signal. Alternatively, when the second prediction is used, the signal indicating the reconstructed block may be generated based on the reconstructed residual signal, the second prediction signal, and the first prediction signal.

That is to say, when the second prediction is not used, the signal indicating the reconstructed block may be the sum of the reconstructed residual signal and the first prediction signal. Alternatively, when the second prediction is not used, the signal indicating the reconstructed block may be generated based on the reconstructed residual signal and the first prediction signal.

The signal of the reconstructed block may be the sum of the reconstructed first prediction error and the first prediction signal. Alternatively, the signal of the reconstructed block may be generated based on the reconstructed first prediction error and the first prediction signal.
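The two reconstruction cases above can be sketched as follows. This is an illustrative sketch only; passing p2 as None stands for the case in which the second prediction is not used, and the names are assumptions:

```python
import numpy as np

def reconstruct_block(e_hat, p1, p2=None):
    """Sum the reconstructed residual and the prediction signals.

    e_hat: reconstructed residual signal (the reconstructed second
           prediction error when the second prediction was used,
           otherwise the reconstructed first prediction error).
    p1:    first prediction signal.
    p2:    second prediction signal, or None when it is not used.
    """
    if p2 is not None:
        # Reconstructed first prediction error = e_hat + p2;
        # reconstructed block = (e_hat + p2) + p1.
        return e_hat + p2 + p1
    # Without the second prediction, the residual is the first
    # prediction error itself.
    return e_hat + p1
```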

FIG. 15 illustrates an encoding process according to an embodiment.

In FIG. 15, a circle symbol to which the + and - signs are attached may represent the generation of an error signal between two signals. The error signal may be generated by subtracting the input signal applied to the - sign from the input signal applied to the + sign.

A circle symbol to which two + signs are attached may represent the generation of a sum signal of two signals. The sum signal may be generated by adding the input signals applied to the + signs.

The prediction unit described above with reference to FIG. 11 may include a first predictor and a second predictor.

The process within the dotted line may represent the SSIP.

A diamond may represent a branch according to a condition.

In a formula such as "A = B", "A" may indicate the target to which a value is assigned, and "B" may represent the value assigned to the target.

TQ may represent transformation and quantization. (TQ)⁻¹ may represent inverse transformation and inverse quantization.

e1 may represent the signal of the first prediction error.

ê1 may represent the signal of the reconstructed first prediction error.

e2 may represent the signal of the second prediction error.

ê2 may represent the signal of the reconstructed second prediction error.
Although not shown in FIG. 15, when the second prediction is used, ê may be the same as ê2. When the second prediction is not used, ê may be the same as ê1.

e may represent the residual signal.

ê may represent the reconstructed residual signal.

p1 may represent the first prediction signal.

p2 may represent the second prediction signal.

FIG. 16 is a structural diagram of a decoding apparatus according to an embodiment.

The decoding apparatus 1600 may be a general-purpose computer system that performs decoding.

As illustrated in FIG. 16, the decoding apparatus 1600 may include at least one processor 1610, memory 1630, a user interface (UI) input device 1650, a UI output device 1660, and storage 1640, which communicate with each other via a bus 1690. Further, the decoding apparatus 1600 may further include a communication unit 1620 connected to a network 1699. The processor 1610 may be a central processing unit (CPU) or a semiconductor device that executes processing instructions stored in the memory 1630 or the storage 1640. Each of the memory 1630 and the storage 1640 may be any of various types of volatile or nonvolatile storage media. For example, the memory may include at least one of read-only memory (ROM) 1631 and random access memory (RAM) 1632.

The processor 1610 may include the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the adder 255, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 200.

A recording medium may store at least one module for the operation of the decoding apparatus 1600. The memory 1630 may store the at least one module, which may be configured to be executed by the at least one processor 1610.

According to one embodiment, at least some of the entropy decoding unit 210, the inverse quantization unit 220, the inverse transform unit 230, the intra prediction unit 240, the inter prediction unit 250, the adder 255, the filter unit 260, and the reference picture buffer 270 of the decoding apparatus 1600 may be program modules, and may communicate with an external device or system. The program modules may be included in the decoding apparatus 1600 in the form of an operating system, application program modules, or other program modules.

The program modules may be physically stored in any of various types of well-known storage devices. Further, at least some of the program modules may be stored in a remote storage device capable of communicating with the decoding apparatus 1600.

The program modules may include, but are not limited to, routines, subroutines, programs, objects, components, and data structures for performing functions or operations according to an embodiment or for implementing abstract data types according to an embodiment.

The program modules may be composed of instructions or code executed by the processor 1610.

Functions related to the communication of data or information of the decoding apparatus 1600 may be performed via the communication unit 1620.

FIG. 17 is a flowchart of a decoding method according to an embodiment.

The decoding method according to the embodiment may be performed by the decoding apparatus 200 or the decoding apparatus 1600.

In step 1710, the communication unit 1620 may receive a bitstream from the encoding apparatus 100 or the encoding apparatus 1100.

The bitstream may include information about an encoded residual signal, and may include information related to prediction.

The information about the encoded residual signal may include entropy-encoded quantized transform coefficients.

The information related to the prediction may be entropy-encoded.

In step 1720, the entropy decoding unit 210 may generate quantized transform coefficients by performing entropy decoding on the bitstream. Further, the entropy decoding unit 210 may generate the information related to the prediction by performing entropy decoding on the bitstream.

For example, the information related to the prediction may include prediction method information indicating the method by which the current block has been encoded.

For example, the prediction method information may indicate which of intra prediction and inter prediction has been used to encode the current block. Alternatively, the prediction method information may indicate whether the current block has been encoded using intra prediction. Alternatively, the prediction method information may indicate whether the current block has been encoded using inter prediction.

For example, the information related to the prediction may include intra-prediction mode information indicating the prediction mode of intra prediction.

For example, the information related to the prediction may include second prediction usage information indicating whether the second prediction has been used in the encoding of the current block.

For example, the information related to the prediction may include first prediction block type information indicating the type of the first prediction block of the current block, second prediction block type information indicating the type of the second prediction block of the current block, first reference block type information indicating the type of the first reference block, second reference block type information indicating the type of the second reference block, first reference sample type information indicating the type of the first reference sample, and second reference sample type information indicating the type of the second reference sample.

For example, the information related to the prediction may include first prediction region information indicating the region of the first prediction and/or second prediction region information indicating the region of the second prediction.

Further, the information related to the prediction may include the information used for the prediction described in the embodiments. The information related to the prediction may be included in the bitstream in the order described in the embodiments or in the order of generation according to the embodiments.

In step 1730, the reconstructed residual signal generation unit may generate a reconstructed residual signal of the current block based on the quantized transform coefficients.

The reconstructed residual signal generation unit may include the inverse quantization unit 220 and the inverse transform unit 230.

Step 1730 may include steps 1731 and 1732.

In step 1731, the inverse quantization unit 220 may generate inverse-quantized transform coefficients by performing inverse quantization on the quantized transform coefficients.

In step 1732, the inverse transform unit 230 may generate the reconstructed residual signal by performing an inverse transformation on the inverse-quantized transform coefficients.
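Steps 1731 and 1732 can be sketched as follows. This is a simplified illustration under assumptions: the scalar quantization step and the small orthonormal basis stand in for the codec's actual per-frequency scaling and integer transform:

```python
import numpy as np

def restore_residual(levels, qstep, basis):
    """Steps 1731 and 1732: inverse quantization, then inverse transform.

    levels: quantized transform coefficients (after entropy decoding).
    qstep:  scalar quantization step; a simplifying assumption, since
            real codecs use per-frequency scaling and rounding offsets.
    basis:  orthonormal transform matrix, so its transpose inverts it.
    """
    coeffs = levels * qstep            # step 1731: inverse quantization
    return basis.T @ coeffs @ basis    # step 1732: inverse transformation

# 2x2 orthonormal (Haar-like) basis as a stand-in for the codec transform.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
```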

Through steps 1710, 1720, and 1730, the reconstructed residual signal of the current block may be generated.

In step 1740, the reconstructed block generation unit may generate a reconstructed block based on the reconstructed residual signal, the first prediction, and the second prediction.

The reconstructed block generation unit may include the adder 255, the intra prediction unit 240, the inter prediction unit 250, the filter unit 260, and the reference picture buffer 270.

The reconstructed residual signal may represent a reconstructed residual block.

For example, the reconstructed block may be the sum of the reconstructed residual block and the prediction block. Alternatively, the reconstructed block may be generated based on the sum of the reconstructed residual block and the prediction block.

For example, the prediction block may be the sum of the first prediction block and the second prediction block. Alternatively, the prediction signal may be the sum of the first prediction signal and the second prediction signal.

For example, the reconstructed block may be the sum of the reconstructed residual block, the first prediction block, and the second prediction block. The reconstructed signal may be the sum of the reconstructed residual signal, the first prediction signal, and the second prediction signal. Alternatively, the reconstructed block may correspond to the sum of the reconstructed residual block, the first prediction block, and the second prediction block.

The first prediction block may be a block generated by the first prediction or a block representing the prediction signal generated by the first prediction. The second prediction block may be a block generated by the second prediction or a block representing the prediction signal generated by the second prediction.

When the second prediction is not used, the second prediction signal may be a null signal, and the second prediction block may be a null block. A null block may be a block in which the values of all pixels are "0".
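The null-block behavior can be sketched as follows. This is an illustrative sketch; the function name is an assumption:

```python
import numpy as np

def second_prediction_block(p2, shape):
    """Return the second prediction block, or a null block when unused.

    A null block has every pixel equal to 0, so adding it to the
    reconstruction leaves the result unchanged.
    """
    return p2 if p2 is not None else np.zeros(shape)
```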

For example, as described above with reference to FIG. 14, when the second prediction has been used for the encoding of the current block, the reconstructed residual signal may be the reconstructed second prediction error. Alternatively, as described above with reference to FIG. 11, when the second prediction has been used for the encoding of the current block, the reconstructed residual signal may be a signal in which the second prediction error has been reconstructed through inverse quantization and inverse transformation.

For example, as described above with reference to FIG. 14, when the second prediction has not been used for the encoding of the current block, the reconstructed residual signal may be the reconstructed first prediction error. Alternatively, as described above with reference to FIG. 11, when the second prediction has not been used for the encoding of the current block, the reconstructed residual signal may be a signal in which the first prediction error has been reconstructed through inverse quantization and inverse transformation.

The reconstructed block generation unit may generate the reconstructed block by performing inter prediction or intra prediction.

In one embodiment, the first prediction and the second prediction may be different types of prediction.

In one embodiment, the first prediction and the second prediction may be the same type of prediction. For example, each of the first prediction and the second prediction may be intra prediction. When the first prediction is intra prediction, the second prediction may also be set to intra prediction.

In one embodiment, the prediction direction of the first prediction and the prediction direction of the second prediction may be the same. The prediction direction of the second prediction may be set equal to the prediction direction of the first prediction.

For example, in one embodiment, each of the first prediction and the second prediction may be inter prediction.

In one embodiment, the block that is the target of the first prediction and the block that is the target of the second prediction may be different from each other. Here, the "block that is the target of prediction" may mean the block for which the prediction generates a prediction block.

For example, the type of the block that is the target of the first prediction and the type of the block that is the target of the second prediction may be different from each other. The type of a block may be one of a source block, a luma block, a chroma block, a depth block, and a residual block.

For example, the block that is the target of the first prediction may be a reconstructed block. The block that is the target of the second prediction may be a prediction error block.

A reference block may be used for prediction. In one embodiment, the first reference block of the first prediction and the second reference block of the second prediction may be different from each other. The reference blocks may differ from each other in terms of type and/or position.

In one embodiment, the position of the first reference block of the first prediction and the position of the second reference block of the second prediction may be different from each other. Here, the position of the first reference block may be a position relative to the block that is the target of the first prediction. The position of the second reference block may be a position relative to the block that is the target of the second prediction.

In one embodiment, there may be a plurality of first reference blocks of the first prediction. There may be a plurality of second reference blocks of the second prediction. At least some of the plurality of first reference blocks of the first prediction and at least some of the plurality of second reference blocks of the second prediction may be different from each other.

The positions of the plurality of second reference blocks of the second prediction may include at least one of the positions of the plurality of first reference blocks of the first prediction.

In one embodiment, the type of the first reference block of the first prediction and the type of the second reference block of the second prediction may be different from each other. The type of a reference block may be one of a reconstructed block, a reconstructed luma block, a reconstructed chroma block, a reconstructed depth block, a reconstructed first residual block, and a reconstructed second residual block.

For example, the first reference block of the first prediction may be a reconstructed block. The second reference block of the second prediction may be a reconstructed first residual block. The first reference sample of the first prediction may be a pixel of the reconstructed block. The second reference sample of the second prediction may be a pixel of the reconstructed first residual block.

For example, the first reference block of the first prediction may be a reconstructed neighboring block adjacent to the current block. The second reference block of the second prediction may be a reconstructed first residual block neighboring the current block.

The reconstructed first residual block neighboring the current block may be obtained by adding the second prediction block of the reconstructed neighboring block to the reconstructed residual block of the reconstructed neighboring block. Alternatively, the reconstructed first residual block neighboring the current block may be the difference between the reconstructed neighboring block and the first prediction block of the reconstructed neighboring block.

For example, when the target of the first prediction is the current block, a reconstructed block in the vicinity of the current block may be used as a reference block for the first prediction of the current block. When the target of the second prediction is a first residual block, a reconstructed first residual block in the vicinity of the current block or of the first residual block may be used as a reference block for the second prediction.

In one embodiment, the region of the first prediction and the region of the second prediction may be different from each other. Here, the "region of a prediction" may represent the region for which prediction values are generated within the block that is the target of the prediction. Alternatively, the "region of a prediction" may represent the region of the prediction block, corresponding to the block that is the target of the prediction, to which the prediction values generated by the prediction are assigned.

For example, the first prediction may be a prediction for the entire area of the current block. The second prediction may be a prediction for only a part of the area. At least a part of the area of the current block may be excluded from the second prediction. The block that is the target of the first prediction may be a square region. The second prediction may be made for only a part of the square region. At least a part of the square region may not be the target of the second prediction.

Even when the area of the block that is the target of the first prediction and the area of the block that is the target of the second prediction are the same, the region of the first prediction may be the whole of the block that is the target of the first prediction, whereas the region of the second prediction may be only a part of the block that is the target of the second prediction. Alternatively, the second prediction may be skipped for at least a part of the block that is the target of the second prediction. That is to say, the first prediction may be performed for the whole of the block that is the target of the first prediction. The second prediction may be performed only for a selected part of the block that is the target of the second prediction.

For example, prediction values determined by the first prediction may be assigned only to the pixels within the region of the first prediction in the block that is the target of the first prediction. Prediction values of the first prediction may not be assigned to pixels outside the region of the first prediction in the block that is the target of the first prediction. Alternatively, a predefined value may be assigned to the pixels outside the region of the first prediction in the block that is the target of the first prediction. For example, the predefined value may be "0".

For example, prediction values determined by the second prediction may be assigned only to the pixels within the region of the second prediction in the block that is the target of the second prediction. Prediction values of the second prediction may not be assigned to pixels outside the region of the second prediction in the block that is the target of the second prediction. Alternatively, a predefined value may be assigned to the pixels outside the region of the second prediction in the block that is the target of the second prediction. For example, the predefined value may be "0".

For example, when the size of the block that is the target of the first prediction is NxN, prediction values determined by the first prediction may be assigned to each of the NxN pixels. That is to say, prediction values determined by the first prediction may be assigned to all of the pixels in the block that is the target of the first prediction.

For example, when the size of the block that is the target of the second prediction is NxN, prediction values determined by the second prediction may not be assigned to one or more of the NxN pixels. Alternatively, when the size of the block that is the target of the second prediction is NxN, a predefined value may be assigned to one or more of the NxN pixels. For example, the predefined value may be "0".
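The region-restricted assignment of prediction values described above can be sketched as follows. This is an illustrative sketch; the mask-based formulation and the names are assumptions, not part of the embodiment:

```python
import numpy as np

def apply_region_prediction(pred_values, region_mask, default=0):
    """Keep prediction values only inside the region of the prediction.

    pred_values: NxN prediction values computed by the prediction.
    region_mask: boolean NxN mask; True marks the region of the prediction.
    default:     predefined value assigned outside the region ("0" here).
    """
    return np.where(region_mask, pred_values, default)
```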

In one embodiment, the region of the first prediction may be determined based on the type of the first prediction. For example, the region of the first prediction may be determined based on whether the first prediction is inter prediction. Alternatively, the region of the first prediction may be determined based on whether the first prediction is intra prediction. Alternatively, the region of the first prediction may be determined based on the prediction direction of the first prediction.

In one embodiment, the region of the second prediction may be determined based on the type of the second prediction. For example, the region of the second prediction may be determined based on whether the second prediction is inter prediction. Alternatively, the region of the second prediction may be determined based on whether the second prediction is intra prediction. Alternatively, the region of the second prediction may be determined based on the prediction direction of the second prediction.

In one embodiment, the second prediction may be performed using second reference samples in a selected range.

In one embodiment, the area of the first reference samples used for the first prediction and the area of the second reference samples used for the second prediction may be different from each other.

As described above, the region of the first prediction and the region of the second prediction may be different from each other. The area of the first reference samples may be the area of the reference samples required for generating the prediction values for the region of the first prediction in the first prediction. The area of the second reference samples may be the area of the reference samples required for generating the prediction values for the region of the second prediction in the second prediction.

As described above, the region of the first prediction may be the whole of the block that is the target of the first prediction. The region of the second prediction may be a part of the block that is the target of the second prediction. In this case, the area of the second reference samples may be a part of the area of the first reference samples.

FIG. 18 is a flowchart of a method of generating a reconstructed block, according to an embodiment.

Step 1740, described above with reference to FIG. 17, may include the following steps 1810, 1820, 1830, 1840, and 1850.

The second prediction may be selectively performed on the basis of a predefined criterion.

In step 1810, the reconstructed block generation unit may determine whether to use the second prediction for the generation of the reconstructed block.

The reconstructed block generation unit may determine whether to use the second prediction for the generation of the reconstructed block on the basis of a predefined criterion.

In one embodiment, the reconstructed block generation unit may determine whether to use the second prediction based on the prediction mode of the first prediction.

For example, the reconstructed block generation unit may acquire, from the bitstream, intra-prediction mode information indicating the prediction mode of intra prediction. When the prediction mode of the first prediction is a non-directional prediction mode, the reconstructed block generation unit may not use the second prediction.

In one embodiment, the reconstructed block generation unit may determine whether to use the second prediction based on the position of the current block.

For example, the reconstructed block generation unit may not use the second prediction when the current block is adjacent to a boundary. For example, the reconstructed block generation unit may not use the second prediction when the current block is adjacent to a top boundary or a left boundary. The boundary may be a boundary of a picture, a boundary of a tile, or a boundary of a slice.

In one embodiment, the reconstructed block generation unit may determine whether to use the second prediction based on the number of reconstructed blocks adjacent to the current block. The reconstructed block may be a PU.

For example, when the number of reconstructed blocks adjacent to the current block is less than 2, the reconstructed block generation unit may not use the second prediction.

In one embodiment, in addition to the determinations of whether to use the second prediction made based on the prediction mode, the position of the current block, and/or the number of reconstructed blocks adjacent to the current block, the reconstructed block generation unit may determine whether to use the second prediction based on whether the second prediction has been used in the encoding of the current block.

For example, the reconstructed block generation unit may acquire the second prediction usage information from the bitstream. When the second prediction usage information indicates that the second prediction is used, the reconstructed block generation unit may use the second prediction. When the second prediction usage information indicates that the second prediction is not used, the reconstructed block generation unit may not use the second prediction.
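The criteria of step 1810 can be combined as in the following sketch. This is illustrative only; the argument names, and the order in which the criteria are checked, are assumptions rather than part of the embodiment:

```python
def use_second_prediction(is_directional_mode, at_boundary,
                          num_reconstructed_neighbors, usage_flag):
    """Combine the decoder-side criteria for using the second prediction.

    usage_flag: value of the second prediction usage information parsed
    from the bitstream; 1 means the second prediction was used.
    """
    if not is_directional_mode:           # non-directional first prediction
        return False
    if at_boundary:                       # block touches a top/left boundary
        return False
    if num_reconstructed_neighbors < 2:   # too few reconstructed neighbors
        return False
    return usage_flag == 1
```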

When the second prediction is used for the decoding of the current block, step 1820 may be performed.

When the second prediction is not used for the decoding of the current block, step 1840 may be performed. When the second prediction is not used, the reconstructed residual signal of the current block may be the reconstructed first prediction error.

In step 1820, the reconstructed block generation unit may generate a second prediction signal by performing the second prediction.

The second prediction may correspond to the second prediction in the encoding of the current block described above with reference to FIGS. 11 and 13. For example, the second prediction signal may correspond to the second prediction signal in the encoding of the current block described above with reference to FIGS. 11 and 13.

The second prediction signal may represent a second prediction block.

The second prediction may be intra prediction, and the prediction direction of the second prediction may be identical to the prediction direction of the first prediction.

The reference block of the second prediction may be a reconstructed residual block. For example, the reconstructed block generation unit may generate the second prediction signal by performing the second prediction using, as a reference block, the reconstructed residual block of a block neighboring the current block.

In one embodiment, as described above, the region of the second prediction may be a part of the area of the current block. The reconstructed block generation unit may acquire, from the bitstream, second prediction region information indicating the region of the second prediction. The reconstructed block generation unit may perform the second prediction for the region of the second prediction.

In one embodiment, the region to which the second prediction is applied may be specified based on the range of the reference samples used in the second prediction. The region to which the second prediction is applied may be specified based on the prediction direction of the second prediction. For example, the region to which the second prediction is applied may be specified by the range of the reference samples used in the second prediction and the prediction direction of the second prediction.

In one embodiment, the reconstructed block generation unit may perform the second prediction using reference samples in a selected range among the total reference samples of the prediction direction. In the second prediction, the values of the reference samples outside the selected range may be regarded as "0".

In one embodiment, the reconstructed block generation unit may acquire, from the bitstream, information α indicating the first reference sample used for the second prediction and information indicating the number β of the reference samples used for the second prediction. The range of the reference samples used in the second prediction may be specified by α and β.

In one embodiment, the range of the reference samples may be specified by information α indicating the first reference sample to be used for the second prediction and information β indicating the number of reference samples used for the second prediction. The region to which the second prediction is applied may be the region generated as the reference samples in the range between α and α + β progress along the prediction direction.

In the second prediction, the values of reference samples outside the selected range may be considered to be null (NULL) or "0". For example, the values of reference samples before α may be considered to be 0, and the values of reference samples after α + β may be considered to be 0.
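
As a minimal sketch of the reference-sample masking described above, the following treats the reference samples along the prediction direction as a plain list; the function name and list representation are illustrative assumptions, not part of the patent.

```python
def mask_reference_samples(ref_samples, alpha, beta):
    """Keep only the reference samples in the selected range [alpha, alpha + beta).

    ref_samples: reference-sample values ordered along the prediction direction.
    alpha: index of the first reference sample used for the second prediction.
    beta: number of reference samples used for the second prediction.
    Samples before alpha and at or after alpha + beta are regarded as 0.
    """
    return [v if alpha <= i < alpha + beta else 0
            for i, v in enumerate(ref_samples)]

# The second prediction then reads only the samples inside the selected range.
masked = mask_reference_samples([7, 3, 5, 9, 2, 4], alpha=1, beta=3)
print(masked)  # [0, 3, 5, 9, 0, 0]
```

With α = 1 and β = 3, only the three samples starting at index 1 survive; everything outside the signaled range is treated as 0, matching the null/zero convention above.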

In step 1830, the reconstructed block generation unit may generate the reconstructed first prediction error based on the reconstructed residual signal and the second prediction signal. The reconstructed block generation unit may generate the reconstructed first prediction error by adding the second prediction signal to the reconstructed residual signal.

The reconstructed block generation unit may generate the reconstructed first prediction error of the current block based on the reconstructed residual signal and the second prediction signal for a partial region.

The reconstructed first prediction error may be the sum of the reconstructed residual signal and the second prediction signal. Alternatively, the reconstructed first prediction error may be generated based on the reconstructed residual signal and the second prediction signal. Alternatively, the reconstructed first prediction error may be generated based on the sum of the reconstructed residual signal and the second prediction signal.
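
A minimal sketch of step 1830 as a per-sample sum, under the assumption that signals are represented as flat lists of sample values; the function name is illustrative, not from the patent.

```python
def reconstruct_first_prediction_error(recon_residual, second_pred):
    """Step 1830: add the second prediction signal to the reconstructed
    residual signal to obtain the reconstructed first prediction error."""
    return [r + p for r, p in zip(recon_residual, second_pred)]

e1 = reconstruct_first_prediction_error([2, -1, 0, 3], [1, 1, -2, 0])
print(e1)  # [3, 0, -2, 3]
```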

In step 1840, the reconstructed block generation unit may generate the first prediction signal by performing the first prediction on the current block.

The first prediction signal may represent the first prediction block.

In the first prediction, a reconstructed block neighboring the current block may be used as a reference block. Further, pixels of the reconstructed block neighboring the current block may be used as reference samples.

In step 1850, the reconstructed block generation unit may generate the signal of the reconstructed block based on the reconstructed first prediction error and the first prediction signal. The reconstructed block generation unit may generate the signal of the reconstructed block by adding the first prediction signal to the reconstructed first prediction error.

According to steps 1810, 1820, 1830, 1840 and 1850 described above, when the second prediction is used, the reconstructed block may be the sum of the reconstructed residual signal, the second prediction signal and the first prediction signal. Alternatively, when the second prediction is used, the signal indicating the reconstructed block may be generated based on the reconstructed residual signal, the second prediction signal and the first prediction signal.

According to steps 1810, 1820, 1830, 1840 and 1850 described above, when the second prediction is not used, the signal indicating the reconstructed block may be the sum of the reconstructed residual signal and the first prediction signal. Alternatively, when the second prediction is not used, the signal indicating the reconstructed block may be generated based on the sum of the reconstructed residual signal and the first prediction signal.
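
The two reconstruction rules above can be sketched together as follows; the per-sample list representation and the function name are illustrative assumptions, not part of the patent.

```python
def reconstruct_block(recon_residual, first_pred, second_pred=None):
    """Signal of the reconstructed block per the steps above:
    residual + second prediction signal (when used) + first prediction signal.
    Passing second_pred=None models the case where the second prediction
    is not used."""
    if second_pred is None:
        return [r + p1 for r, p1 in zip(recon_residual, first_pred)]
    return [r + p2 + p1
            for r, p2, p1 in zip(recon_residual, second_pred, first_pred)]

print(reconstruct_block([1, 2], [10, 20]))           # [11, 22]
print(reconstruct_block([1, 2], [10, 20], [3, -1]))  # [14, 21]
```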

The reconstructed block and the reconstructed first residual signal may be used as reference blocks for the decoding of another block.

Figure 19 illustrates a decoding process according to an embodiment.

Referring to FIG. 19, the attached symbol

Figure PCTKR2017000965-appb-I000013

may express that a combined signal of two signals is produced. At the + symbol, a sum signal of the two input signals may be generated by adding the signals input to the symbol.

With reference to FIG. 19, the reconstructed block generation unit described above may comprise a first predictor and a second predictor.

The process within the dotted line may represent the SSIP.

A diamond may represent a branch according to a condition.

(TQ)-1 may represent the inverse transformation and inverse quantization.

Figure PCTKR2017000965-appb-I000014
may be a prediction error. Alternatively,
Figure PCTKR2017000965-appb-I000015
may be a reconstructed residual signal.

Figure PCTKR2017000965-appb-I000016
may represent the signal of the reconstructed first prediction error.

Figure PCTKR2017000965-appb-I000017
may represent the signal of the reconstructed second prediction error. When the second prediction is used,
Figure PCTKR2017000965-appb-I000018
may be identical to
Figure PCTKR2017000965-appb-I000019
. Alternatively, when the second prediction is used,
Figure PCTKR2017000965-appb-I000020
may be
Figure PCTKR2017000965-appb-I000021
. When the second prediction is not used,
Figure PCTKR2017000965-appb-I000022
may be identical to
Figure PCTKR2017000965-appb-I000023
. Alternatively, when the second prediction is not used,
Figure PCTKR2017000965-appb-I000024
may be
Figure PCTKR2017000965-appb-I000025
.

Figure PCTKR2017000965-appb-I000026
may represent the first prediction signal.
Figure PCTKR2017000965-appb-I000027
may represent the second prediction signal.

According to the embodiment described above, when the second prediction is used, the current block may be separated into a selected portion to which the second prediction is applied and a remaining portion to which only the first prediction is applied.

In one embodiment, when the second prediction is used, the first prediction may be applied to the whole of the current block. In this case, the reconstructed block for the selected portion to which the second prediction is applied may be generated based on the sum of the reconstructed residual block, the second prediction signal generated by the second prediction, and the first prediction signal generated by the first prediction. Further, the reconstructed block for the remaining portion to which the second prediction is not applied may be generated based on the sum of the reconstructed residual block and the first prediction signal generated by the first prediction.

In this case, the second prediction may be a prediction for a residual. The second prediction may be a prediction for the first residual signal, which is the difference between the current block and the first prediction signal. The reference block of the second prediction may be a reconstructed residual block, and the reference pixels of the second prediction may be pixels of the reconstructed residual block. That is to say, through the second prediction, a second residual signal, which is the difference between the first residual signal and the second prediction signal, may be generated, and the encoded information for the current block may be generated using the second residual signal instead of the first residual signal.
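
The encoder-side relationship described above (first residual = current block − first prediction signal; second residual = first residual − second prediction signal) can be sketched as follows; the names and per-sample list representation are illustrative assumptions, not part of the patent.

```python
def second_residual(current, first_pred, second_pred):
    """Sketch of the residual-of-residual construction: the first residual is
    the difference between the current block and the first prediction signal,
    and the second residual is the difference between the first residual and
    the second prediction signal."""
    first_res = [c - p1 for c, p1 in zip(current, first_pred)]
    return [r - p2 for r, p2 in zip(first_res, second_pred)]

print(second_residual([50, 60], [48, 57], [1, 2]))  # [1, 1]
```

When the second prediction tracks the first residual well, the second residual is smaller than the first, which is what makes encoding it instead of the first residual attractive.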

Alternatively, in this case, the reconstructed block for the selected portion to which the second prediction is applied may be a weighted sum (weighted-sum) of the reconstructed residual block, the second prediction signal and the first prediction signal. The information relating to the prediction may comprise a first weight for the reconstructed residual block, a second weight for the second prediction signal and a third weight for the first prediction signal. When the second prediction is not used, the second weight of the second prediction signal may be considered to be set to 0. Alternatively, the second weight of the second prediction signal for the remaining portion to which the second prediction is not applied may be considered to be set to 0.
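
A minimal sketch of the weighted-sum reconstruction above, with per-sample lists and an illustrative function name (assumptions, not from the patent); setting the second weight to 0 recovers the case where the second prediction is not applied.

```python
def weighted_reconstruction(recon_residual, second_pred, first_pred,
                            w1, w2, w3):
    """Weighted sum of the reconstructed residual block (weight w1), the
    second prediction signal (weight w2) and the first prediction signal
    (weight w3) for the portion where the second prediction applies."""
    return [w1 * r + w2 * p2 + w3 * p1
            for r, p2, p1 in zip(recon_residual, second_pred, first_pred)]

# With w2 = 0 the result equals residual + first prediction only.
print(weighted_reconstruction([1, 2], [5, 5], [10, 20], 1.0, 0.0, 1.0))  # [11.0, 22.0]
```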

The description of the encoding of the current block above may be applied to the decoding of the current block. Duplicate explanation will be omitted. Further, the description of the decoding of the current block above may be applied to the encoding of the current block. Duplicate explanation will be omitted.

In the above embodiments, the methods have been described with reference to flowcharts as a series of steps or units; however, the present invention is not limited to the order of the steps, and some steps may occur in different orders or at the same time as other steps described above. In addition, those skilled in the art will appreciate that the steps in the flowcharts are not exclusive, that other steps may be included, or that one or more steps of a flowchart may be deleted without affecting the scope of the invention.

The embodiments according to the present invention described above may be implemented in the form of program instructions that can be executed through various computer components and recorded in a computer-readable recording medium. The computer-readable recording medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the computer-readable recording medium may be specially designed for the present invention or may be well known and available to those skilled in computer software. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tape, optical recording media such as CD-ROM and DVD, magneto-optical media such as floptical disks, and hardware devices specially configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include both machine code, such as that produced by a compiler, and higher-level code that may be executed by a computer using an interpreter. The hardware devices may be configured to act as one or more software modules in order to perform the processes according to the invention, and vice versa.

While the invention has been described above with specific details, such as specific components, exemplary embodiments and drawings, these are provided only to assist an overall understanding of the invention; the present invention is not limited to the above embodiments, and those of ordinary skill in the art can make various changes and modifications from this description.

Accordingly, the scope of the invention should not be restricted to the embodiments described above, but should be defined by the claims described later and their equivalents; all modifications equivalent to the claims fall within the scope of the spirit of the invention.

Claims (20)

  1. A coding method comprising:
    generating a residual signal of a current block based on a first prediction and a second prediction for the current block; and
    generating information for residual signal coding by performing encoding on the residual signal,
    wherein the second prediction is a prediction for a residual error of the first prediction.
  2. A decoding apparatus comprising:
    a reconstructed residual signal generation unit for generating a reconstructed residual signal of a current block; and
    a reconstructed block generation unit for generating a reconstructed block of the current block based on the reconstructed residual signal, a first prediction and a second prediction.
  3. A decoding method comprising:
    generating a reconstructed residual signal of a current block; and
    generating a reconstructed block of the current block based on the reconstructed residual signal, a first prediction and a second prediction.
  4. The method of claim 3,
    wherein the reconstructed block corresponds to a sum of the reconstructed residual signal, a first prediction signal generated by the first prediction, and a second prediction signal generated by the second prediction.
  5. The method of claim 3,
    wherein the first prediction and the second prediction are intra predictions.
  6. The method of claim 2,
    wherein a prediction direction of the second prediction is identical to a prediction direction of the first prediction.
  7. The method of claim 3,
    wherein at least some of a plurality of first reference blocks of the first prediction and a plurality of second reference blocks of the second prediction are different from each other.
  8. The method of claim 3,
    wherein a type of a first reference block of the first prediction and a type of a second reference block of the second prediction are different from each other.
  9. The method of claim 3,
    wherein a first reference block of the first prediction is a reconstructed neighboring block adjacent to the current block,
    a second reference block of the second prediction is a reconstructed residual block of a neighboring block adjacent to the current block, and
    the reconstructed residual block is a difference between the reconstructed neighboring block and a first prediction block of the reconstructed neighboring block.
  10. The method of claim 3,
    wherein a region of reference samples of the second prediction is a part of a region of reference samples of the first prediction.
  11. The method of claim 3,
    wherein a region to which the second prediction is applied is specified based on a range of reference samples used for the second prediction.
  12. The method of claim 3,
    wherein a region to which the second prediction is applied is specified based on a prediction direction of the second prediction.
  13. The method of claim 11,
    wherein the range of the reference samples is specified by information indicating a first reference sample to be used for the second prediction and information indicating a number of reference samples used for the second prediction.
  14. The method of claim 11,
    wherein the region to which the second prediction is applied is a region generated as the reference samples in the range progress along the prediction direction of the second prediction.
  15. The method of claim 11,
    wherein a value of a reference sample outside the range is considered to be null or 0.
  16. The method of claim 3,
    wherein, when second prediction usage information indicating whether the second prediction is used for coding the current block indicates that the second prediction is not used, the second prediction is not used.
  17. The method of claim 3,
    wherein, when a prediction mode of the first prediction is a non-directional mode, the second prediction is not used.
  18. The method of claim 3,
    wherein, when the current block is adjacent to an upper boundary or a left boundary, the second prediction is not used, and
    the boundary is a boundary of a picture, a boundary of a tile, or a boundary of a slice.
  19. The method of claim 3,
    wherein whether the second prediction is used is determined based on a number of reconstructed blocks neighboring the current block.
  20. The method of claim 3,
    wherein, when the second prediction is not used, the reconstructed block is generated based on the reconstructed residual signal and a first prediction signal generated by the first prediction.
PCT/KR2017/000965 2016-01-27 2017-01-26 Method and device for encoding and decoding video by using prediction WO2017131473A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
KR10-2016-0010260 2016-01-27
KR20160010260 2016-01-27
KR10-2017-0012160 2017-01-25
KR20170012160A KR20170089777A (en) 2016-01-27 2017-01-25 Method and apparatus for encoding and decoding video using prediction

Publications (1)

Publication Number Publication Date
WO2017131473A1 true true WO2017131473A1 (en) 2017-08-03

Family

ID=59398440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2017/000965 WO2017131473A1 (en) 2016-01-27 2017-01-26 Method and device for encoding and decoding video by using prediction

Country Status (1)

Country Link
WO (1) WO2017131473A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090095407A (en) * 2008-03-05 2009-09-09 경희대학교 산학협력단 Method and apparatus for up/down sampling of image block in the integer DCT domain
JP2010045853A (en) * 2009-11-20 2010-02-25 Mitsubishi Electric Corp Moving image coding apparatus and moving image coding method
KR101411315B1 (en) * 2007-01-22 2014-06-26 삼성전자주식회사 Method and apparatus for intra/inter prediction
KR20150027530A (en) * 2013-09-04 2015-03-12 한국전자통신연구원 High efficiency video coding intra frame prediction apparatus and method thereof
KR20150093633A (en) * 2010-11-08 2015-08-18 성균관대학교산학협력단 Methods of encoding and decoding using multi-level prediction and apparatuses for using the same



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17744594

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE