WO2016072775A1 - Video encoding method and apparatus, and video decoding method and apparatus - Google Patents


Info

Publication number
WO2016072775A1
WO2016072775A1 (PCT/KR2015/011873)
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
inter
weight
intra
unit
Prior art date
Application number
PCT/KR2015/011873
Other languages
French (fr)
Korean (ko)
Inventor
박찬율
민정혜
Original Assignee
삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Priority date
Filing date
Publication date
Priority to US 62/075,987 (US201462075987P)
Application filed by 삼성전자 주식회사 (Samsung Electronics Co., Ltd.)
Publication of WO2016072775A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103: Selection of coding mode or of prediction mode
    • H04N19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/107: Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • H04N19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136: Incoming video signal characteristics or properties
    • H04N19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139: Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: Adaptive coding in which the coding unit is an image region, e.g. an object
    • H04N19/176: Adaptive coding in which the region is a block, e.g. a macroblock
    • H04N19/182: Adaptive coding in which the coding unit is a pixel
    • H04N19/186: Adaptive coding in which the coding unit is a colour or a chrominance component
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information by compressing encoding parameters before transmission
    • H04N19/50: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503: Predictive coding involving temporal prediction
    • H04N19/51: Motion estimation or motion compensation
    • H04N19/523: Motion estimation or motion compensation with sub-pixel accuracy
    • H04N19/593: Predictive coding involving spatial prediction techniques

Abstract

The present invention discloses a method and an apparatus that perform combined prediction by obtaining weights for intra-prediction and inter-prediction, taking into account one or more of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of inter-prediction and intra-prediction. A video decoding method according to the present invention comprises the steps of: parsing, from a bitstream, combined prediction information indicating whether to predict the current block by combining intra-prediction and inter-prediction; determining whether to perform combined prediction for the current block based on the combined prediction information; if combined prediction is performed, obtaining a first prediction value by performing inter-prediction on the current block and obtaining a second prediction value by performing intra-prediction on the current block; determining a weight for the inter-prediction and a weight for the intra-prediction based on one or more of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of inter-prediction and intra-prediction; and performing combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value.

Description

Video encoding method and apparatus and video decoding method and apparatus

This disclosure relates to still-image and video encoding and decoding that combine the results of intra-prediction and inter-prediction.

As hardware capable of reproducing and storing high-resolution and high-quality video content has been developed and distributed, the need for a video codec that effectively encodes or decodes such content has increased.

A conventional video codec predicts the current block using only one of intra prediction and inter prediction. Intra prediction is a prediction technique that uses only spatial references, whereas inter prediction is a coding method that removes data redundancy by referring to previously encoded pictures. Because a conventional codec uses only a spatial reference or only a temporal reference, it cannot reflect both the spatial and the temporal characteristics of the video.

This disclosure presents a method and an apparatus for obtaining the weights for intra-prediction and inter-prediction in combined prediction, taking into account at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of inter-prediction and intra-prediction. It also discloses a technique for applying combined prediction within a video codec with a flexible block structure.

A video decoding method according to an embodiment of the present disclosure includes the steps of: parsing, from the bitstream, combined prediction information (combine prediction information) indicating whether to predict the current block by combining intra-prediction and inter-prediction; determining, based on the combined prediction information, whether to perform combined prediction for the current block; when combined prediction is performed, obtaining a first prediction value by performing the inter-prediction on the current block and obtaining a second prediction value by performing the intra-prediction on the current block; determining a weight for the inter-prediction and a weight for the intra-prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction; and performing the combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value.

The video decoding method according to an embodiment of the present disclosure may further include: parsing information about the available modes from the bitstream; selecting, based on the information about the available modes, the available modes from among a plurality of modes associated with prediction directions in the intra-prediction; and determining a weight for each of the available modes.

The video decoding method according to an embodiment of the present disclosure may further include: parsing information about the available modes from the bitstream; selecting, based on the information about the available modes, the available modes from among a plurality of modes corresponding to a plurality of reference blocks that the current block references in the inter-prediction; and determining a weight for each of the available modes.

Performing the combined prediction may include calculating (weight for the inter-prediction × first prediction value) + (weight for the intra-prediction × second prediction value).

Performing the combined prediction may include performing the combined prediction for the luma (luminance) channel, and performing only one of the inter-prediction and the intra-prediction for the chroma (chrominance) channels.
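A minimal sketch of this per-channel split in Python. The function name, the flat sample lists, and the equal weights are illustrative assumptions, not taken from the disclosure:

```python
def predict_channels(inter_y, intra_y, inter_c, intra_c, w_inter, use_inter_for_chroma=True):
    """Combined prediction on the luma channel; a single prediction
    mode (inter OR intra) on the chroma channel, as described above."""
    w_intra = 1.0 - w_inter
    # Luma: weighted blend of the two prediction signals.
    luma = [w_inter * a + w_intra * b for a, b in zip(inter_y, intra_y)]
    # Chroma: pick exactly one of the two prediction signals.
    chroma = list(inter_c) if use_inter_for_chroma else list(intra_c)
    return luma, chroma

luma, chroma = predict_channels([100, 104], [96, 112], [50, 52], [48, 54], w_inter=0.5)
```

With equal weights, each luma sample is the average of the two predictions, while the chroma samples are copied unchanged from the chosen mode.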

The video decoding method according to an embodiment of the present disclosure may further include: parsing information about the accuracy of a motion vector from the bitstream; and, based on the information about the accuracy of the motion vector, setting the accuracy of the motion vector for the inter-prediction of the current block to one of half-pel, integer-pel, and 2-pel.
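The accuracy setting can be sketched as rounding a motion-vector component to the chosen grid. The quarter-pel internal representation and the integer rounding scheme below are assumptions for illustration (the sketch handles non-negative components only):

```python
# Assumed convention: motion vectors are stored internally in quarter-pel units.
UNITS_PER_PEL = 4
STEP = {
    "half-pel": UNITS_PER_PEL // 2,   # 2 quarter-pel units
    "integer-pel": UNITS_PER_PEL,     # 4 quarter-pel units
    "2-pel": 2 * UNITS_PER_PEL,       # 8 quarter-pel units
}

def set_mv_accuracy(mv_qpel, accuracy):
    """Round a non-negative quarter-pel motion-vector component to the
    nearest multiple of the selected accuracy step."""
    step = STEP[accuracy]
    return (mv_qpel + step // 2) // step * step

mv = 13  # 3.25 pels in quarter-pel units
```

Lowering the accuracy in this way shrinks the range of distinct motion-vector values, which in turn reduces the bits needed to signal them.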

Determining the weights may include: parsing weight information for the current block from the bitstream; and determining the weight for the inter-prediction and the weight for the intra-prediction based on the weight information.

The current block includes a prediction unit used in the inter-prediction and a prediction unit used in the intra-prediction, and the prediction unit used in the inter-prediction may be determined independently of the prediction unit used in the intra-prediction.

Determining the weights may include: determining a reference weight, which is the initial weight for the inter-prediction; determining a reference distance for the distance between the reference picture of the inter-prediction and the current picture containing the current block; determining the difference between the reference distance and the actual distance between the reference picture of the inter-prediction and the current picture containing the current block; and determining the weight for the inter-prediction based on the reference weight and the distance difference.

A program implementing the video decoding method according to an embodiment of the present disclosure may be recorded on a computer-readable recording medium.

A video decoding apparatus according to an embodiment of the present disclosure may include: a receiver that parses, from the bitstream, the combined prediction information (combine prediction information) indicating whether to predict the current block by combining intra-prediction and inter-prediction; and a decoder that determines, based on the combined prediction information, whether to perform combined prediction for the current block, obtains, when combined prediction is performed, a first prediction value by performing the inter-prediction on the current block and a second prediction value by performing the intra-prediction on the current block, determines the weight for the inter-prediction and the weight for the intra-prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction, and performs the combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value.

A video encoding method according to an embodiment of the present disclosure includes the steps of: obtaining a first prediction value by performing inter-prediction on the current block; obtaining a second prediction value by performing intra-prediction on the current block; determining a weight for the inter-prediction and a weight for the intra-prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction; performing combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value; determining combined prediction information (combine prediction information) indicating whether combined prediction is performed for the current block; and transmitting a bitstream including at least one of the combined prediction information and weight information about the weights used.

The method may further include entropy-encoding at least one of the combined prediction information and the weight information at a lower position in the bitstream than the results of the intra-prediction and inter-prediction.

Determining the weights may include determining the weights based on the sample values of the original pixels in the current block, the first prediction value, and the second prediction value.

Determining the weights may include calculating the weights based on the ratio between the expected value of the original pixel sample values and the first prediction value, and the ratio between the expected value of the original pixel sample values and the second prediction value.
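An encoder-side weight estimate can be sketched, for example, as a least-squares fit of the original samples to the two prediction signals. This is one plausible realization for illustration only; the disclosure itself describes ratios of expected sample values, and the function below is hypothetical:

```python
def estimate_inter_weight(orig, pred_inter, pred_intra):
    """Least-squares estimate of the inter-prediction weight w in
    orig ≈ w * pred_inter + (1 - w) * pred_intra, clipped to [0, 1].
    A sketch: one possible encoder-side weight derivation."""
    num = sum((o - q) * (p - q) for o, p, q in zip(orig, pred_inter, pred_intra))
    den = sum((p - q) ** 2 for p, q in zip(pred_inter, pred_intra))
    if den == 0:
        return 0.5  # both predictions identical; any weight is equivalent
    return min(1.0, max(0.0, num / den))

# The inter-prediction matches the original exactly here, so the
# estimate assigns it the full weight.
w = estimate_inter_weight([100, 110], [100, 110], [90, 100])
```

The intra-prediction weight would then be 1 minus the estimated inter-prediction weight, matching Equation (1).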

A video encoding apparatus according to an embodiment of the present disclosure may include: an encoder that obtains a first prediction value by performing inter-prediction on the current block, obtains a second prediction value by performing intra-prediction on the current block, determines a weight for the inter-prediction and a weight for the intra-prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction, performs combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value, and determines combined prediction information (combine prediction information) indicating whether combined prediction is performed for the current block; and a transmitter that transmits a bitstream including at least one of the combined prediction information and weight information about the weights used.

Figure 1 shows a flow diagram of an image decoding method according to an embodiment of the present disclosure.

Figure 2 shows a block diagram of an image decoding apparatus according to an embodiment of the present disclosure.

Figure 3 illustrates a flow diagram of the image encoding method according to an embodiment of the present disclosure.

Figure 4 illustrates a block diagram of a video encoder according to an embodiment of the present disclosure.

Figure 5 shows a flow chart of the image encoding method according to an embodiment of the present disclosure.

Figure 6 illustrates a flow diagram of an image decoding method according to an embodiment of the present disclosure.

Figure 7 is a flow chart of combined prediction, in accordance with an embodiment of the present disclosure.

Figure 8 illustrates a method of encoding the available prediction modes, in accordance with an embodiment of the present disclosure.

Figure 9 illustrates a method of lowering the accuracy of a motion vector, in accordance with an embodiment of the present disclosure.

Figure 10 illustrates the concept of coding units according to an embodiment of the present disclosure.

Figure 11 shows a block diagram of an image encoder based on coding units, according to one embodiment of the present disclosure.

Figure 12 shows a block diagram of an image decoder based on coding units, according to one embodiment of the present disclosure.

Figure 13 shows coding units according to depth, and partitions, according to one embodiment of the present disclosure.

Figure 14 illustrates a relationship between a coding unit and a transformation unit, according to one embodiment of the present disclosure.

Figure 15 illustrates encoding information according to depth, in accordance with one embodiment of the present disclosure.

Figure 16 shows coding units according to depth, according to an embodiment of the present disclosure.

Figure 17 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment of the present disclosure.

Figure 18 shows a relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment of the present disclosure.

Figure 19 illustrates the relationship between a coding unit, a prediction unit, and a transformation unit, according to an embodiment of the present disclosure.

Figure 20 illustrates the relationship between a coding unit, a prediction unit, and a transformation unit according to the encoding mode information of Table 2.

Hereinafter, a video encoding apparatus, a video decoding apparatus, a video encoding method, and a video decoding method according to embodiments of the present disclosure are described with reference to Figures 1 to 9.

Figure 1 shows a flow diagram of an image decoding method according to an embodiment of the present disclosure.

The video decoding method according to an embodiment may be performed by the video decoding apparatus 200. The video decoding apparatus 200 may perform step 110 of parsing, from the bitstream, the combined prediction information (combine prediction information) indicating whether to predict the current block by combining intra-prediction and inter-prediction. Based on the combined prediction information, the video decoding apparatus 200 may perform step 120 of determining whether to perform combined prediction for the current block. When combined prediction is performed, the video decoding apparatus 200 may perform step 130 of obtaining a first prediction value by performing the inter-prediction on the current block and obtaining a second prediction value by performing the intra-prediction on the current block. The video decoding apparatus 200 may perform step 140 of determining a weight for the inter-prediction and a weight for the intra-prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction. The video decoding apparatus 200 may then perform step 150 of performing the combined prediction based on the weight for the inter-prediction, the weight for the intra-prediction, the first prediction value, and the second prediction value.

The current block may be the basic block for encoding or decoding an image. The current block may also be a block for prediction, or a block for transformation. For example, the current block may be a coding unit, a prediction unit, or a transformation unit. Coding units, prediction units, and transformation units are described in more detail with reference to Figures 10 to 20.

Intra-prediction (or intra-picture prediction) is a prediction technique that uses only spatial references: it encodes a block by predicting the current block from nearby samples.

Inter-prediction (or inter-picture prediction) is a coding method that removes data redundancy by referring to previously encoded pictures. Because an image generally has higher temporal correlation than spatial correlation, referring to a previously encoded picture produces a prediction signal with higher similarity to the original.

Combined prediction is prediction that combines intra-prediction and inter-prediction. For example, the video decoding apparatus 200 may obtain the sample values of the first prediction value reconstructed by the inter-prediction and the sample values of the second prediction value reconstructed by the intra-prediction, multiply the first prediction value and the second prediction value by their respective weights, and add the weighted values to obtain the predicted sample values. This can be expressed as Equation (1) below.

Combined prediction value = (weight for the inter-prediction × first prediction value) + (weight for the intra-prediction × second prediction value) ... (1)
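Equation (1) can be sketched in Python as follows. The array shapes, sample values, and the 0.75/0.25 weights are illustrative assumptions; floating-point weights are used here for clarity, whereas a real codec would typically use fixed-point arithmetic:

```python
import numpy as np

def combined_prediction(inter_pred, intra_pred, w_inter, w_intra):
    """Blend inter- and intra-prediction blocks per Equation (1).

    inter_pred, intra_pred: 2-D arrays of predicted sample values.
    w_inter, w_intra: weights, assumed here to sum to 1.0.
    """
    return w_inter * inter_pred + w_intra * intra_pred

block_inter = np.array([[100, 102], [104, 106]], dtype=np.float64)
block_intra = np.array([[ 90,  98], [110, 118]], dtype=np.float64)
blended = combined_prediction(block_inter, block_intra, 0.75, 0.25)
```

Each output sample lies between the two prediction values, pulled toward whichever prediction holds the larger weight.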

Inter-prediction uses the correlation between pictures. Therefore, as the distance between the current picture containing the current block and the reference picture containing the reference block increases, the accuracy of the prediction can decrease. In such a case, the video decoding apparatus 200 can predict the sample values with higher accuracy by giving a larger weight to the intra-prediction, which uses information within the same picture.

Intra-prediction predicts the sample values of the current block using information from already-reconstructed blocks within the same picture. Because the already-reconstructed blocks are located to the left of or above the current block, the accuracy of intra-prediction can fall for sample values toward the lower right of the current block. In such a case, the video decoding apparatus 200 can predict the lower-right sample values with higher accuracy by giving a larger weight to the inter-prediction.
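One way to act on this observation is a per-sample weight map that raises the inter-prediction weight toward the lower right of the block. The linear ramp and the weight range below are illustrative assumptions, not a formula from the disclosure:

```python
def inter_weight_map(height, width, w_min=0.5, w_max=0.9):
    """Per-sample inter-prediction weights that grow toward the lower
    right, where intra-prediction (which references samples above and
    to the left) tends to be less accurate."""
    max_d = (height - 1) + (width - 1)  # largest distance from the top-left corner
    return [[w_min + (w_max - w_min) * (y + x) / max_d for x in range(width)]
            for y in range(height)]

wmap = inter_weight_map(2, 2)
```

The intra-prediction weight at each position would be 1 minus the value in this map, so the two weights still sum to 1 per sample.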

The combined prediction information indicates whether to predict by combining the intra-prediction and the inter-prediction, and may be a flag. For example, if the combined prediction information is '1', the video decoding apparatus 200 performs the combined prediction. If the combined prediction information is '0', the video decoding apparatus 200 does not perform the combined prediction; instead, it may perform intra-prediction or inter-prediction based on predetermined information.

The video decoding apparatus 200 according to an embodiment of the present disclosure may parse, from the bitstream, split information indicating whether the current coding unit is split to a lower depth. Based on the split information, the video decoding apparatus 200 may decide to split the current coding unit into smaller coding units. If the split information indicates that the current coding unit is no longer split, the video decoding apparatus 200 may parse the combined prediction information from the bitstream and determine, based on it, whether to perform the combined prediction.

The video decoding apparatus 200 according to another embodiment of the present disclosure may parse, from the bitstream, skip information indicating whether the current coding unit is skipped. The skip information indicates whether syntax elements other than the merge index are signaled. If the skip information indicates that the current coding unit is not skipped, the video decoding apparatus 200 may parse the combined prediction information from the bitstream and determine, based on it, whether to perform the combined prediction. If the skip information indicates that the current coding unit is skipped, the video decoding apparatus 200 may not parse the combined prediction information.
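The parsing conditions described above can be sketched as follows. The function and `read_bit` (a stand-in for the entropy decoder) are hypothetical, and the exact syntax order is an assumption:

```python
def parse_combined_prediction_flag(split_flag, skip_flag, read_bit):
    """Read the combined prediction flag only for a leaf (non-split),
    non-skipped coding unit; otherwise no flag is present."""
    if split_flag:
        return None  # the coding unit is split further; recurse into sub-units instead
    if skip_flag:
        return None  # skip mode: no syntax elements beyond the merge index
    return read_bit()  # entropy-decode the combined prediction flag

flag = parse_combined_prediction_flag(split_flag=False, skip_flag=False, read_bit=lambda: 1)
```

Gating the flag on the split and skip conditions keeps it out of the bitstream whenever its value could never be used.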

The video decoding apparatus 200 may determine the weight for the inter-prediction and the weight for the intra-prediction based on at least one of the distance between the current picture and the reference picture, the size of the current block, and the characteristics of the inter-prediction and intra-prediction.

The video decoding apparatus 200 may determine a reference weight, which is the initial weight for the inter-prediction. The video decoding apparatus 200 may determine a reference distance for the distance between the reference picture of the inter-prediction and the current picture containing the current block. The video decoding apparatus 200 may determine the difference between the reference distance and the actual distance between the reference picture of the inter-prediction and the current picture containing the current block. The video decoding apparatus 200 may then determine the weight for the inter-prediction based on the reference weight and the distance difference.

The distance between the reference picture and the current picture can be represented by the difference in Picture Order Count (POC). The POC represents the relative output order of the pictures within the same CVS (Coded Video Sequence). The larger the difference between the POC of the reference picture and the POC of the current picture, the farther apart the reference picture and the current picture are. The video decoding apparatus 200 may apply a function that reduces the weight for the inter-prediction as the distance between the reference picture and the current picture increases. The function may be as in Equation (2).

Weight for the inter prediction = reference weight + (reference distance - distance between the reference picture and the current picture) * k ... (2)

Here, k is the amount of change in the weight per unit change in distance. k may be a positive real number. Based on this function, the image decoding apparatus 200 may reduce the weight for the inter prediction as the distance between the reference picture and the current picture increases. Conversely, the image decoding apparatus 200 may increase the weight for the inter prediction as the distance between the reference picture and the current picture decreases.
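As a sketch, the distance-based weight of Equation (2) can be written as follows. The POC-difference distance follows the text, while the clamping of the weight to [0, 1] and the default parameter values are illustrative assumptions, not part of the disclosure.

```python
def inter_weight(poc_current, poc_reference, reference_weight=0.5,
                 reference_distance=1, k=0.1):
    """Weight for the inter prediction per Equation (2).

    The distance is the absolute POC difference; clamping to [0, 1]
    is an assumption added for safety, not stated in the text.
    """
    distance = abs(poc_current - poc_reference)
    w = reference_weight + (reference_distance - distance) * k
    return min(1.0, max(0.0, w))
```

With a positive k, a reference picture farther from the current picture yields a smaller inter prediction weight, matching the behavior described above.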

The image decoding apparatus 200 may determine the reference weight. The reference weight may be the initial value of the weight used in the combined prediction. The reference weight may include a reference weight for the intra prediction and a reference weight for the inter prediction. The reference weight may be included in the bitstream. In that case, the image decoding apparatus 200 may parse the reference weight for the intra prediction, and may obtain the reference weight for the inter prediction by subtracting the reference weight for the intra prediction from 1. Alternatively, the reference weight may not be included in the bitstream, and may instead be a predetermined value preset in the image decoding apparatus 200 and the image encoding apparatus 400.

In addition, the image decoding apparatus 200 may determine the reference distance. The reference distance is the distance against which the distance between the current picture and the reference picture is compared. When the distance between the current picture and the reference picture is equal to the reference distance, the weight for the intra prediction may be equal to the reference weight for the intra prediction. Likewise, when the distance between the current picture and the reference picture is equal to the reference distance, the weight for the inter prediction may be equal to the reference weight for the inter prediction. The reference distance may be included in the bitstream. Alternatively, the reference distance may be a predetermined value preset in the image decoding apparatus 200 and the image encoding apparatus 400.

For example, the image decoding apparatus 200 may set the initial weight for the inter prediction and the initial weight for the intra prediction to 0.5 each. In addition, the image decoding apparatus 200 may increase the weight for the inter prediction by 0.1 each time the distance between the reference picture and the current picture becomes closer to the reference distance by a predetermined amount. Conversely, the image decoding apparatus 200 may decrease the weight for the intra prediction by 0.1.

In addition, the image decoding apparatus 200 may determine the weight for the inter prediction and the weight for the intra prediction based on the size of the current block. For example, when the size of the current block is larger than a predetermined size, the accuracy of the intra prediction may gradually decrease toward the lower-right of the current block. Thus, the larger the size of the current block, the more the image decoding apparatus 200 may increase the weight for the inter prediction. The image decoding apparatus 200 may thereby increase the influence of the inter prediction in the combined prediction.

For example, the image decoding apparatus 200 may set the initial weight for the inter prediction and the initial weight for the intra prediction to 0.5 each. In addition, when the size of the current block is larger than the predetermined size, the image decoding apparatus 200 may increase the weight for the inter prediction by 0.1.

Further, the image decoding apparatus 200 may determine the weight for the inter prediction and the weight for the intra prediction based on the characteristics of the inter prediction and the intra prediction. In the case of the inter prediction, as described above, the prediction accuracy may drop as the distance between the current picture including the current block and the reference picture including the reference block increases. In the case of the intra prediction, which predicts sample values from neighboring samples, the accuracy of the sample value prediction may decrease toward the lower-right side of the current block. Thus, when performing the combined prediction, the image decoding apparatus 200 may increase the weight for the inter prediction and decrease the weight for the intra prediction for pixels closer to the lower-right side of the current block. Conversely, when performing the combined prediction, the image decoding apparatus 200 may decrease the weight for the inter prediction and increase the weight for the intra prediction for pixels closer to the upper-left side of the current block.

For example, the image decoding apparatus 200 may set the initial weight for the inter prediction and the initial weight for the intra prediction to 0.5 each. In addition, when predicting the current block, the image decoding apparatus 200 may increase the weight for the inter prediction for pixels closer to the lower-right side.
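The position-dependent weighting described above can be sketched as follows. The disclosure only states the direction of the adjustment, so the linear ramp from the upper-left to the lower-right pixel and the size of the shift (`max_shift`) are illustrative assumptions.

```python
def position_weights(block_size, base_inter=0.5, max_shift=0.3):
    """Per-pixel weights: intra accuracy drops toward the lower-right,
    so weight is shifted toward inter prediction there. Returns two
    block_size x block_size matrices (inter weights, intra weights)."""
    inter_w = [[0.0] * block_size for _ in range(block_size)]
    intra_w = [[0.0] * block_size for _ in range(block_size)]
    max_pos = 2 * (block_size - 1)
    for j in range(block_size):        # j: y coordinate (row)
        for i in range(block_size):    # i: x coordinate (column)
            ramp = (i + j) / max_pos   # 0 at upper-left, 1 at lower-right
            inter_w[j][i] = base_inter + max_shift * (ramp - 0.5)
            intra_w[j][i] = 1.0 - inter_w[j][i]
    return inter_w, intra_w
```

At every pixel the two weights sum to 1, consistent with Equation (5) below.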

Although the reference weight is 0.5 in the examples above, the present disclosure is not limited thereto. The image decoding apparatus 200 may determine other values as the reference weights for the intra prediction and the inter prediction.

Further, the image decoding apparatus 200 may parse weight information for the current block from the bitstream. The weight information may include the reference weight, the weight for the inter prediction, and the weight for the intra prediction of the current block. For example, the weight information may include a weight for every pixel in the current block. Alternatively, the weight information may divide the current block into a plurality of areas and include a weight for each area. In addition, because the sum of the weight for the inter prediction and the weight for the intra prediction is 1, the image decoding apparatus 200 may obtain only one of the weight for the inter prediction and the weight for the intra prediction.

The weights included in the weight information may be expressed as a matrix. For example, the weight information may be a matrix representing the value of the weight at each coordinate in the current block. The weight information may also be a matrix representing the value of the weight corresponding to the distance between the current picture and the reference picture.

In addition, the weight information may be represented as a function. For example, the weight information may be a function giving the weight according to the distance between the current picture and the reference picture. The weight information may also be a function giving the weight according to the coordinates within the current block.

In addition, the weight information may include information for deriving the weight for the inter prediction and the weight for the intra prediction. The image decoding apparatus 200 may determine the weights based on the weight information. For example, the weight information may include the reference weight. The image decoding apparatus 200 may determine a weight for each pixel based on the reference weight and at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction. In addition, the weight information may be received block by block or slice by slice.

The intra prediction and the inter prediction may each have a plurality of modes. For example, the intra prediction may include a planar mode, a DC mode, and a plurality of directional modes. In addition, when performing the inter prediction, the image decoding apparatus 200 may generate unidirectional motion prediction candidates and bidirectional motion prediction candidates. In addition, when performing the inter prediction, the image decoding apparatus 200 may use a reference block located to the left of or above the current block, or located in a picture reconstructed before the current picture.

In addition, the image decoding apparatus 200 may determine a weight for each of the available modes among the plurality of prediction-direction-related modes included in the intra prediction (e.g., the planar mode, the DC mode, and the plurality of directional (intra_Angular) modes). In addition, the image decoding apparatus 200 may determine a weight for each of the available modes among the plurality of modes associated with the plurality of reference blocks referenced for the current block in the inter prediction. In addition, the image decoding apparatus 200 may determine a weight for each of the available modes among the plurality of modes associated with the unidirectional prediction candidates or the bidirectional prediction candidates included in the inter prediction.

The image decoding apparatus 200 may select the available modes based on the bitstream. For example, the image decoding apparatus 200 may parse information regarding the available modes from the bitstream. Based on the information regarding the available modes, the image decoding apparatus 200 may select the available modes among the plurality of prediction-direction-related modes included in the intra prediction. The image decoding apparatus 200 may determine a weight for each of the available modes.

In addition, the image decoding apparatus 200 may select the available modes among the plurality of modes included in the inter prediction based on the information regarding the available modes. For example, the image decoding apparatus 200 may select the available modes among the plurality of modes corresponding to the plurality of reference blocks for the current block. In addition, the image decoding apparatus 200 may select the available modes among the plurality of modes related to the unidirectional prediction candidates or the bidirectional prediction candidates included in the inter prediction. The image decoding apparatus 200 may determine a weight for each of the available modes.

The intra prediction may have a planar mode, a DC mode, and a plurality of directional modes. The number of directional modes may be 33. The index for the planar mode may be '0', and the index for the DC mode may be '1'. The indices for the directional modes may be '2' to '34'. The image decoding apparatus 200 may receive the information regarding the available modes from the bitstream.

According to one embodiment of the disclosure, the information regarding the available modes may be given as an interval. If the information regarding the available modes is given as 'interval 2', the image decoding apparatus 200 may select the planar mode ('0'), the DC mode ('1'), and modes 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, and 34 as the available modes. If the information regarding the available modes is given as 'interval 4', the image decoding apparatus 200 may select the planar mode, the DC mode, and modes 2, 6, 10, 14, 18, 22, 26, 30, and 34 as the available modes. If the information regarding the available modes is given as 'interval 8', the image decoding apparatus 200 may select the planar mode, the DC mode, and modes 2, 10, 18, 26, and 34 as the available modes.
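The interval-based selection above can be sketched as follows; as in the examples, the planar mode (index 0) and the DC mode (index 1) are always retained, and the directional modes 2 to 34 are subsampled by the signaled interval.

```python
def available_intra_modes(interval):
    """Select available intra modes for a signaled interval.

    Planar (0) and DC (1) are always kept; the 33 directional
    modes (2..34) are taken every `interval` indices, starting at 2.
    """
    return [0, 1] + list(range(2, 35, interval))
```

For example, an interval of 8 yields Planar, DC, 2, 10, 18, 26, and 34, matching the example in the text.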

Further, according to another embodiment of the disclosure, the information regarding the available modes may be given as a number of modes. In that case, the image decoding apparatus 200 may select the available modes in order from the front of the list shown in Table 1. The list in Table 1 may be ordered from the most frequently used mode to the least frequently used mode. Among modes within the same parentheses in Table 1, which share the same rank, the image decoding apparatus 200 may select an arbitrary mode. Alternatively, the image decoding apparatus 200 may receive additional information from the bitstream to select some of the modes having the same rank.

Table 1

(Planar, DC), (10, 26), (34, 2), 18, (6, 14, 22, 30), (4, 8, 12, 16, 20, 24, 28, 32), (3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33)

For example, when the information regarding the available modes is given as '10', the image decoding apparatus 200 may select 10 modes as the available modes. That is, the image decoding apparatus 200 may select Planar, DC, 10, 26, 34, 2, 18, 6, 14, and 22 as the available modes. Here, since the modes (6, 14, 22, 30) share the same rank, the image decoding apparatus 200 may exclude an arbitrary mode such as '30'. Alternatively, the image decoding apparatus 200 may parse additional information from the bitstream and exclude, for example, '14'. In that case, the image decoding apparatus 200 may select Planar, DC, 10, 26, 34, 2, 18, 6, 22, and 30 as the available modes.
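The count-based selection from Table 1 can be sketched as follows. The `excluded` parameter is a stand-in for either the arbitrary choice among same-rank modes or the additional information parsed from the bitstream; it is an illustrative device, not a signaled syntax element.

```python
# Ranked groups from Table 1; modes in one tuple share the same rank.
RANKED_MODES = [("Planar", "DC"), (10, 26), (34, 2), (18,),
                (6, 14, 22, 30), (4, 8, 12, 16, 20, 24, 28, 32),
                (3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, 27, 29, 31, 33)]

def select_modes(count, excluded=()):
    """Take `count` modes from the front of the Table 1 ordering,
    skipping any modes listed in `excluded` (tie-breaking stand-in)."""
    flat = [m for group in RANKED_MODES for m in group if m not in excluded]
    return flat[:count]
```

Excluding '30' or '14' reproduces the two 10-mode selections given in the example above.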

In addition, the image decoding apparatus 200 may determine a weight for each of the selected available modes. The image decoding apparatus 200 may then perform the combined prediction based on the determined weights.

When the image decoding apparatus 200 and the image encoding apparatus 400 restrict the available modes as described above, the amount of information to be transferred in the bitstream decreases, and the transmission efficiency therefore increases. This will be further described with reference to the drawings below.

In addition, the image decoding apparatus 200 may perform the combined prediction according to Equation (3).

Combined prediction value = {(a1 * X1) + (a2 * X2) + ... + (aN * XN)} + {(b1 * Y1) + (b2 * Y2) + ... + (bM * YM)} ... (3)

Here, N is the number of available inter prediction modes, and M is the number of available intra prediction modes. aN represents the weight for the N-th inter prediction mode, and XN represents the prediction value of the N-th inter prediction mode. bM represents the weight for the M-th intra prediction mode, and YM represents the prediction value of the M-th intra prediction mode.
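Equation (3) amounts to a weighted sum over the available modes of both prediction types; a minimal sketch for a single sample:

```python
def combined_prediction(inter_weights, inter_values, intra_weights, intra_values):
    """Combined prediction value per Equation (3): the weighted sum of
    the available inter mode predictions plus the weighted sum of the
    available intra mode predictions."""
    inter_part = sum(a * x for a, x in zip(inter_weights, inter_values))
    intra_part = sum(b * y for b, y in zip(intra_weights, intra_values))
    return inter_part + intra_part
```

In a full decoder this would be evaluated per pixel, with the weights themselves possibly varying by pixel position as described earlier.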

The image decoding apparatus 200 may perform the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value obtained by performing the inter prediction, and the second prediction value obtained by performing the intra prediction. For example, the combined prediction may be performed based on Equations (1) to (3) described above.

Figure 2 shows a block diagram of an image decoding apparatus according to an embodiment of the present disclosure.

The image decoding apparatus 200 according to one embodiment includes a receiving unit 210 and a decoding unit 220. Descriptions of the image decoding apparatus 200 that duplicate the image decoding method described with reference to FIG. 1 are omitted.

The receiving unit 210 parses, from the bitstream, combined prediction information (combine prediction information) indicating whether a combined prediction, which combines the intra prediction and the inter prediction, is performed for the current block.

In addition, the decoding unit 220 determines whether to perform the combined prediction for the current block based on the combined prediction information. When performing the combined prediction, the decoding unit 220 obtains a first prediction value by performing the inter prediction for the current block and obtains a second prediction value by performing the intra prediction for the current block. The decoding unit 220 also determines the weight for the inter prediction and the weight for the intra prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction. The decoding unit 220 then performs the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value.

Figure 3 illustrates a flow diagram of the image encoding method according to an embodiment of the present disclosure.

The image encoding method may be performed by the image encoding apparatus 400. The image encoding apparatus 400 may perform step 310 of obtaining a first prediction value by performing the inter prediction for the current block. In addition, the image encoding apparatus 400 may perform step 320 of obtaining a second prediction value by performing the intra prediction for the current block. In addition, the image encoding apparatus 400 may perform step 330 of determining the weight for the inter prediction and the weight for the intra prediction based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction. Further, the image encoding apparatus 400 may perform step 340 of performing the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value. Further, the image encoding apparatus 400 may perform step 350 of determining combined prediction information (combine prediction information) indicating whether to perform the combined prediction for the current block. In addition, the image encoding apparatus 400 may perform step 360 of transmitting a bitstream including at least one of the combined prediction information and the weight information based on the weights.

Step 330 of determining the weights may include determining the weights based on the sample values of the original pixels in the current block, the first prediction value, and the second prediction value. That is, the image encoding apparatus 400 may compare the first prediction value and the second prediction value with the sample values of the original pixels of the current block. The image encoding apparatus 400 may then increase the weight for whichever of the first and second prediction values is closer to the sample value of the original pixel. The combined prediction may then be performed based on the first and second prediction values.

For example, step 330 of determining the weights may include calculating the weights based on the expected value of the ratio of the sample value of the original pixel to the first prediction value and the expected value of the ratio of the sample value of the original pixel to the second prediction value. This may be expressed as Equation (4).

w1(i, j) = f(E{x(i, j) / x1(i, j)}, E{x(i, j) / x2(i, j)}) ... (4)

Here, w1 is the weight for the intra prediction. i is the x coordinate of the pixel, and j is the y coordinate of the pixel. f() is a predetermined function, and E{} denotes an expected value. x is the sample value of the original pixel. x1 is the second prediction value obtained by performing the intra prediction, and x2 is the first prediction value obtained by performing the inter prediction. The weight w2 for the inter prediction may be given by Equation (5).

w2(i, j) = 1 - w1(i, j) ... (5)

For example, E{x(i, j) / x1(i, j)} denotes the expected value of x(i, j) / x1(i, j). The current block may include a plurality of pixels. The image encoding apparatus 400 may determine the expected value of the ratio of the original sample values of the plurality of pixels to the sample values reconstructed by the intra prediction (that is, the second prediction values). Further, the image encoding apparatus 400 may obtain the expected value of the ratio of the original sample values of the plurality of pixels to the sample values reconstructed by the inter prediction (that is, the first prediction values). Each expected value may be obtained by Equation (6).

E{x(i, j) / x1(i, j)} = (1 / N) * Σ (n = 1 to N) {x_n(i, j) / x1_n(i, j)} ... (6)

Here, x_n(i, j) denotes the sample value of the original pixel at position (i, j) in the n-th block among the last N blocks having the same size as the current block. i denotes the coordinate along the x axis, and j denotes the coordinate along the y axis. x1_n(i, j) denotes the second prediction value of x_n(i, j) reconstructed by the intra prediction. x2_n(i, j) denotes the first prediction value of x_n(i, j) reconstructed by the inter prediction.

In addition, f() in Equation (4) represents a predetermined function. For example, f(X, Y) may be X^2 / (X^2 + Y^2). In that case, Equation (4) may be rewritten as Equation (7).

w1(i, j) = E^2{x(i, j) / x1(i, j)} / (E^2{x(i, j) / x1(i, j)} + E^2{x(i, j) / x2(i, j)}) ... (7)
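Equations (4) to (7) can be sketched as follows. Representing each block as a 2-D list indexed as block[j][i] is an implementation choice, not part of the disclosure.

```python
def intra_weight(original_blocks, intra_blocks, inter_blocks, i, j):
    """w1(i, j) per Equations (4)-(7), with f(X, Y) = X^2 / (X^2 + Y^2).

    Each argument is a list of the last N equally sized blocks:
    the originals, their intra reconstructions (x1), and their
    inter reconstructions (x2). i is the x coordinate, j the y.
    """
    n = len(original_blocks)
    # Equation (6): expected value of the ratio over the last N blocks.
    e1 = sum(o[j][i] / p[j][i] for o, p in zip(original_blocks, intra_blocks)) / n
    e2 = sum(o[j][i] / p[j][i] for o, p in zip(original_blocks, inter_blocks)) / n
    # Equation (7): w1 = E1^2 / (E1^2 + E2^2); w2 = 1 - w1 per Equation (5).
    return e1 ** 2 / (e1 ** 2 + e2 ** 2)
```

When the intra reconstruction tracks the original more closely than the inter reconstruction, w1 exceeds 0.5, which matches the intent of step 330: the prediction closer to the original receives the larger weight.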

The image encoding apparatus 400 may perform (340) the combined prediction. The image encoding apparatus 400 may also compare the combined prediction result with the inter prediction result and the intra prediction result, and determine (350) whether the image decoding apparatus 200 is to perform the combined prediction. For example, the image encoding apparatus 400 may determine (350) whether to perform the combined prediction based on bit efficiency. The image encoding apparatus 400 may compare the number of bits used when performing the combined prediction with the number of bits used when performing the intra prediction or the inter prediction. The image encoding apparatus 400 may determine (350) the combined prediction information based on the comparison result. For example, if fewer bits are used for the same image when the combined prediction is used, the image encoding apparatus 400 may set the combined prediction information to '1'. The image decoding apparatus 200 may then perform the combined prediction based on the combined prediction information.

Further, the image encoding apparatus 400 may compute the difference between the reconstructed image and the original image when performing the combined prediction, and the difference between the reconstructed image and the original image when performing the intra prediction or the inter prediction, and may compare the two. The image encoding apparatus 400 may determine (350) the combined prediction information based on the comparison result. The image decoding apparatus 200 may receive the combined prediction information. The image decoding apparatus 200 may then perform one of the combined prediction, the inter prediction, and the intra prediction for the current block based on the combined prediction information.

The image encoding apparatus 400 may determine the combined prediction information indicating whether to perform the combined prediction. The combined prediction information may be a flag. For example, if the combined prediction information is '1', it may indicate that the combined prediction is performed. If the combined prediction information is '0', it may indicate that the combined prediction is not performed.
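The bit-efficiency criterion described above can be sketched as follows; comparing against the better of intra-only and inter-only coding is an illustrative reading of the text, not a mandated rule.

```python
def combined_prediction_flag(bits_combined, bits_intra, bits_inter):
    """Set the combined prediction flag to '1' when combined prediction
    uses fewer bits than the best of intra-only and inter-only coding."""
    return '1' if bits_combined < min(bits_intra, bits_inter) else '0'
```

A distortion-based criterion, as described in the following paragraph, could be substituted for or combined with the bit counts.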

The image encoding apparatus 400 may obtain the weight information using the weight for the inter prediction or the weight for the intra prediction. The weight for the inter prediction may be determined from the weight for the intra prediction according to Equation (5). The weight information may include a weight for every pixel in the current block. Alternatively, the weight information may divide the current block into a plurality of areas and include a weight for each area. In addition, the weight information may include different weights for each block included in a single image. In addition, the weight information may include different weights for each slice included in an image. In addition, the weight information may include different weights for each image.

In addition, the image encoding apparatus 400 may obtain the weight information using the weight for the inter prediction and the weight for the intra prediction. For example, the image encoding apparatus 400 may determine the reference weight based on the determined weight for the inter prediction or the determined weight for the intra prediction. The image encoding apparatus 400 may then include the reference weight in the bitstream as the weight information and transmit it to the image decoding apparatus 200. The image decoding apparatus 200 may determine a weight for each pixel based on the reference weight and at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction.

For example, the image encoding apparatus 400 may determine the reference weight for the inter prediction to be 0.4. The image encoding apparatus 400 may transmit the reference weight to the image decoding apparatus 200 as the weight information. The image decoding apparatus 200 may then set the reference weight for the inter prediction to 0.4 and the reference weight for the intra prediction to 0.6 based on the received reference weight. In addition, the image decoding apparatus 200 may increase the weight for the inter prediction by 0.1 and decrease the weight for the intra prediction by 0.1 each time the distance between the reference picture and the current picture becomes closer to the reference distance by a predetermined amount.
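The decoder-side derivation in this example can be sketched as follows; the unit distance over which each 0.1 adjustment applies is an assumption, since the text only says "a predetermined amount".

```python
def derived_weights(ref_inter_weight, distance, reference_distance,
                    unit_distance=1, step=0.1):
    """Decoder-side sketch: the intra reference weight is 1 minus the
    signalled inter reference weight; both weights are then shifted
    by `step` per `unit_distance` the reference picture is closer
    than the reference distance."""
    steps_closer = (reference_distance - distance) / unit_distance
    inter_w = ref_inter_weight + step * steps_closer
    return inter_w, 1.0 - inter_w
```

With a signalled reference weight of 0.4 and a reference picture one unit closer than the reference distance, this yields 0.5 for the inter prediction and 0.5 for the intra prediction.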

Since the manner in which the image decoding apparatus 200 determines the weights based on the size of the current block and the characteristics of the inter prediction and the intra prediction has already been described above, a detailed description thereof is omitted.

The image encoding apparatus 400 may transmit (360) a bitstream including at least one of the combined prediction information and the weight information. The image encoding apparatus 400 may transmit (360) a bitstream including only the combined prediction information. For example, the image encoding apparatus 400 may transmit a bitstream including only the combined prediction information to the image decoding apparatus 200. The image decoding apparatus 200 may decide to perform the combined prediction by parsing the combined prediction information. When the weight information is not received, the image decoding apparatus 200 may set a predetermined basic weight and determine the weight of each pixel based on at least one of the distance between the reference picture and the current picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction. Since the manner of setting the weights has been described above, a detailed description thereof is omitted.

In addition, the image encoding apparatus 400 may transmit (360) a bitstream including only the weight information. When the image decoding apparatus 200 receives the weight information, it may decide to perform the combined prediction. It may also obtain the weights for the inter prediction and the intra prediction for the current block based on the weight information.

The image encoding apparatus 400 may entropy-code the combined prediction information and the weight information at a lower priority than the results of the intra prediction and the inter prediction. For example, the image encoding apparatus 400 may entropy-code the information related to the combined prediction at a lower priority than the information related to the inter prediction and the intra prediction. In that case, the image encoding apparatus 400 may include the information related to the combined prediction in the bitstream using more bits than the information related to the inter prediction and the intra prediction. By entropy-coding the information related to the combined prediction at a lower priority, priority can be given to the inter prediction and the intra prediction. In addition, when the information related to the combined prediction is entropy-coded at a lower priority, the image encoding apparatus 400 and the image decoding apparatus 200 according to this disclosure may maintain compatibility with conventional codecs.

A program for implementing the image decoding method described with reference to FIG. 1 and the image encoding method described with reference to FIG. 3 may be recorded on a computer-readable recording medium.

Figure 4 illustrates a block diagram of a video encoder according to an embodiment of the present disclosure.

The image encoding apparatus 400 according to one embodiment includes an encoding unit 410 and a transmission unit 420. Descriptions of the image encoding apparatus 400 that duplicate the image encoding method described with reference to FIG. 3 are omitted.

The encoding unit 410 may obtain a first prediction value by performing the inter prediction for the current block. In addition, the encoding unit 410 may obtain a second prediction value by performing the intra prediction for the current block. In addition, the encoding unit 410 may determine the weight for the inter prediction and the weight for the intra prediction based on at least one of the distance between the current picture and the reference picture, the size of the current block, and the characteristics of the inter prediction and the intra prediction. The encoding unit 410 may perform the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value. In addition, the encoding unit 410 may determine combined prediction information (combine prediction information) indicating whether to perform the combined prediction for the current block.

In addition, the transmission unit 420 may transmit a bitstream including at least one of the combined prediction information and the weight information based on the weights.

Figure 5 shows a flow chart of the image encoding method according to an embodiment of the present disclosure.

Figure 5 is a further embodiment of the image encoding method described with reference to FIG. 3. The image encoding apparatus 400 may determine (510) whether to perform an intra mode. The image encoding apparatus 400 may determine (510) whether to perform the intra mode in order to perform the intra mode and an inter mode in sequence. Accordingly, if the intra mode and the inter mode are handled in parallel, the determination (510) of whether to perform the intra mode may be omitted.

When the image encoding apparatus 400 decides to perform the intra mode, it may determine (520) whether to perform the combined prediction. When performing the combined prediction, the image encoding apparatus 400 may perform (530) the combined intra-inter prediction. Performing (530) the combined intra-inter prediction may include steps 310 to 350 of FIG. 3. When not performing the combined prediction, the image encoding apparatus 400 may perform (540) the intra prediction. In addition, the image encoding apparatus 400 may obtain at least one of the combined prediction information and the weight information by performing the combined intra-inter prediction or the intra prediction.

The image encoding apparatus 400 may perform entropy coding (535) on the result of the combined intra-inter prediction or the intra prediction. Since the image encoding apparatus 400 has decided to perform the intra mode, the image encoding apparatus 400 may perform (535) entropy coding for an intra block. For example, the image encoding apparatus 400 may perform entropy coding on at least one of the combined prediction information and the weight information. Performing (535) the entropy coding may be included in step 360 of FIG. 3. The entropy coding for intra blocks and the entropy coding for inter blocks may differ from each other in the quantization level of the transform coefficients or the post-processing filter.

When the image encoding apparatus 400 decides not to perform the intra mode, the image encoding apparatus 400 may determine (525) whether to perform the combined prediction. When performing the combined prediction, the image encoding apparatus 400 may perform (550) the combined intra-inter prediction. Performing (550) the combined intra-inter prediction may include steps 310 to 350 of FIG. 3. Performing (530) the combined intra-inter prediction and performing (550) the combined intra-inter prediction may use different entropy coding. For example, performing (530) the combined intra-inter prediction is followed by the entropy coding performed (535) for intra blocks, whereas performing (550) the combined intra-inter prediction is followed by the entropy coding performed (555) for inter blocks.

When combined prediction is not performed, the image encoding apparatus 400 may perform inter prediction (560). In addition, the image encoding apparatus 400 may obtain at least one of combined prediction information and weight information by performing the combined intra-inter prediction or the inter prediction.

The image encoding apparatus 400 may perform entropy coding (555) on the result of the combined intra-inter prediction or the inter prediction. Since the image encoding apparatus 400 has determined to perform the inter mode, it may perform entropy coding for an inter block (555). For example, the image encoding apparatus 400 may perform entropy coding on at least one of the combined prediction information and the weight information. Performing the entropy coding (555) may be included in step 360 of FIG. 3. The entropy coding for an intra block and the entropy coding for an inter block may differ from each other in the quantization level of the transform coefficients or in the post-processing filter. Further, in the entropy coding described above, information related to combined prediction may have a lower priority than information related to non-combined prediction.

In addition, the image encoding apparatus 400 may select the most efficient coding method from among performing the combined intra-inter prediction (530, 550), performing the intra prediction (540), and performing the inter prediction (560). The image encoding apparatus 400 may also select the most efficient coding method in consideration of the entropy coding for the intra block (535) and the entropy coding for the inter block (555). The selected result may be included in the bitstream and transmitted to the image decoding apparatus 200.
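The selection of the most efficient coding method can be sketched as a simple cost comparison. This is a hypothetical illustration only: the disclosure does not specify the cost function, so a common rate-distortion cost (distortion plus lambda times bits) is assumed here, and all candidate names and numbers are illustrative.

```python
# Hypothetical sketch: pick the most efficient coding method by comparing
# a rate-distortion (RD) cost, cost = distortion + lam * bits.
# The RD-cost form and all values below are assumptions for illustration.

def select_coding_method(candidates, lam=1.0):
    """Return the name of the candidate with the lowest RD cost.

    candidates: list of (name, distortion, bits) tuples, one per method,
    each measured after its own entropy coding.
    """
    best_name, best_cost = None, float("inf")
    for name, distortion, bits in candidates:
        cost = distortion + lam * bits
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name

# Illustrative comparison of the three paths of FIG. 5.
choice = select_coding_method([
    ("combined", 100.0, 40),   # combined intra-inter prediction (530/550)
    ("intra",    120.0, 35),   # intra prediction (540)
    ("inter",     90.0, 60),   # inter prediction (560)
])
```

With these illustrative numbers, the combined path has the lowest total cost and would be signaled in the bitstream.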

As described with reference to FIG. 1, the current block may be any one of a coding unit, a prediction unit, and a transformation unit. Further, the current block may include a prediction unit used in inter prediction and a prediction unit used in intra prediction. The image encoding apparatus 400 may perform coding for various prediction units and may select the most efficient prediction unit among the many prediction units. The image encoding apparatus 400 may determine the prediction unit used in inter prediction independently of the prediction unit used in intra prediction. For example, the image encoding apparatus 400 may divide a coding unit into the prediction units used in inter prediction based on which division is most efficient for the inter prediction. In determining the prediction unit used in inter prediction, the image encoding apparatus 400 may not take into account the prediction unit used in intra prediction.

The flow diagram of FIG. 5 does not limit the present disclosure. For example, in FIG. 5, performing the combined intra-inter prediction appears twice (530, 550). To avoid such duplication, the image encoding apparatus 400 may omit performing the combined intra-inter prediction (550) and use the result of performing the combined intra-inter prediction (530) instead. In addition, since intra prediction or inter prediction is included in performing the combined intra-inter prediction (530, 550), the image encoding apparatus 400 may obtain the result of performing the intra prediction (540) or the inter prediction (560) based on the result of performing the combined intra-inter prediction (530, 550). Furthermore, when parallel processing is possible, at least one of determining whether to perform the intra mode (510) and determining whether to perform combined prediction (520, 525) may be omitted. For example, the image encoding apparatus 400 may perform the combined intra-inter prediction, the intra prediction, and the inter prediction at the same time.

Figure 6 illustrates a flow diagram of an image decoding method according to an embodiment of the present disclosure.

The image decoding apparatus 200 may parse an intra/inter type from a received bitstream (610). In addition, the image decoding apparatus 200 may determine whether the mode is the intra mode based on the intra/inter type (620). When the intra mode is determined, the image decoding apparatus 200 may parse combined prediction information (630). Parsing the combined prediction information (630) may correspond to step 110 of FIG. 1. In addition, the image decoding apparatus 200 may determine whether to perform combined prediction based on the combined prediction information (635). Determining whether to perform combined prediction (635) may correspond to step 120 of FIG. 1.

In addition, when combined prediction is performed, the image decoding apparatus 200 may perform a plurality of inter predictions. For example, it may perform a first inter prediction (640) to an L-th inter prediction. Since the plurality of inter predictions has been described with reference to FIG. 1, a detailed description thereof is omitted. In addition, the image decoding apparatus 200 may perform a plurality of intra predictions. For example, it may perform a first luma intra prediction (642) to an N-th luma intra prediction. It may also perform a first chroma intra prediction (644) to a K-th chroma intra prediction. Performing the inter predictions and the intra predictions may correspond to step 130 of FIG. 1. Since the plurality of intra predictions has also been described with reference to FIG. 1, a detailed description thereof is omitted.

In addition, when combined prediction is performed, the image decoding apparatus 200 may perform the combined prediction for the luma (luminance) channel and may perform only one of inter prediction and intra prediction for the chroma (chrominance) channels. That is, the image decoding apparatus 200 may perform the combined prediction only for the luma channel while performing conventional inter prediction or intra prediction for the chroma channels. For example, the image decoding apparatus 200 may determine whether to perform the combined prediction for the chroma channels based on chroma combined prediction information parsed from the bitstream, and may skip the combined prediction for the chroma channels based on that information.

When the combined prediction is performed only for the luma channel, the image decoding apparatus 200 receives a weight only for the luma channel from the image encoding apparatus 400, so the transmission efficiency of the bitstream can be increased.

When performing the combined prediction, the image decoding apparatus 200 may set different weights for the luma channel and the chroma channels. In this case, the image decoding apparatus 200 can provide a reconstructed image of higher quality with respect to the luma channel and the chroma channels. In contrast, the image decoding apparatus 200 may receive one set of weights from the image encoding apparatus 400 and use it as the weight for both the luma channel and the chroma channels. The image decoding apparatus 200 may receive weight information, and the weight information may include a flag indicating whether to apply different weights to the luma channel and the chroma channels.
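The per-channel weight handling described above can be sketched as follows. The flag and field names are assumptions for illustration; the disclosure only states that the weight information may carry a flag indicating whether separate luma and chroma weights apply.

```python
# Minimal sketch, assuming hypothetical field names: when the flag is set,
# separate weights are used per channel; otherwise the one received weight
# is shared by the luma and chroma channels.

def channel_weights(weight_info):
    if weight_info.get("separate_chroma_weight"):
        return weight_info["luma_w"], weight_info["chroma_w"]
    w = weight_info["luma_w"]
    return w, w  # one weight set reused for luma and chroma

luma_w, chroma_w = channel_weights(
    {"separate_chroma_weight": True, "luma_w": 0.75, "chroma_w": 0.5})
```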

In addition, the image decoding apparatus 200 may perform the combined prediction based on the plurality of intra predictions and inter predictions (646). Performing the combined prediction (646) may include steps 140 and 150 of FIG. 1.

Further, when combined prediction is not performed, the image decoding apparatus 200 may perform intra prediction. For example, the image decoding apparatus 200 may perform a first luma intra prediction (650) and a first chroma intra prediction (655).

When the inter mode is determined, the image decoding apparatus 200 may parse combined prediction information (660). Parsing the combined prediction information (660) may correspond to step 110 of FIG. 1. In addition, the image decoding apparatus 200 may determine whether to perform combined prediction based on the combined prediction information (665). Determining whether to perform combined prediction (665) may correspond to step 120 of FIG. 1.

In addition, when combined prediction is performed, the image decoding apparatus 200 may perform a plurality of intra predictions. For example, it may perform a first luma intra prediction (670) to an N-th luma intra prediction. It may also perform a first chroma intra prediction (672) to a K-th chroma intra prediction. Since the plurality of intra predictions has been described with reference to FIG. 1, a detailed description thereof is omitted. The image decoding apparatus 200 may also perform a plurality of inter predictions. For example, it may perform a first inter prediction (674) to an L-th inter prediction. Since the plurality of inter predictions has been described with reference to FIG. 1, a detailed description thereof is omitted. Performing the inter predictions and the intra predictions may correspond to step 130 of FIG. 1.

In addition, the image decoding apparatus 200 may perform the combined prediction based on the plurality of intra predictions and inter predictions (676). Performing the combined prediction (676) may include steps 140 and 150 of FIG. 1.

Further, when combined prediction is not performed, the image decoding apparatus 200 may perform inter prediction. For example, the image decoding apparatus 200 may perform a first inter prediction (680) and a first chroma intra prediction (655).

The current block may include a prediction unit used in inter prediction and a prediction unit used in intra prediction. The image decoding apparatus 200 may determine a prediction unit based on the bitstream. The image decoding apparatus 200 may determine the prediction unit used in inter prediction independently of the prediction unit used in intra prediction. For example, the image decoding apparatus 200 may separately parse, from the bitstream, information on the prediction unit used in inter prediction and information on the prediction unit used in intra prediction. In determining the prediction unit used in inter prediction, the image decoding apparatus 200 may not take into account the prediction unit used in intra prediction.

Further, when the combined prediction is performed, both inter prediction and intra prediction may be performed. The image decoding apparatus 200 may use the prediction unit used in inter prediction when performing the inter prediction, and the prediction unit used in intra prediction when performing the intra prediction. In addition, the image decoding apparatus 200 may use a separate prediction unit for the combined prediction.

Figure 7 is a flowchart showing the combined prediction in accordance with an embodiment of the present disclosure.

As described above, the combined prediction value can be calculated as shown in relation (1). For example, the image decoding apparatus 200 may perform intra prediction (710) based on information (m1) on the intra prediction. The information (m1) on the intra prediction may be information about a prediction direction. The image decoding apparatus 200 may obtain a second prediction value (x1) by performing the intra prediction (710).

Further, the image decoding apparatus 200 may perform inter prediction (720) based on information (m2) on the inter prediction. The information on the inter prediction may be a motion vector. The image decoding apparatus 200 may obtain a first prediction value (x2) by performing the inter prediction (720). In addition, the image decoding apparatus 200 may compute a weighted sum (730) based on the weight (w1) for the intra prediction, the weight (w2) for the inter prediction, the first prediction value, and the second prediction value. The result of the weighted sum (740) may be identical to relation (1).

The weight may be determined according to at least one of the temporal distance between the current picture and the reference picture including the reference block, and the position of a pixel contained in the current block. Accordingly, relation (1), described in more detail, can be written as expression (8).

x(i, j) = w(i, j) * a(t) * x1(i, j) + (1 - w(i, j)) * b(t) * x2(i, j) ... (8)

Here, i is the X coordinate of a pixel and j is the Y coordinate of the pixel. x is the combined prediction value. x1 is the second prediction value obtained by performing the intra prediction, and x2 is the first prediction value obtained by performing the inter prediction. w is the weight for the intra prediction value; since w is a function that takes i and j as variables, its value may vary depending on the pixel position. In addition, a and b are functions of the temporal distance t between the current picture and the reference picture. a is a weight related to the intra prediction. a(t) may be a polynomial; for example, a(t) may be given by a quadratic function such as c*t^2 + d*t + e. As described above, as the distance between the reference picture and the current picture grows (that is, as t becomes larger), the weight of the intra prediction may increase. That is, a(t) may be a monotonically increasing function. In addition, a(t) may have a value from 0 to 1, and b(t) may be 1 - a(t).
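The per-pixel weighted sum of expression (8) can be written directly in code. This is a minimal sketch: the choice of b(t) = 1 - a(t) follows the simple case stated above, and the sample values are illustrative.

```python
def combined_prediction(x1, x2, w, a_t):
    """Per-pixel combined prediction following expression (8):
    x(i,j) = w(i,j)*a(t)*x1(i,j) + (1 - w(i,j))*b(t)*x2(i,j),
    with b(t) = 1 - a(t) as in the simple case described in the text.

    x1: 2-D intra prediction values, x2: 2-D inter prediction values,
    w: 2-D per-pixel weight for the intra prediction, a_t: temporal weight.
    """
    b_t = 1.0 - a_t
    rows, cols = len(x1), len(x1[0])
    return [[w[i][j] * a_t * x1[i][j] + (1 - w[i][j]) * b_t * x2[i][j]
             for j in range(cols)] for i in range(rows)]

# Illustrative 1x1 block: equal weights blend intra 100 and inter 50.
x = combined_prediction([[100]], [[50]], [[0.5]], 0.5)
```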

When there are N reference pictures, the image encoding apparatus 400 or the image decoding apparatus 200 may calculate ak(tk) (where k = 1, 2, ..., N). The image decoding apparatus 200 or the image encoding apparatus 400 may then calculate a(t) of expression (8) using the ak(tk). a(t) may be given by expression (9).

a(t) = {a1(t1) + a2(t2) + ... + aN(tN)} / {a1(t1) + a2(t2) + ... + aN(tN) + 1} ... (9)

In addition, b(t) of expression (8) may be given by expression (10).

b(t) = 1 / {a1(t1) + a2(t2) + ... + aN(tN) + 1} ... (10)
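Expressions (9) and (10) share the same denominator, so a(t) + b(t) = 1 by construction. A short sketch of the computation, with illustrative ak(tk) values:

```python
def combined_weights(ak_values):
    """a(t) and b(t) per expressions (9) and (10), given the weights
    ak(tk) computed for each of the N reference pictures."""
    s = sum(ak_values)          # a1(t1) + a2(t2) + ... + aN(tN)
    a_t = s / (s + 1.0)         # expression (9)
    b_t = 1.0 / (s + 1.0)       # expression (10)
    return a_t, b_t

# Two reference pictures with illustrative ak(tk) = 0.5 each.
a_t, b_t = combined_weights([0.5, 0.5])
```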

Figure 8 illustrates a method of encoding available prediction modes in accordance with an embodiment of the present disclosure.

The image encoding apparatus 400 may use variable-length coding and fixed-length coding to reduce the amount of data transmitted to the image decoding apparatus 200. In addition, the image encoding apparatus 400 may use the statistical properties of the image and of the intra prediction modes to represent the up to 35 intra prediction modes with fewer bits. Typically, when a natural image is divided into blocks of a certain size, a block and its neighboring blocks have similar visual characteristics. Therefore, there is a high probability that the intra prediction mode of the current block is the same as, or similar to, that of a neighboring block. In consideration of these characteristics, the image encoding apparatus 400 may encode the mode of the current block based on the intra prediction modes of the left block and the top block relative to the current block.

Referring to FIG. 8, the image encoding apparatus 400 may perform variable-length coding. For example, in accordance with the MPM (Most Probable Mode) scheme, the image encoding apparatus 400 may assign the intra prediction mode of the left block to A0, assign the intra prediction mode of the upper block to A1, and assign one of the planar mode, the DC mode, and the vertical mode to A2. By the variable-length coding, A0 to A2 may have different numbers of bits. The image encoding apparatus 400 may encode the A0 mode as '10', the A1 mode as '110', and the A2 mode as '111'.

As described above, the intra prediction mode of the current block is likely to be similar to the prediction modes of the neighboring blocks. In other words, when performing the intra prediction of the current block, there is a high probability of arriving at the same prediction mode as that of an adjacent block. Thus, the image encoding apparatus 400 may allocate a small number of bits to these prediction modes to increase the transmission efficiency of the bitstream.

Further, the image encoding apparatus 400 may perform fixed-length coding on the other modes, which are not assigned to the variable-length coding. For example, the image encoding apparatus 400 may encode the B0 mode as '00 ... 00'. Similarly, it may encode the BN-1 mode as '01 ... 11'. By checking the first bit of the encoded bit string, the image decoding apparatus 200 can distinguish whether variable-length coding or fixed-length coding was performed. For example, the first bit of the A0 to A2 modes is '1', while the first bit of the B0 to BN-1 modes is '0'. When the first bit is '1', the image decoding apparatus 200 can recognize that variable-length coding was used.
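The scheme above ('10' / '110' / '111' for A0 to A2, and a '0'-prefixed fixed-length code for the B modes) can be sketched as follows. The specific A0/A1/A2 mode numbers and the 5-bit fixed-length width are illustrative assumptions for the 35-mode case.

```python
# Sketch of the variable/fixed-length intra-mode coding described above.
# A0 (left block's mode) -> '10', A1 (top block's mode) -> '110',
# A2 -> '111'; every other mode gets a leading '0' plus a fixed-length index,
# so the decoder can tell the two codings apart by the first bit.

def encode_intra_mode(mode, a0, a1, a2, remaining_modes):
    if mode == a0:
        return "10"
    if mode == a1:
        return "110"
    if mode == a2:
        return "111"
    idx = remaining_modes.index(mode)
    width = max(1, (len(remaining_modes) - 1).bit_length())
    return "0" + format(idx, "0{}b".format(width))

# Illustrative setup: A0 = mode 10 (left), A1 = mode 26 (top), A2 = planar (0);
# the remaining 32 of the 35 modes each get 1 + 5 bits.
REMAINING = [m for m in range(35) if m not in (10, 26, 0)]
code = encode_intra_mode(34, 10, 26, 0, REMAINING)
```

With this setup, mode 34 is the last B mode and is encoded as '011111', matching the '01 ... 11' pattern for BN-1 in the text.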

As described above, the image decoding apparatus 200 may receive information regarding the usable modes from the bitstream. For example, when the information regarding the usable modes is given as 'range 4', the image decoding apparatus 200 may select the planar mode, the DC mode, and modes 2, 6, 10, 14, 18, 22, 26, 30, and 34 as usable modes. The planar mode, the DC mode, and mode 2 may be encoded using variable-length coding, and the other modes 6, 10, 14, 18, 22, 26, 30, and 34 (8 modes) may be encoded using fixed-length coding. In this case, the image encoding apparatus 400 may perform the fixed-length coding with only three bits for the 8 modes. When the usable modes are not selected, fixed-length coding must be performed for 32 modes, so 5 bits are required. Therefore, coding can be performed using fewer bits by selecting the usable modes. In addition, since the image encoding apparatus 400 transmits fewer bits to the image decoding apparatus 200, the efficiency is improved.
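The bit counts above follow from the number of modes left for fixed-length coding: the width is the ceiling of log2 of the mode count.

```python
import math

# Arithmetic behind the paragraph above: 8 fixed-length modes need 3 bits,
# while the unrestricted case leaves 32 modes needing 5 bits each.

def fixed_length_bits(num_modes):
    """Bits needed to give each of num_modes a distinct fixed-length code."""
    return math.ceil(math.log2(num_modes))
```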

Figure 9 illustrates a method of lowering the accuracy of a motion vector in accordance with an embodiment of the present disclosure.

As the size of an image increases, there may be a need to lower the accuracy of the motion vector in order to increase the data processing speed. There may also be a need to lower the accuracy of the motion vector in order to reduce the size of the data that the image encoding apparatus 400 transmits to the image decoding apparatus 200.

Referring to (a) of FIG. 9, an image may be up-scaled. The black points are the original pixels, and the white points are pixels interpolated by the up-scaling. The interpolated pixels are referred to as sub-pixels. The image decoding apparatus 200 may parse information about the accuracy of a motion vector from the bitstream. Based on the information about the accuracy of the motion vector, the image decoding apparatus 200 may set the accuracy of the motion vector of the inter prediction for the current block to one of half-pel, integer-pel, and 2-pel.

Here, pel means pixel and is the movement unit of a motion vector. For example, a black point in (a) of FIG. 9 refers to an original pixel. Movement between original pixels, since it proceeds by one pixel at a time, can be expressed as integer-pel (1-pel).

The image decoding apparatus 200 according to an embodiment of the present disclosure may set the accuracy of the motion vector to 2-pel based on the information about the accuracy of the motion vector. In this case, the motion vector can be set to a pixel at a 2-pel distance from the starting pixel 910 of the search. The image decoding apparatus 200 may set the motion vector to point from the starting pixel 910 to any one of pixel 912, pixel 914, and pixel 916. Further, the image decoding apparatus 200 does not detect a motion vector more accurate than 2-pel. In (a) of FIG. 9, the image decoding apparatus 200 may select the motion vector 920 directed from pixel 910 to pixel 916, based on the information parsed from the bitstream.

The setting of a motion vector according to another embodiment of the present disclosure is shown in (b) of FIG. 9. The image decoding apparatus 200 may set the accuracy of the motion vector to integer-pel based on the information about the accuracy of the motion vector. In this case as well, to select a motion vector as in (a) of FIG. 9, the image decoding apparatus 200 may further perform a search as described below. The motion vector can be set to a pixel at an integer-pel interval from the starting pixel 930 of the search. The image decoding apparatus 200 may set the motion vector to point from the starting pixel 930 to one of pixel 931, pixel 932, pixel 933, pixel 934, pixel 935, pixel 936, pixel 937, and pixel 938. In addition, according to the search method, the image decoding apparatus 200 may set the motion vector to one of pixel 931, pixel 933, pixel 935, and pixel 937 from the starting pixel 930. The image decoding apparatus 200 may select the motion vector 940 based on the bitstream, for example the motion vector 940 directed from pixel 930 to pixel 933.

The setting of a motion vector according to another embodiment of the present disclosure is shown in (c) of FIG. 9. The image decoding apparatus 200 may set the accuracy of the motion vector to half-pel based on the information about the accuracy of the motion vector. In this case as well, to select a motion vector as in (b) of FIG. 9, the image decoding apparatus 200 may further perform a search as described below. The motion vector can be set to a pixel at a half-pel interval from the starting pixel 950 of the search. The image decoding apparatus 200 may select the motion vector 960 based on the bitstream, for example the motion vector 960 directed from pixel 950 to pixel 951.

The setting of a motion vector according to another embodiment of the present disclosure is shown in (d) of FIG. 9. The image decoding apparatus 200 may set the accuracy of the motion vector to quarter-pel based on the information about the accuracy of the motion vector. In this case as well, to select a motion vector as in (c) of FIG. 9, the image decoding apparatus 200 may further perform a search as described below. The motion vector can be set to a pixel at a quarter-pel interval from the starting pixel 970 of the search. The image decoding apparatus 200 may select the motion vector 980 based on the bitstream, for example the motion vector 980 directed from pixel 970 to pixel 971.
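The four accuracies above can be sketched as rounding a motion vector to the nearest allowed step. This is a minimal illustration; the internal quarter-pel storage unit is a common codec convention assumed here, not something the disclosure states.

```python
# Minimal sketch, assuming motion vectors are stored in quarter-pel units
# (an assumed convention). Lowering the accuracy rounds each component to
# the nearest multiple of the chosen step:
# quarter-pel = 1 unit, half-pel = 2, integer-pel = 4, 2-pel = 8.

STEP = {"quarter": 1, "half": 2, "integer": 4, "two": 8}

def round_mv(mv, accuracy):
    """Round an (x, y) motion vector to the given accuracy."""
    step = STEP[accuracy]
    return tuple(step * round(c / step) for c in mv)

# (9, -3) in quarter-pel units, rounded to integer-pel accuracy.
mv = round_mv((9, -3), "integer")
```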

Figure 10 illustrates the concept of coding units according to an embodiment of the present disclosure.

In the examples of coding units, the size of a coding unit is expressed as width x height, and the coding units may include coding units of sizes 64x64, 32x32, 16x16, and 8x8. A coding unit of size 64x64 can be divided into partitions of sizes 64x64, 64x32, 32x64, and 32x32; a coding unit of size 32x32 into partitions of sizes 32x32, 32x16, 16x32, and 16x16; a coding unit of size 16x16 into partitions of sizes 16x16, 16x8, 8x16, and 8x8; and a coding unit of size 8x8 into partitions of sizes 8x8, 8x4, 4x8, and 4x4. In addition, although not shown in FIG. 10, a coding unit may have a size greater than 64x64; for example, a coding unit may have a size such as 128x128 or 256x256. If the coding unit becomes larger than 64x64, the size of the partitions also increases at the same rate.
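The partition lists above follow a single pattern: a 2Nx2N coding unit yields 2Nx2N, 2NxN, Nx2N, and NxN partitions. A short sketch:

```python
# Generate the partition sizes of a square coding unit, following the
# 2Nx2N -> {2Nx2N, 2NxN, Nx2N, NxN} pattern described in the text.

def partitions(size):
    half = size // 2
    return [(size, size), (size, half), (half, size), (half, half)]

# E.g. a 64x64 coding unit yields 64x64, 64x32, 32x64 and 32x32 partitions.
p64 = partitions(64)
```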

For the video data 1010, the resolution is 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is set to 2. For the video data 1020, the resolution is 1920x1080, the maximum size of the coding unit is 64, and the maximum depth is set to 3. For the video data 1030, the resolution is 352x288, the maximum size of the coding unit is 16, and the maximum depth is set to 1. The maximum depth shown in FIG. 10 indicates the total number of divisions from the maximum coding unit to the minimum coding unit.

When the resolution is high or the amount of data is large, it is preferable that the maximum size of the coding unit be relatively large, not only to improve the coding efficiency but also to accurately reflect the image characteristics. Therefore, compared with the video data 1030, the maximum coding size of the higher-resolution video data 1010 and 1020 may be selected as 64, and may also be selected larger than this.

Since the maximum depth of the video data 1010 is 2, the coding units 1015 of the video data 1010 may include a maximum coding unit with a long-axis size of 64 and, since the depth deepens by two layers through two divisions, coding units with long-axis sizes of 32 and 16. On the other hand, since the maximum depth of the video data 1030 is 1, the coding units 1035 of the video data 1030 may include a maximum coding unit with a long-axis size of 16 and, since the depth deepens by one layer through one division, coding units with a long-axis size of 8.

Since the maximum depth of the video data 1020 is 3, the coding units 1025 of the video data 1020 may include a maximum coding unit with a long-axis size of 64 and, since the depth deepens by three layers through three divisions, coding units with long-axis sizes of 32, 16, and 8. As the depth deepens, the expressive power for details can be improved.
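The long-axis sizes listed for each video data follow from halving the maximum size once per depth level, with the maximum depth counting the total number of divisions:

```python
# Long-axis coding unit sizes per depth, as in FIG. 10: each division
# halves the size, and the maximum depth is the total number of divisions
# from the maximum coding unit down to the minimum coding unit.

def long_axis_sizes(max_size, max_depth):
    return [max_size >> d for d in range(max_depth + 1)]

# Video data 1020: maximum size 64, maximum depth 3 -> 64, 32, 16, 8.
sizes_1020 = long_axis_sizes(64, 3)
```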

Figure 11 shows a block diagram of an image encoder 1100 based on coding units, according to an embodiment of the present disclosure.

The image encoder 1100 according to an embodiment performs the operations undergone to encode image data in the encoding unit 410 of the image encoding apparatus 400 of FIG. 4. That is, the intra prediction unit 1110 performs intra prediction on coding units of the intra mode in the current frame 1105, and the motion estimation unit 1120 and the motion compensation unit 1125 perform inter estimation and motion compensation on coding units of the inter mode using the current frame 1105 and the reference frame 1195. In addition, the combined prediction unit 1130 may perform the combined prediction based on the results of performing the intra prediction and the inter prediction. Since performing the combined prediction by the combined prediction unit 1130 has been described in more detail with reference to FIGS. 3 to 5, a detailed description thereof is omitted here.

Data output from the intra prediction unit 1110, the motion estimation unit 1120, and the motion compensation unit 1125 may be combined through the combined prediction unit 1130. In addition, data output from the intra prediction unit 1110, the motion estimation unit 1120, the motion compensation unit 1125, and the combined prediction unit 1130 is output as quantized transform coefficients through the transformation unit 1130 and the quantization unit 1140. The quantized transform coefficients are restored to data of the spatial domain through the inverse quantization unit 1160 and the inverse transformation unit 1170, and the restored data of the spatial domain is post-processed through the deblocking unit 1180 and the loop filtering unit 1190 and output as the reference frame 1195. The quantized transform coefficients may be output as a bitstream 1155 through the entropy encoding unit 1150.

In order to be applied to the image encoding apparatus 400 according to an embodiment, all of the components of the image encoder 1100, that is, the intra prediction unit 1110, the motion estimation unit 1120, the motion compensation unit 1125, the transformation unit 1130, the quantization unit 1140, the entropy coding unit 1150, the inverse quantization unit 1160, the inverse transformation unit 1170, the deblocking unit 1180, and the loop filtering unit 1190, must perform operations based on each coding unit among the coding units according to a tree structure, for each maximum coding unit, in consideration of the maximum depth.

In particular, the intra prediction unit 1110, the motion estimation unit 1120, and the motion compensation unit 1125 determine the partition and the prediction mode of each coding unit among the coding units according to a tree structure in consideration of the maximum size and the maximum depth of the current maximum coding unit, and the transformation unit 1130 must determine the size of the transformation unit in each coding unit among the coding units according to the tree structure.

The image encoder 1100 classifies the pixels of each maximum coding unit of the reference frame 1195 according to the edge type (or band type), determines the edge direction (or start band position), and may determine the average error value of the reconstructed pixels belonging to each category. For each maximum coding unit, the offset merge information, the offset type, and the offset values may be signaled.

Figure 12 illustrates a block diagram of an image decoder 1200 based on coding units, according to an embodiment of the present disclosure.

A bitstream 1205 passes through a parsing unit 1210, where the encoded image data to be decoded and the information about encoding necessary for the decoding are parsed. The encoded image data is output as inverse-quantized data through the entropy decoding unit 1220 and the inverse quantization unit 1230, and the image data of the spatial domain is restored through the inverse transformation unit 1240.

With respect to the image data of the spatial domain, the intra prediction unit 1250 performs intra prediction on coding units of the intra mode, and the motion compensation unit 1260 performs motion compensation on coding units of the inter mode using the reference frame 1285.

The data of the spatial domain predicted through the intra prediction unit 1250 and the motion compensation unit 1260 may be used in combination in the combined prediction unit 1270. In FIG. 12, the data that has passed through the intra prediction unit 1250 and the motion compensation unit 1260 appears to unconditionally pass through the combined prediction unit 1270, but this is not the case: based on the combined prediction information received from the image encoding apparatus 400, the combined prediction may not be performed. In that case, the data of the spatial domain that has passed through the intra prediction unit 1250 and the motion compensation unit 1260 may be output to the deblocking unit 1275. Since the combined prediction has already been described in detail with reference to FIGS. 1, 2, and 6, a detailed description thereof is omitted.

The data of the spatial domain that has passed through the combined prediction unit 1270 may be post-processed through the deblocking unit 1275 and the loop filtering unit 1280 and output as the reconstructed frame 1295. Further, the data post-processed through the deblocking unit 1275 and the loop filtering unit 1280 may be output as the reference frame 1285.

In order to decode image data in the decoding unit 220 of the image decoding apparatus 200, the step-by-step operations after the parsing unit 1210 of the image decoder 1200 according to an embodiment may be performed.

In order to be applied to the image decoding apparatus 200 according to an embodiment, all of the components of the image decoder 1200, that is, the parsing unit 1210, the entropy decoding unit 1220, the inverse quantization unit 1230, the inverse transformation unit 1240, the intra prediction unit 1250, the motion compensation unit 1260, the deblocking unit 1275, and the loop filtering unit 1280, must perform operations based on the coding units according to a tree structure for each maximum coding unit.

In particular, the intra prediction unit 1250 and the motion compensation unit 1260 determine the partition and the prediction mode for each of the coding units according to the tree structure, and the inverse transformation unit 1240 must determine the size of the transformation unit for each coding unit.

The image decoding unit 1200 may extract offset parameters of maximum coding units from the bitstream. Based on offset merge information among the offset parameters of the current maximum coding unit, the current offset parameters may be restored by using the offset parameters of a neighboring maximum coding unit. For example, the current offset parameters may be restored to be the same as the offset parameters of the neighboring maximum coding unit. By using the offset type and the offset values among the offset parameters of the current maximum coding unit, each reconstructed pixel of each maximum coding unit of the reconstructed frame 1295 may be adjusted by the offset value corresponding to its category according to the edge type or the band type.
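The offset restoration described above can be sketched as follows. This is a minimal illustrative model, not the patent's implementation: the dictionary layout, the `merge_left`/`merge_up` keys, and the `categorize` callback are all assumptions introduced for the example.

```python
def restore_offset_params(current_params, left_params, up_params):
    # If offset merge information indicates merging with a neighboring
    # maximum coding unit, copy that neighbor's parameters; otherwise
    # use the explicitly signaled parameters. (Illustrative sketch.)
    if current_params.get("merge_left") and left_params is not None:
        return dict(left_params)
    if current_params.get("merge_up") and up_params is not None:
        return dict(up_params)
    return current_params

def apply_offsets(pixels, params):
    # Adjust each reconstructed pixel by the offset of its category,
    # e.g. a band index derived from the pixel intensity.
    categorize = params["categorize"]   # maps a pixel to a category index
    offsets = params["offsets"]         # one offset value per category
    return [p + offsets[categorize(p)] for p in pixels]
```

For example, with a band-type categorization by intensity and merge-left signaled, the current maximum coding unit reuses the left neighbor's offsets unchanged.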

Figure 13 illustrates deeper coding units according to depths, and partitions, according to an embodiment of the present disclosure.

The image encoding apparatus 400 according to an embodiment and the image decoding apparatus 200 according to an embodiment use hierarchical coding units so as to take image characteristics into account. The maximum height, maximum width, and maximum depth of coding units may be adaptively determined according to the characteristics of the image, or may be variously set according to user requirements. The sizes of the deeper coding units according to depths may be determined according to the predetermined maximum size of the coding unit.

In a hierarchical structure 1300 of coding units according to an embodiment, the maximum height and the maximum width of the coding units are each 64, and the maximum depth is 3. Here, the maximum depth refers to the total number of times the coding unit is split from the maximum coding unit down to the minimum coding unit. Since the depth deepens along the vertical axis of the hierarchical structure 1300 of coding units according to an embodiment, the height and the width of each deeper coding unit are split. Also, the prediction unit and the partitions that are the basis for prediction encoding of each deeper coding unit are shown along the horizontal axis of the hierarchical structure 1300.

That is, the coding unit 1310 is the maximum coding unit in the hierarchical structure 1300, with a depth of 0 and a coding unit size, i.e., height by width, of 64x64. The depth deepens along the vertical axis: a coding unit 1320 having a size of 32x32 and a depth of 1, a coding unit 1330 having a size of 16x16 and a depth of 2, and a coding unit 1340 having a size of 8x8 and a depth of 3. The coding unit 1340 having a size of 8x8 and a depth of 3 is the minimum coding unit.
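The depth-to-size relationship above (height and width halved at each deeper depth) can be sketched directly; the function name is ours, not the patent's:

```python
def coding_unit_size(max_size, depth):
    # Side length (height = width) of a square coding unit at the given
    # depth, assuming both dimensions are halved per depth as in Figure 13.
    return max_size >> depth

# With a 64x64 maximum coding unit and maximum depth 3:
sizes = [coding_unit_size(64, d) for d in range(4)]
```

This yields the 64, 32, 16, 8 progression of the hierarchical structure 1300.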

The prediction unit and the partitions of each coding unit are arranged along the horizontal axis for each depth. That is, if the coding unit 1310 having a size of 64x64 and a depth of 0 is a prediction unit, the prediction unit may be split into partitions included in the coding unit 1310 of size 64x64: a partition 1310 having a size of 64x64, partitions 1312 having a size of 64x32, partitions 1314 having a size of 32x64, or partitions 1316 having a size of 32x32.

Similarly, a prediction unit of the coding unit 1320 having a size of 32x32 and a depth of 1 may be split into partitions included in the coding unit 1320 of size 32x32: a partition 1320 having a size of 32x32, partitions 1322 having a size of 32x16, partitions 1324 having a size of 16x32, or partitions 1326 having a size of 16x16.

Similarly, a prediction unit of the coding unit 1330 having a size of 16x16 and a depth of 2 may be split into partitions included in the coding unit 1330 of size 16x16: a partition 1330 having a size of 16x16, partitions 1332 having a size of 16x8, partitions 1334 having a size of 8x16, or partitions 1336 having a size of 8x8.

Similarly, a prediction unit of the coding unit 1340 having a size of 8x8 and a depth of 3 may be split into partitions included in the coding unit 1340 of size 8x8: a partition 1340 having a size of 8x8, partitions 1342 having a size of 8x4, partitions 1344 having a size of 4x8, or partitions 1346 having a size of 4x4.
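The symmetric partition shapes enumerated above follow one pattern at every depth; a small sketch (names ours, not the patent's) makes it explicit:

```python
def symmetric_partitions(size):
    # The four symmetric partition shapes of a square coding unit of the
    # given size, as (width, height) tuples: 2Nx2N, 2NxN, Nx2N, and NxN.
    n = size // 2
    return [(size, size), (size, n), (n, size), (n, n)]
```

For a 64x64 coding unit this reproduces the 64x64, 64x32, 32x64, and 32x32 partitions 1310, 1312, 1314, and 1316.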

In order to determine the depth of the maximum coding unit 1310, the encoding unit 410 of the image encoding apparatus 400 according to an embodiment must perform encoding for each coding unit of each depth included in the maximum coding unit 1310.

The number of deeper coding units according to depths that contain data of the same range and size increases as the depth deepens. According to an embodiment of the present disclosure, four coding units of depth 2 may be required to cover data included in one coding unit of depth 1. Accordingly, in order to compare the encoding results of the same data according to depths, the data may be encoded using each of the one coding unit of depth 1 and the four coding units of depth 2.

According to another embodiment of the present disclosure, two coding units of depth 2 may be required to cover data included in one coding unit of depth 1. Accordingly, in order to compare the encoding results of the same data according to depths, the data may be encoded using each of the one coding unit of depth 1 and the two coding units of depth 2.

For each depth, encoding is performed for each prediction unit of the deeper coding units along the horizontal axis of the hierarchical structure 1300, and the smallest encoding error at that depth is selected as the representative encoding error. Further, as the depth deepens along the vertical axis of the hierarchical structure 1300, encoding is performed for each depth, and the minimum encoding error may be searched for by comparing the representative encoding errors according to depths. The depth and the partition generating the minimum encoding error in the maximum coding unit 1310 may be selected as the depth and the partition mode of the maximum coding unit 1310.
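The depth selection described above is a recursive comparison: encode the block at the current depth, or split it into four sub-blocks of the lower depth and take whichever gives the smaller error. A minimal sketch follows; `encode_error` is a hypothetical cost callback (representing the representative encoding error of encoding the block as one coding unit), not something defined in the patent.

```python
def min_coding_error(x, y, size, depth, max_depth, encode_error):
    # Smallest achievable encoding error for the block at (x, y): either
    # encode at the current depth, or split into four lower-depth
    # sub-blocks and recurse, keeping the cheaper option.
    here = encode_error(x, y, size, depth)
    if depth == max_depth:
        return here          # minimum coding unit: no further split
    half = size // 2
    split = sum(
        min_coding_error(x + dx, y + dy, half, depth + 1, max_depth, encode_error)
        for dx in (0, half) for dy in (0, half))
    return min(here, split)
```

With a cost model where the error of a block equals its side length, splitting never pays off and the search keeps the 64x64 coding unit.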

Figure 14 illustrates a relationship between a coding unit and transformation units, according to an embodiment of the present disclosure.

The image encoding apparatus 400 according to an embodiment or the image decoding apparatus 200 according to an embodiment encodes or decodes an image in coding units having sizes smaller than or equal to the maximum coding unit, for each maximum coding unit. The sizes of the transformation units for transformation during the encoding process may be selected based on data units that are not larger than the corresponding coding unit.

For example, in the image encoding apparatus 400 according to an embodiment or the image decoding apparatus 200 according to an embodiment, if the size of the current coding unit 1410 is 64x64, transformation may be performed by using a transformation unit 1420 having a size of 32x32.

Also, the data of the coding unit 1410 having the size of 64x64 may be encoded by performing transformation on each of the transformation units having sizes of 32x32, 16x16, 8x8, and 4x4, all smaller than or equal to 64x64, and then the transformation unit having the least error with respect to the original may be selected.

The image decoding apparatus 200 may determine at least one transformation unit split from the coding unit by using information about the split type of the transformation unit parsed from the encoded bitstream. The image decoding apparatus 200 may split the transformation unit hierarchically in the same manner as the coding unit described above. One coding unit may include a plurality of transformation units.

A transformation unit may have a square shape. The length of one side of the transformation unit may be the greatest common divisor of the height of the coding unit and the width of the coding unit. For example, for a coding unit having a size of 24x16, the greatest common divisor of 24 and 16 is 8. Therefore, the transformation unit may have a square shape with a size of 8x8, and the coding unit of size 24x16 may include six transformation units of size 8x8. Since square transformation units are conventionally used, no additional basis is required if the transformation unit is square.
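The greatest-common-divisor tiling above can be checked with a short sketch (function name ours):

```python
from math import gcd

def square_transform_units(width, height):
    # Side length and count of the square transformation units that tile
    # a width x height coding unit, the side being gcd(width, height).
    side = gcd(width, height)
    return side, (width // side) * (height // side)
```

For the 24x16 coding unit of the example this gives 8x8 transformation units, six of them.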

However, the present disclosure is not limited thereto, and the image decoding apparatus 200 may determine transformation units of an arbitrary rectangular shape included in the coding unit. In this case, the image decoding apparatus 200 may have a basis corresponding to the rectangular shape.

Further, the image decoding apparatus 200 may hierarchically split the coding unit into transformation units of at least one of the current depth and a lower depth, based on the information about the split type of the transformation unit. For example, for a coding unit having a size of 24x16, the image decoding apparatus 200 may split the coding unit into six transformation units having a size of 8x8. In addition, the image decoding apparatus 200 may split at least one of the six transformation units into transformation units having a size of 4x4.

Further, the image decoding apparatus 200 may parse, from the bitstream, encoding information indicating whether a transform coefficient exists for the coding unit. If the encoding information indicates that a transform coefficient exists, the image decoding apparatus 200 may parse, from the bitstream, sub-encoding information indicating whether a transform coefficient exists for each of the transformation units included in the coding unit.

For example, if the encoding information indicates that no transform coefficient exists for the coding unit, the image decoding apparatus 200 does not parse the sub-encoding information. If the encoding information indicates that a transform coefficient exists for the coding unit, the image decoding apparatus 200 may parse the sub-encoding information.
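The two-level signaling above can be sketched as follows; `read_flag` is a hypothetical stand-in for reading one flag from the bitstream, not an interface from the patent:

```python
def parse_coefficient_flags(read_flag, num_transform_units):
    # A coding-unit-level flag says whether any transform coefficient
    # exists; only if it does are the per-transformation-unit flags
    # parsed from the bitstream.
    cu_has_coeff = read_flag()
    if not cu_has_coeff:
        return [False] * num_transform_units   # sub-information not parsed
    return [read_flag() for _ in range(num_transform_units)]
```

When the coding-unit-level flag is 0, no further bits are consumed for that coding unit.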

Figure 15 illustrates encoding information according to an embodiment of the present disclosure.

The transmission unit 420 of the image encoding apparatus 400 according to an embodiment may encode and transmit, for each coding unit of each depth, information 1500 about a partition mode, information 1510 about a prediction mode, and information 1520 about a transformation unit size.

The information 1500 about the partition mode indicates the shape of the partitions into which the prediction unit of the current coding unit is split, as data units for prediction-encoding the current coding unit. For example, a current coding unit CU_0 having a size of 2Nx2N may be split and used as any one of a partition 1502 having a size of 2Nx2N, a partition 1504 having a size of 2NxN, a partition 1506 having a size of Nx2N, and a partition 1508 having a size of NxN. In this case, the information 1500 about the partition mode of the current coding unit is set to indicate one of the partition 1502 of size 2Nx2N, the partition 1504 of size 2NxN, the partition 1506 of size Nx2N, and the partition 1508 of size NxN.

However, the partition mode is not limited thereto, and may include asymmetric partitions, partitions of an arbitrary shape, partitions of a geometric shape, and the like. For example, a current coding unit CU_0 having a size of 4Nx4N may be split and used as any one of a partition of size 4NxN, a partition of size 4Nx2N, a partition of size 4Nx3N, a partition of size 4Nx4N, a partition of size 3Nx4N, a partition of size 2Nx4N, a partition of size 1Nx4N, and a partition of size 2Nx2N. Also, a current coding unit CU_0 having a size of 3Nx3N may be split and used as any one of a partition of size 3NxN, a partition of size 3Nx2N, a partition of size 3Nx3N, a partition of size 2Nx3N, a partition of size 1Nx3N, and a partition of size 2Nx2N. Further, although the case where the current coding unit is square has been described above, the current coding unit may have an arbitrary rectangular shape.

The information 1510 about the prediction mode indicates the prediction mode of each partition. For example, through the information 1510 about the prediction mode, it may be set whether the partition indicated by the information 1500 about the partition mode is prediction-encoded in one of an intra mode 1512, an inter mode 1514, a skip mode 1516, and a combined mode 1518.

Further, the information 1520 about the transformation unit size indicates the transformation unit on which the transformation of the current coding unit is to be based. For example, the transformation unit may be one of a first intra transformation unit size 1522, a second intra transformation unit size 1524, a first inter transformation unit size 1526, and a second inter transformation unit size 1528.

The receiving unit 210 of the image decoding apparatus 200 according to an embodiment may extract the information 1500 about the partition mode, the information 1510 about the prediction mode, and the information 1520 about the transformation unit size for each coding unit of each depth, and use them for decoding.

Figure 16 shows deeper coding units according to depths, according to an embodiment of the present disclosure.

Split information may be used to indicate a change of depth. The split information indicates whether a coding unit of the current depth is split into coding units of a lower depth.

A prediction unit 1610 for prediction-encoding a coding unit 1600 having a depth of 0 and a size of 2N_0x2N_0 may include a partition mode 1612 of size 2N_0x2N_0, a partition mode 1614 of size 2N_0xN_0, a partition mode 1616 of size N_0x2N_0, and a partition mode 1618 of size N_0xN_0. Although only the partition modes 1612, 1614, 1616, and 1618 in which the prediction unit is split at symmetric ratios are illustrated, as described above, the partition mode is not limited thereto and may include asymmetric partitions, partitions of an arbitrary shape, partitions of a geometric shape, and the like.

For each partition mode, prediction encoding must be repeatedly performed on one partition of size 2N_0x2N_0, two partitions of size 2N_0xN_0, two partitions of size N_0x2N_0, and four partitions of size N_0xN_0. For the partitions of sizes 2N_0x2N_0, 2N_0xN_0, N_0x2N_0, and N_0xN_0, prediction encoding may be performed in the intra mode, the inter mode, and the combined mode. Prediction encoding in the skip mode may be performed only on the partition of size 2N_0x2N_0.

If the encoding error caused by one of the partition modes 1612, 1614, and 1616 of sizes 2N_0x2N_0, 2N_0xN_0, and N_0x2N_0 is the smallest, splitting into a lower depth is no longer necessary.

If the encoding error caused by the partition mode 1618 of size N_0xN_0 is the smallest, the depth is changed from 0 to 1 and the coding unit is split in operation 1620, and encoding is repeatedly performed on coding units 1630 of the partition mode of depth 2 and size N_0xN_0 to continue searching for the minimum encoding error.

A prediction unit 1640 for prediction-encoding a coding unit 1630 having a depth of 1 and a size of 2N_1x2N_1 (=N_0xN_0) may include a partition mode 1642 of size 2N_1x2N_1, a partition mode 1644 of size 2N_1xN_1, a partition mode 1646 of size N_1x2N_1, and a partition mode 1648 of size N_1xN_1.

Also, if the encoding error caused by the partition mode 1648 of size N_1xN_1 is the smallest, the depth is changed from 1 to 2 and the coding unit is split in operation 1650, and encoding is repeatedly performed on coding units 1660 having a depth of 2 and a size of N_2xN_2 to continue searching for the minimum encoding error.

When the maximum depth is d, deeper coding units according to depths may be set up to when the depth becomes d-1, and split information may be set up to when the depth becomes d-2. That is, when encoding is performed up to the depth of d-1 after the coding unit is split in operation 1670 at the depth of d-2, a prediction unit 1690 for prediction-encoding a coding unit 1680 having a depth of d-1 and a size of 2N_(d-1)x2N_(d-1) may include a partition mode 1692 of size 2N_(d-1)x2N_(d-1), a partition mode 1694 of size 2N_(d-1)xN_(d-1), a partition mode 1696 of size N_(d-1)x2N_(d-1), and a partition mode 1698 of size N_(d-1)xN_(d-1).

Among the partition modes, prediction encoding is repeatedly performed on one partition of size 2N_(d-1)x2N_(d-1), two partitions of size 2N_(d-1)xN_(d-1), two partitions of size N_(d-1)x2N_(d-1), and four partitions of size N_(d-1)xN_(d-1), so that the partition mode generating the smallest encoding error may be searched for.

Even when the encoding error caused by the partition mode 1698 of size N_(d-1)xN_(d-1) is the smallest, since the maximum depth is d, the coding unit CU_(d-1) of depth d-1 no longer goes through the process of splitting into a lower depth; the depth of the current maximum coding unit 1600 is determined to be d-1, and the partition mode may be determined to be N_(d-1)xN_(d-1). Also, since the maximum depth is d, split information is not set for the coding unit 1652 of depth d-1.

A data unit 1699 may be referred to as a "minimum unit" for the current maximum coding unit. The minimum unit according to an embodiment may be a square data unit obtained by splitting the minimum coding unit having the lowermost depth into four. Through such an iterative encoding process, the image encoding apparatus 400 according to an embodiment may compare the encoding errors according to depths of the coding unit 1600, select the depth generating the smallest encoding error to determine the depth, and set the corresponding partition mode and prediction mode as the encoding mode of that depth.

In this way, the minimum encoding errors according to depths are compared for all of the depths 0, 1, ..., d-1, and d, and the depth having the smallest error may be selected. The depth, and the partition mode and prediction mode of the prediction unit, may be encoded and transmitted as split information. In addition, since the coding unit must be split from depth 0 to the selected depth, only the split information of the selected depth is set to '0', and the split information of the depths other than the selected depth must be set to '1'.

The image decoding apparatus 200 according to various embodiments may extract and use the information about the depth and the prediction unit of the coding unit 1600 to decode the coding unit 1612. The image decoding apparatus 200 according to various embodiments may identify the depth at which the split information is '0' as the selected depth by using the split information according to depths, and may use the split information for that depth for decoding.

Figures 17, 18, and 19 illustrate a relationship between coding units, prediction units, and transformation units, according to an embodiment of the present disclosure.

Coding units 1710 are deeper coding units according to depths determined by the image encoding apparatus 400 according to an embodiment, with respect to a maximum coding unit. Prediction units 1760 are partitions of the prediction units of the respective deeper coding units among the coding units 1710, and transformation units 1770 are the transformation units of the respective deeper coding units.

When the depth of the maximum coding unit among the deeper coding units 1710 is 0, the coding units 1712 and 1754 have a depth of 1, the coding units 1714, 1716, 1718, 1728, 1750, and 1752 have a depth of 2, the coding units 1720, 1722, 1724, 1726, 1730, 1732, and 1748 have a depth of 3, and the coding units 1740, 1742, 1744, and 1746 have a depth of 4.

Some partitions 1714, 1716, 1722, 1732, 1748, 1750, 1752, and 1754 among the prediction units 1760 are obtained by splitting the coding units. That is, the partitions 1714, 1722, 1750, and 1754 are of the partition mode 2NxN, the partitions 1716, 1748, and 1752 are of the partition mode Nx2N, and the partition 1732 is of the partition mode NxN. The prediction units and partitions of the deeper coding units 1710 are smaller than or equal to the respective coding units.

Transformation or inverse transformation is performed on the image data of the coding unit 1752 among the transformation units 1770, using a data unit of a smaller size than the coding unit. In addition, the transformation units 1714, 1716, 1722, 1732, 1748, 1750, 1752, and 1754 are data units of different sizes or shapes when compared with the corresponding prediction units and partitions among the prediction units 1760. That is, the image decoding apparatus 200 according to an embodiment and the image encoding apparatus 400 according to an embodiment may perform the intra prediction, motion estimation, and motion compensation operations and the transformation and inverse transformation operations on the same coding unit, each based on separate data units.

In this way, encoding is recursively performed on each of the coding units having a hierarchical structure in each region of each maximum coding unit, so that an optimal coding unit may be determined, and coding units according to a recursive tree structure may be configured. The encoding information may include split information about the coding unit, partition mode information, prediction mode information, and transformation unit size information. Table 2 below shows an example that may be set by the image encoding apparatus 400 according to an embodiment and the image decoding apparatus 200 according to an embodiment.

Table 2

Split information 0 (encoding on a coding unit of the current depth d, size 2Nx2N):
- Prediction mode: intra, inter, or skip (2Nx2N only)
- Partition type: symmetric partition types 2Nx2N, 2NxN, Nx2N, NxN; asymmetric partition types 2NxnU, 2NxnD, nLx2N, nRx2N, etc.
- Transformation unit size: 2Nx2N when the transformation unit split information is 0; NxN (symmetric partition type) or N/2xN/2 (asymmetric partition type), etc., when the transformation unit split information is 1

Split information 1:
- Repeatedly encode each of the coding units of the lower depth d+1

The transmission unit 420 of the image encoding apparatus 400 according to an embodiment may output the encoding information about the coding units having a tree structure, and the receiving unit 210 of the image decoding apparatus 200 according to an embodiment may extract the encoding information about the coding units having a tree structure from the received bitstream.

The split information indicates whether the current coding unit is split into coding units of a lower depth. If the split information of the current depth d is 0, since the current coding unit is no longer split into lower coding units, partition mode information, prediction mode information, and transformation unit size information may be defined for the coding unit of the current depth. If the coding unit must be further split according to the split information, encoding must be independently performed on each of the four split coding units of the lower depth.

The prediction mode may be expressed as one of an intra mode, an inter mode, and a skip mode. The intra mode and the inter mode may be defined in all partition modes, and the skip mode may be defined only in the partition mode 2Nx2N.

The partition mode information indicates the symmetric partition modes 2Nx2N, 2NxN, Nx2N, and NxN, in which the height or width of the prediction unit is split at symmetric ratios, and the asymmetric partition modes 2NxnU, 2NxnD, nLx2N, and nRx2N, in which the height or width is split at asymmetric ratios. The asymmetric partition modes 2NxnU and 2NxnD are forms in which the height is split at ratios of 1:3 and 3:1, respectively, and the asymmetric partition modes nLx2N and nRx2N are forms in which the width is split at ratios of 1:3 and 3:1, respectively.
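The partition dimensions implied by these ratios can be tabulated explicitly. This is an illustrative sketch (table and function names ours), taking the half-size N of a 2Nx2N coding unit:

```python
def partition_sizes(mode, n):
    # (width, height) of each partition of a 2Nx2N coding unit.
    # 2NxnU/2NxnD split the height 1:3 / 3:1; nLx2N/nRx2N split the
    # width 1:3 / 3:1, i.e. into N/2 and 3N/2.
    two_n, half = 2 * n, n // 2
    table = {
        "2Nx2N": [(two_n, two_n)],
        "2NxN":  [(two_n, n)] * 2,
        "Nx2N":  [(n, two_n)] * 2,
        "NxN":   [(n, n)] * 4,
        "2NxnU": [(two_n, half), (two_n, two_n - half)],
        "2NxnD": [(two_n, two_n - half), (two_n, half)],
        "nLx2N": [(half, two_n), (two_n - half, two_n)],
        "nRx2N": [(two_n - half, two_n), (half, two_n)],
    }
    return table[mode]
```

For N=16 (a 32x32 coding unit), 2NxnU yields a 32x8 partition above a 32x24 partition, the 1:3 height split.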

The transformation unit size may be set to two sizes in the intra mode and two sizes in the inter mode. That is, if the transformation unit split information is 0, the size of the transformation unit is set to 2Nx2N, the size of the current coding unit. If the transformation unit split information is 1, transformation units may be set by splitting the current coding unit. Also, if the partition mode of the current coding unit of size 2Nx2N is a symmetric partition mode, the size of the transformation unit may be set to NxN, and if it is an asymmetric partition mode, the size of the transformation unit may be set to N/2xN/2.

The encoding information about coding units having a tree structure according to an embodiment may be assigned to at least one of a coding unit of the depth, a prediction unit, and a minimum unit. The coding unit of the depth may include at least one prediction unit and at least one minimum unit that have the same encoding information.

Therefore, by checking the encoding information held by adjacent data units, it may be determined whether they are included in a coding unit of the same depth. Also, the coding unit of the corresponding depth may be identified by using the encoding information held by a data unit, so that the distribution of depths within the maximum coding unit may be derived.

Therefore, in this case, when the current coding unit is predicted by referring to neighboring data units, the encoding information of the data units in the deeper coding units adjacent to the current coding unit may be directly referred to and used.

In another embodiment, if the current coding unit is prediction-encoded by referring to neighboring coding units, data adjacent to the current coding unit within the deeper coding units may be searched for by using the encoding information of the adjacent deeper coding units, and the found neighboring coding units may be referred to.

Figure 20 illustrates a relationship between a coding unit, a prediction unit, and a transformation unit according to the encoding mode information of Table 2.

The maximum coding unit 2000 includes coding units 2002, 2004, 2006, 2012, 2014, 2016, and 2018 of depths. Since the coding unit 2018 among these is a coding unit of a selected depth, its split information may be set to 0. The partition mode information of the coding unit 2018 of size 2Nx2N may be set to one of the partition modes 2Nx2N 2022, 2NxN 2024, Nx2N 2026, NxN 2028, 2NxnU 2032, 2NxnD 2034, nLx2N 2036, and nRx2N 2038.

The transformation unit split information (TU size flag) is a type of transformation index, and the size of the transformation unit corresponding to the transformation index may change according to the prediction unit type or the partition mode of the coding unit.

For example, if the partition mode information is set to one of the symmetric partition modes 2Nx2N 2022, 2NxN 2024, Nx2N 2026, and NxN 2028, a transformation unit 2042 of size 2Nx2N is set when the transformation unit split information is 0, and a transformation unit 2044 of size NxN may be set when the transformation unit split information is 1.

If the partition mode information is set to one of the asymmetric partition modes 2NxnU 2032, 2NxnD 2034, nLx2N 2036, and nRx2N 2038, a transformation unit 2052 of size 2Nx2N may be set when the transformation unit split information (TU size flag) is 0, and a transformation unit 2054 of size N/2xN/2 may be set when the transformation unit split information is 1.
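The TU-size mapping described in the two paragraphs above reduces to one small function. This is an illustrative sketch of that mapping only (names ours); `cu_size` stands for the 2N side of the 2Nx2N coding unit:

```python
def transform_unit_size(cu_size, partition_mode, tu_split_flag):
    # Flag 0 keeps the coding unit size (2Nx2N). Flag 1 halves the side
    # for symmetric partition modes (NxN) and quarters it for asymmetric
    # ones (N/2xN/2).
    symmetric = partition_mode in ("2Nx2N", "2NxN", "Nx2N", "NxN")
    if tu_split_flag == 0:
        return cu_size
    return cu_size // 2 if symmetric else cu_size // 4
```

For a 64x64 coding unit, a symmetric mode with flag 1 gives 32x32 transformation units, while an asymmetric mode gives 16x16.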

Although the transformation unit split information (TU size flag) described above with reference to Figure 19 is a flag having a value of 0 or 1, the transformation unit split information according to an embodiment is not limited to a 1-bit flag; the transformation unit may be split hierarchically while the transformation unit split information increases as 0, 1, 2, 3, and so on. The transformation unit split information may be used as an embodiment of the transformation index.

In this case, by using the transformation unit split information according to an embodiment together with the maximum size of the transformation unit and the minimum size of the transformation unit, the size of the transformation unit actually used may be expressed. The image encoding apparatus 400 according to an embodiment may encode maximum transformation unit size information, minimum transformation unit size information, and maximum transformation unit split information. The encoded maximum transformation unit size information, minimum transformation unit size information, and maximum transformation unit split information may be inserted into an SPS. The image decoding apparatus 200 according to an embodiment may use the maximum transformation unit size information, the minimum transformation unit size information, and the maximum transformation unit split information for video decoding.

For example, (a) if the size of the current coding unit is 64x64 and the maximum transformation unit size is 32x32, the size of the transformation unit may be set to (a-1) 32x32 when the transformation unit split information is 0, (a-2) 16x16 when the transformation unit split information is 1, and (a-3) 8x8 when the transformation unit split information is 2.

As another example, (b) if the size of the current coding unit is 32x32 and the minimum transformation unit size is 32x32, (b-1) the size of the transformation unit may be set to 32x32 when the transformation unit split information is 0. Since the size of the transformation unit cannot be smaller than 32x32, no further transformation unit split information may be set.

As another example, (c) if the size of the current coding unit is 64x64 and the maximum transformation unit split information is 1, the transformation unit split information may be 0 or 1, and no other transformation unit split information may be set.
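Examples (a), (b), and (c) above can be reproduced by enumerating the transformation unit sizes reachable under the three constraints. This sketch (names ours) is one way to model them:

```python
def allowed_tu_sizes(cu_size, max_tu_size, min_tu_size, max_split_info):
    # Transformation unit size for each value of the transformation unit
    # split information, honoring the maximum TU size, the minimum TU
    # size, and the maximum split information.
    sizes = []
    size = min(cu_size, max_tu_size)   # split information 0
    for _ in range(max_split_info + 1):
        sizes.append(size)
        if size // 2 < min_tu_size:    # no smaller TU may be signaled
            break
        size //= 2
    return sizes
```

Example (a) yields 32, 16, 8; example (b) yields only 32; example (c) yields 64, 32.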

Therefore, when the maximum transformation unit split information is defined as 'MaxTransformSizeIndex', the minimum transformation unit size as 'MinTransformSize', and the transformation unit size when the transformation unit split information is 0 as 'RootTuSize', the minimum transformation unit size 'CurrMinTuSize' possible in the current coding unit may be defined as the following Equation (11).

CurrMinTuSize = max(MinTransformSize, RootTuSize / (2^MaxTransformSizeIndex)) ... (11)

Compared with the minimum transformation unit size 'CurrMinTuSize' possible in the current coding unit, 'RootTuSize', the transformation unit size when the transformation unit split information is 0, may represent the maximum transformation unit size that can be adopted in the system. That is, according to Equation (11), 'RootTuSize/(2^MaxTransformSizeIndex)' is the transformation unit size obtained by splitting 'RootTuSize', the transformation unit size when the transformation unit split information is 0, the number of times corresponding to the maximum transformation unit split information, and 'MinTransformSize' is the minimum transformation unit size; therefore, the larger of these two values may be 'CurrMinTuSize', the minimum transformation unit size possible in the current coding unit.
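Equation (11) translates directly into code; the function name is ours, and sizes are assumed to be powers of two so the division is a right shift:

```python
def curr_min_tu_size(min_transform_size, root_tu_size, max_transform_size_index):
    # Equation (11): the smallest transformation unit size possible in
    # the current coding unit.
    return max(min_transform_size, root_tu_size >> max_transform_size_index)
```

For instance, with RootTuSize 32 and MaxTransformSizeIndex 2, splitting twice reaches 8; if MinTransformSize is 4 the result is 8, but if MinTransformSize is 16 the floor of 16 wins.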

The maximum transformation unit size 'RootTuSize' according to an embodiment may vary according to the prediction mode.

For example, if the current prediction mode is the inter mode, 'RootTuSize' may be determined according to the following Equation (12). In Equation (12), 'MaxTransformSize' denotes the maximum transformation unit size, and 'PUSize' denotes the current prediction unit size.

RootTuSize = min(MaxTransformSize, PUSize) ... (12)

In other words, if the current prediction mode is the inter mode, 'RootTuSize', the transformation unit size when the transformation unit split information is 0, may be set to the smaller of the maximum transformation unit size and the current prediction unit size.

If the prediction mode of the current partition unit is the intra mode, 'RootTuSize' may be determined according to the following Equation (13). 'PartitionSize' denotes the size of the current partition unit.

RootTuSize = min(MaxTransformSize, PartitionSize) ... (13)

That is, if the current prediction mode is the intra mode, 'RootTuSize', the transformation unit size when the transformation unit split information is 0, may be set to the smaller of the maximum transformation unit size and the current partition unit size.
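Equations (12) and (13) can be combined into one mode-dependent function; the function and mode strings are illustrative, not from the patent:

```python
def root_tu_size(prediction_mode, max_transform_size, pu_size=None, partition_size=None):
    # Equation (12) for inter mode, Equation (13) for intra mode: the
    # transformation unit size when the split information is 0.
    if prediction_mode == "inter":
        return min(max_transform_size, pu_size)          # Equation (12)
    if prediction_mode == "intra":
        return min(max_transform_size, partition_size)   # Equation (13)
    raise ValueError("unsupported prediction mode")
```

So a 64x64 inter prediction unit with a maximum transformation size of 32 yields RootTuSize 32, while a 16x16 intra partition yields 16.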

However, it should be noted that the current maximum transformation unit size 'RootTuSize' according to an embodiment, which varies according to the prediction mode of the partition unit, is merely an example, and the factor determining the current maximum transformation unit size is not limited thereto.

According to the image encoding method based on coding units having a tree structure described above with reference to Figures 5 through 20, image data of the spatial domain is encoded for each coding unit of the tree structure, and according to the image decoding method based on coding units having a tree structure, decoding is performed for each maximum coding unit while the image data of the spatial domain is restored, so that pictures and video as a sequence of pictures may be restored. The restored video may be played back by a playback apparatus, stored in a storage medium, or transmitted over a network.

In addition, an offset parameter may be signaled for each picture, each slice, each maximum coding unit, each coding unit of the tree structure, each prediction unit of a coding unit, or each transformation unit of a coding unit. In one example, by adjusting the reconstructed sample values of each maximum coding unit using the offset value reconstructed based on the offset parameter received for that maximum coding unit, a maximum coding unit whose error relative to the original block is minimized can be reconstructed.
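As a minimal sketch (not the disclosed implementation; the 8-bit clipping range and names are illustrative assumptions), adjusting reconstructed samples by a signaled offset can look like:

```python
def apply_offset(reconstructed_samples, offset):
    """Adjust reconstructed sample values of a maximum coding unit by the
    offset value reconstructed from the signaled offset parameter, clipping
    each adjusted sample to the 8-bit range [0, 255]."""
    return [max(0, min(255, s + offset)) for s in reconstructed_samples]

print(apply_offset([10, 128, 254], 3))  # -> [13, 131, 255]
```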

Meanwhile, the embodiments of the present disclosure described above may be written as programs executable on a computer and implemented on a general-purpose digital computer that operates the programs using a computer-readable recording medium. The computer-readable recording medium includes storage media such as magnetic storage media (e.g., ROM, floppy disks, hard disks, etc.) and optical recording media (e.g., CD-ROMs, DVDs, etc.).

At least some of the "units" described herein may be implemented in hardware. The hardware may include a processor. The processor may be a general-purpose single- or multi-chip microprocessor (e.g., an ARM), a special-purpose microprocessor (e.g., a digital signal processor (DSP)), a microcontroller, a programmable gate array, or the like. A processor may also be referred to as a central processing unit (CPU). At least some of the "units" may also be implemented using a combination of processors (e.g., an ARM and a DSP).

The hardware may also include a memory. The memory may be any electronic component capable of storing electronic information. The memory may be implemented as random access memory (RAM), read-only memory (ROM), magnetic disk storage media, optical storage media, flash memory devices in RAM, on-board memory included with a processor, EPROM memory, EEPROM memory, registers, and so forth, including combinations thereof.

Data and programs may be stored in the memory. The programs may be executable by the processor to implement the methods disclosed herein. Execution of the programs may include use of the data stored in the memory. When the processor executes instructions, various portions of the instructions may be loaded onto the processor, and various pieces of data may be loaded onto the processor.

The present disclosure has been described above with reference to its preferred embodiments. Those of ordinary skill in the art will appreciate that the present disclosure may be implemented in modified forms without departing from the essential characteristics of the present disclosure. Therefore, the disclosed embodiments should be considered in a descriptive sense only and not for purposes of limitation. The scope of the present disclosure is defined not by the detailed description above but by the appended claims, and all differences within the equivalent scope thereof will be construed as being included in the present disclosure.

Claims (16)

  1. An image decoding method comprising: parsing, from a bitstream, combined prediction information indicating whether intra prediction and inter prediction are combined for a current block;
    determining, based on the combined prediction information, whether to perform combined prediction on the current block;
    when the combined prediction is performed, obtaining a first prediction value by performing the inter prediction on the current block and obtaining a second prediction value by performing the intra prediction on the current block;
    determining a weight for the inter prediction and a weight for the intra prediction based on at least one of a distance between a reference picture and a current picture, a size of the current block, and characteristics of the inter prediction and the intra prediction; and
    performing the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value.
  2. The image decoding method of claim 1, further comprising:
    parsing information about available modes from the bitstream;
    selecting, based on the information about the available modes, a plurality of modes from among modes related to prediction directions included in the intra prediction; and
    determining a weight for each of the available modes.
  3. The image decoding method of claim 1, further comprising:
    parsing information about available modes from the bitstream;
    selecting, based on the information about the available modes, a plurality of modes corresponding to a plurality of reference blocks referenced by the current block from among modes included in the inter prediction; and
    determining a weight for each of the available modes.
  4. The image decoding method of claim 1, wherein the performing of the combined prediction comprises calculating (the weight for the inter prediction × the first prediction value) + (the weight for the intra prediction × the second prediction value).
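For illustration only (this sketch is not part of the claim language; the sample-wise loop and the normalized floating-point weights are assumptions), the weighted combination recited in claim 4 can be written as:

```python
def combined_prediction(inter_pred, intra_pred, w_inter, w_intra):
    """(weight for inter prediction x first prediction value)
       + (weight for intra prediction x second prediction value),
    applied sample by sample. Weights here are illustrative floats that
    sum to 1; a practical codec would use integer weights with a shift."""
    return [w_inter * p1 + w_intra * p2
            for p1, p2 in zip(inter_pred, intra_pred)]

print(combined_prediction([100, 120], [80, 110], 0.75, 0.25))  # -> [95.0, 117.5]
```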
  5. The image decoding method of claim 1, wherein the performing of the combined prediction comprises:
    performing the combined prediction on a luma (luminance) channel; and
    performing one of the inter prediction and the intra prediction on a chroma (chrominance) channel.
  6. The image decoding method of claim 1, further comprising:
    parsing information about an accuracy of a motion vector from the bitstream; and
    setting, based on the information about the accuracy of the motion vector, the accuracy of the motion vector of the inter prediction for the current block to one of half-pel, integer-pel, and 2-pel.
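As a hypothetical sketch of the accuracy setting above (the quarter-pel internal representation and the rounding rule are assumptions, not taken from the disclosure), setting the motion-vector accuracy amounts to snapping the vector to the chosen grid:

```python
def round_mv(mv, accuracy):
    """Round a motion vector stored in quarter-pel units to the signaled
    accuracy. Grid step in quarter-pel units:
    half-pel = 2, integer-pel = 4, 2-pel = 8."""
    step = {"half-pel": 2, "integer-pel": 4, "2-pel": 8}[accuracy]
    return (round(mv[0] / step) * step, round(mv[1] / step) * step)

print(round_mv((5, -3), "integer-pel"))  # -> (4, -4)
```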
  7. The image decoding method of claim 1, wherein the determining of the weight comprises:
    parsing weight information for the current block from the bitstream; and
    determining the weight for the inter prediction and the weight for the intra prediction based on the weight information.
  8. The image decoding method of claim 1, wherein the current block comprises a prediction unit used in the inter prediction and a prediction unit used in the intra prediction, and
    the prediction unit used in the inter prediction is determined independently of the prediction unit used in the intra prediction.
  9. The image decoding method of claim 1, wherein the determining of the weight comprises:
    determining a reference weight, which is an initial weight for the inter prediction;
    determining a reference distance;
    determining a difference between the reference distance and the distance between the current picture including the current block and the reference picture of the inter prediction; and
    determining the weight for the inter prediction based on the reference weight and the difference between the distances.
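As an illustrative sketch of the distance-based weight determination above (the linear update rule, the step size, and the 0..32 integer weight range are assumptions, not taken from the disclosure):

```python
def inter_weight(reference_weight, reference_distance, actual_distance,
                 step=4, min_weight=0, max_weight=32):
    """Start from the reference (initial) inter-prediction weight and
    decrease it as the actual current-picture-to-reference-picture
    distance exceeds the reference distance. Integer weights out of 32
    are used so that, e.g., combined = (w*P1 + (32-w)*P2) >> 5."""
    diff = actual_distance - reference_distance
    w = reference_weight - step * diff  # farther reference -> smaller weight
    return max(min_weight, min(max_weight, w))

print(inter_weight(24, reference_distance=1, actual_distance=4))  # -> 12
```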
  10. A computer-readable recording medium having recorded thereon a program for implementing the image decoding method of any one of claims 1 to 9.
  11. An image decoding apparatus comprising: a receiver configured to parse, from a bitstream, combined prediction information indicating whether intra prediction and inter prediction are combined for a current block; and
    a decoder configured to determine, based on the combined prediction information, whether to perform combined prediction on the current block, and when the combined prediction is performed, to obtain a first prediction value by performing the inter prediction on the current block, obtain a second prediction value by performing the intra prediction on the current block, determine a weight for the inter prediction and a weight for the intra prediction based on at least one of a distance between a reference picture and a current picture, a size of the current block, and characteristics of the inter prediction and the intra prediction, and perform the combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value.
  12. An image encoding method comprising: obtaining a first prediction value by performing inter prediction on a current block;
    obtaining a second prediction value by performing intra prediction on the current block;
    determining a weight for the inter prediction and a weight for the intra prediction based on at least one of a distance between a reference picture and a current picture, a size of the current block, and characteristics of the inter prediction and the intra prediction;
    performing combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value;
    determining combined prediction information indicating whether the combined prediction is performed on the current block; and
    transmitting a bitstream including at least one of the combined prediction information and weight information about the weights.
  13. The image encoding method of claim 12, further comprising entropy-encoding at least one of the combined prediction information and the weight information at a lower position than information about the intra prediction and the inter prediction.
  14. The image encoding method of claim 12, wherein the determining of the weight comprises determining the weights based on sample values of original pixels in the current block, the first prediction value, and the second prediction value.
  15. The image encoding method of claim 14, wherein the determining of the weight comprises calculating the weights based on a ratio of an expected value of the sample values of the original pixels to the first prediction value and a ratio of the expected value of the sample values of the original pixels to the second prediction value.
  16. An image encoding apparatus comprising: an encoder configured to obtain a first prediction value by performing inter prediction on a current block, obtain a second prediction value by performing intra prediction on the current block, determine a weight for the inter prediction and a weight for the intra prediction based on at least one of a distance between a reference picture and a current picture, a size of the current block, and characteristics of the inter prediction and the intra prediction, perform combined prediction based on the weight for the inter prediction, the weight for the intra prediction, the first prediction value, and the second prediction value, and determine combined prediction information indicating whether the combined prediction is performed on the current block; and
    a transmitter configured to transmit a bitstream including at least one of the combined prediction information and weight information about the weights.
PCT/KR2015/011873 2014-11-06 2015-11-06 Video encoding method and apparatus, and video decoding method and apparatus WO2016072775A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201462075987P true 2014-11-06 2014-11-06
US62/075,987 2014-11-06

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US15/525,193 US20180288410A1 (en) 2014-11-06 2015-11-06 Video encoding method and apparatus, and video decoding method and apparatus
EP15857835.1A EP3217663A4 (en) 2014-11-06 2015-11-06 Video encoding method and apparatus, and video decoding method and apparatus
KR1020177011969A KR20170084055A (en) 2014-11-06 2015-11-06 Video encoding method and apparatus and video decoding method and apparatus
CN201580070271.7A CN107113425A (en) 2014-11-06 2015-11-06 Video encoding method and apparatus, and video decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2016072775A1 true WO2016072775A1 (en) 2016-05-12

Family

ID=55909416

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2015/011873 WO2016072775A1 (en) 2014-11-06 2015-11-06 Video encoding method and apparatus, and video decoding method and apparatus

Country Status (5)

Country Link
US (1) US20180288410A1 (en)
EP (1) EP3217663A4 (en)
KR (1) KR20170084055A (en)
CN (1) CN107113425A (en)
WO (1) WO2016072775A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018127188A1 (en) * 2017-01-06 2018-07-12 Mediatek Inc. Multi-hypotheses merge mode
WO2018132150A1 (en) * 2017-01-13 2018-07-19 Google Llc Compound prediction for video coding

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20020042935A (en) * 2000-12-01 2002-06-08 조정남 Method for selecting macroblock compression mode in video coding system
KR20070047522A (en) * 2005-11-02 2007-05-07 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding of video
KR20080066706A (en) * 2005-09-27 2008-07-16 Qualcomm Incorporated Channel switch frame
KR101050828B1 (en) * 2003-08-26 2011-07-21 Thomson Licensing Method and apparatus for decoding hybrid intra-inter coded blocks
US20120230405A1 (en) * 2009-10-28 2012-09-13 Media Tek Singapore Pte. Ltd. Video coding methods and video encoders and decoders with localized weighted prediction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008004940A1 (en) * 2006-07-07 2008-01-10 Telefonaktiebolaget Lm Ericsson (Publ) Video data management
US9906786B2 (en) * 2012-09-07 2018-02-27 Qualcomm Incorporated Weighted prediction mode for scalable video coding


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3217663A4 *


Also Published As

Publication number Publication date
EP3217663A4 (en) 2018-02-14
US20180288410A1 (en) 2018-10-04
KR20170084055A (en) 2017-07-19
EP3217663A1 (en) 2017-09-13
CN107113425A (en) 2017-08-29

Similar Documents

Publication Publication Date Title
WO2012023762A2 (en) Method for decoding intra-predictions
WO2012005520A2 (en) Method and apparatus for encoding video by using block merging, and method and apparatus for decoding video by using block merging
WO2013022297A2 (en) Method and device for encoding a depth map of multi viewpoint video data, and method and device for decoding the encoded depth map
WO2011087295A2 (en) Method and apparatus for encoding and decoding video by using pattern information in hierarchical data unit
WO2010002214A2 (en) Image encoding method and device, and decoding method and device therefor
WO2011090314A2 (en) Method and apparatus for encoding and decoding motion vector based on reduced motion vector predictor candidates
WO2011129619A2 (en) Video encoding method and video encoding apparatus and video decoding method and video decoding apparatus, which perform deblocking filtering based on tree-structure encoding units
WO2012043989A2 (en) Method for partitioning block and decoding device
WO2011087321A2 (en) Method and apparatus for encoding and decoding motion vector
WO2011071328A2 (en) Method and apparatus for encoding video, and method and apparatus for decoding video
WO2012081879A1 (en) Method for decoding inter predictive encoded motion pictures
WO2010085064A2 (en) Apparatus and method for motion vector encoding/decoding, and apparatus and method for image encoding/decoding using same
WO2011049397A2 (en) Method and apparatus for decoding video according to individual parsing or decoding in data unit level, and method and apparatus for encoding video for individual parsing or decoding in data unit level
WO2011126275A2 (en) Determining intra prediction mode of image coding unit and image decoding unit
WO2011019250A4 (en) Method and apparatus for encoding video, and method and apparatus for decoding video
WO2011053050A2 (en) Method and apparatus for encoding and decoding coding unit of picture boundary
WO2011074896A2 (en) Adaptive image encoding device and method
WO2011021839A2 (en) Method and apparatus for encoding video, and method and apparatus for decoding video
WO2011019249A2 (en) Video encoding method and apparatus and video decoding method and apparatus, based on hierarchical coded block pattern information
WO2011053020A2 (en) Method and apparatus for encoding residual block, and method and apparatus for decoding residual block
WO2012023806A2 (en) Method and apparatus for encoding video, and decoding method and apparatus
WO2011049396A2 (en) Method and apparatus for encoding video and method and apparatus for decoding video, based on hierarchical structure of coding unit
WO2011019253A2 (en) Method and apparatus for encoding video in consideration of scanning order of coding units having hierarchical structure, and method and apparatus for decoding video in consideration of scanning order of coding units having hierarchical structure
WO2012148138A2 (en) Intra-prediction method, and encoder and decoder using same
WO2012173415A2 (en) Method and apparatus for encoding motion information and method and apparatus for decoding same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15857835

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase in:

Ref document number: 20177011969

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 15525193

Country of ref document: US

NENP Non-entry into the national phase in:

Ref country code: DE

REEP

Ref document number: 2015857835

Country of ref document: EP