KR20140124437A - Method for encoding and decoding motion information and an apparatus using it - Google Patents


Info

Publication number
KR20140124437A
Authority
KR
South Korea
Prior art keywords
motion vector
unit
coding
block
code
Prior art date
Application number
KR20130041287A
Other languages
Korean (ko)
Inventor
문주희
최광현
한종기
Original Assignee
인텔렉추얼디스커버리 주식회사 (Intellectual Discovery Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 인텔렉추얼디스커버리 주식회사 (Intellectual Discovery Co., Ltd.)
Priority to KR20130041287A
Publication of KR20140124437A


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/174Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/527Global motion vector estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/593Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to an aspect of the present invention, there is provided a method of encoding motion information, the method comprising: obtaining a predicted motion vector based on neighboring blocks of a current coding block of a video signal; calculating a differential motion vector between the predicted motion vector and the motion vector corresponding to the current coding block; and adaptively encoding the differential motion vector, wherein the adaptive encoding comprises dividing the video signal into predetermined units, determining an order of an exponential Golomb code for each unit, and encoding the differential motion vector using the code of the determined order.

Description

[0001] The present invention relates to a motion information encoding/decoding method and apparatus.

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a video codec, and more particularly, to a video encoding method and apparatus capable of increasing coding efficiency in coding and decoding motion information.

In general video encoding and decoding, a process of estimating a motion vector is required. Although the motion vector can be predicted and used in units of integer pixels, it can also be estimated more finely in half-pixel or quarter-pixel units. The reason the motion vector is searched more finely than the integer-pixel unit is that the image may move by a half pixel or a quarter pixel rather than by a whole number of pixels. Therefore, if prediction uses only integer pixels, encoding efficiency is lowered for video whose motion falls on half-pixel or quarter-pixel positions.

In consideration of this point, HEVC, a video codec whose standardization was recently completed, estimates motion vectors at integer-pixel, half-pixel, and quarter-pixel precision and then encodes the current block using the motion vector determined in this way.

To encode the estimated motion vector, the differential motion vector (MVD) is divided into an x component and a y component, and the absolute value of each component is encoded using an exponential Golomb code. The sign information of each component is encoded separately. Exponential Golomb codes exist from order 0 to order n, where the value of n is unbounded. When the MVD is encoded in the existing HEVC, the first-order exponential Golomb code is used in a fixed manner.

However, the first-order exponential Golomb code is used regardless of whether the MVD found through the motion estimation process has an integer-pixel, half-pixel, or quarter-pixel unit value. Because the characteristics of the image information are not considered, the compression efficiency is limited and the encoding efficiency suffers.

Embodiments of the present invention provide a method for adaptively and selectively choosing the order of the exponential Golomb code used to encode a motion vector into a codeword.

It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may be present.

According to an aspect of the present invention, there is provided a method of encoding motion information, comprising: obtaining a predicted motion vector based on neighboring blocks of a current coding block of a video signal; calculating a differential motion vector between the predicted motion vector and the motion vector corresponding to the current coding block; and adaptively encoding the differential motion vector, wherein the adaptive encoding comprises dividing the video signal into predetermined units, determining an order of an exponential Golomb code for each unit, and encoding the differential motion vector using the code of the determined order.

If the technique of the present invention is used, compression efficiency can be improved by encoding the differential motion vector into a codeword generated with the exponential Golomb code of the optimal order.

FIG. 1 is a block diagram showing an example of a configuration of a video encoding apparatus.
FIG. 2 is a block diagram showing an example of a configuration of an inter prediction encoding apparatus.
FIG. 3 is a block diagram showing an example of a configuration of an inter prediction decoding apparatus.
FIG. 4 is a diagram for explaining an example of a motion vector prediction method.
FIG. 5 is a diagram illustrating lengths of codewords generated for various values of a differential motion vector to be encoded using exponential Golomb code.
FIG. 6 illustrates a slice-based adaptive exponential Golomb code order determination method according to an embodiment of the present invention.
FIG. 7 illustrates a decoding process when the optimal exponential Golomb code order is used differently on a slice-by-slice basis according to an embodiment of the present invention.
FIG. 8 shows a decoding process when different orders of optimal exponential Golomb codes are used in units of LCU according to another embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.

Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element interposed therebetween.

Throughout this specification, when a member is said to be "on" another member, this includes not only the case where the member is in contact with the other member but also the case where another member is present between the two members.

Throughout this specification, when a part is said to "include" an element, this means that the part may further include other elements rather than excluding them, unless specifically stated otherwise. The terms "about" and "substantially" are used to allow for the manufacturing and material tolerances inherent in the stated values, and to prevent an unscrupulous infringer from unfairly exploiting a disclosure in which exact or absolute figures are given. The term "step of (doing something)" or "step of" used throughout this specification does not mean "step for".

Throughout this specification, the term "combination thereof" included in a Markush-form expression means one or more mixtures or combinations selected from the group consisting of the constituents described in the Markush-form expression.

As an example of a method of encoding an actual image and its depth-information map, encoding can be performed using HEVC (High Efficiency Video Coding), which has the highest coding efficiency among the video coding standards developed to date and was jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG).

FIG. 1 is a block diagram of an example of the configuration of a video encoding apparatus, and shows a coding structure of an HEVC.

As shown in FIG. 1, HEVC includes various new algorithms such as coding units and structures, inter prediction, intra prediction, interpolation, filtering, and transforms.

FIG. 2 is a block diagram illustrating an example of the configuration of an inter prediction encoding apparatus. The inter prediction encoding apparatus includes a motion information determination unit 110, a motion information encoding mode determination unit 120, a motion information encoding unit 130, a prediction block generating unit 140, a residual block generating unit 150, a residual block encoding unit 160, and a multiplexer 170.

Referring to FIG. 2, the motion information determination unit 110 determines the motion information of the current block. The motion information includes a reference picture index and a motion vector. The reference picture index indicates one of the previously encoded and reconstructed pictures, and indicates one of the reference pictures belonging to list 0 (L0) when the current block is unidirectionally inter-prediction-encoded.

On the other hand, when the current block is bidirectionally predictive-encoded, the motion information may include a reference picture index indicating one of the reference pictures of list 0 (L0) and a reference picture index indicating one of the reference pictures of list 1 (L1). In addition, when the current block is bidirectionally predictive-encoded, the motion information may include an index indicating one or two pictures among the reference pictures of the combined list (LC) generated by combining list 0 and list 1.

The motion vector indicates the position of the prediction block in the picture indicated by each reference picture index. The motion vector may be in pixel units (integer units) or in sub-pixel units. For example, it may have a resolution of 1/2, 1/4, 1/8, or 1/16 pixel. When the motion vector is not in integer units, the prediction block is generated from integer-unit pixels.

The motion information encoding mode determination unit 120 determines whether motion information of the current block is to be coded in a skip mode, a merge mode, or an AMVP mode.

The skip mode is applied when there is a skip candidate having the same motion information as that of the current block and the residual signal is zero. The skip mode is also applied when the current block has the same size as the coding unit. The current block can be viewed as a prediction unit.

The merge mode is applied when there is a merge candidate having the same motion information as that of the current block. The merge mode is applied when there is a residual signal, whether the current block differs in size from the coding unit or is the same size. The merge candidate and the skip candidate can be the same.

AMVP mode is applied when skip mode and merge mode are not applied. The AMVP candidate having the motion vector most similar to the motion vector of the current block is selected as the AMVP predictor.

The motion information encoding unit 130 encodes the motion information according to the method determined by the motion information encoding mode determination unit 120. When the motion information encoding mode is the skip mode or the merge mode, a merge motion vector encoding process is performed. When the motion information encoding mode is AMVP, an AMVP encoding process is performed.

The prediction block generator 140 generates a prediction block using the motion information of the current block. If the motion vector is an integer unit, the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index is copied to generate a prediction block of the current block.

However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the pixels in the integer unit in the picture indicated by the reference picture index. In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.

The residual block generating unit 150 generates a residual block using the prediction block and the current block. If the current block size is 2Nx2N, the residual block is generated using the 2Nx2N prediction block corresponding to the current block and the current block.

However, if the block size used for prediction is 2NxN or Nx2N, a prediction block is obtained for each of the two 2NxN blocks constituting the 2Nx2N block, and a 2Nx2N final prediction block can be generated from the two 2NxN prediction blocks. The 2Nx2N residual block may then be generated using the 2Nx2N prediction block. The pixels of the boundary portion may be overlap-smoothed to resolve the discontinuity at the boundary between the two 2NxN prediction blocks.

The residual block encoding unit 160 divides the generated residual block into one or more transform units. Each transform unit is then transformed, quantized, and entropy-encoded. The size of the transform unit may be determined in a quadtree manner according to the size of the residual block.

The residual block coding unit 160 transforms the residual block generated by the inter prediction method using an integer-based transform matrix. The transform matrix is an integer-based DCT matrix. The residual block coding unit 160 uses a quantization matrix to quantize the coefficients of the residual block transformed by the transform matrix. The quantization matrix is determined by a quantization parameter. The quantization parameter is determined for each coding unit equal to or larger than a predetermined size. The predetermined size may be 8x8 or 16x16.

Accordingly, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is encoded; since the quantization parameters of the remaining coding units are the same, they need not be encoded.

The coefficients of the transform block are quantized using a quantization matrix determined according to the determined quantization parameter and the prediction mode.

The quantization parameter determined for each coding unit equal to or larger than the predetermined size is predictively encoded using a quantization parameter of a coding unit adjacent to the current coding unit. A quantization parameter predictor of the current coding unit can be generated by searching the left coding unit of the current coding unit and then the above coding unit, in that order, and using one or two valid quantization parameters.

For example, the first valid quantization parameter retrieved in the above order may be determined as the quantization parameter predictor. Alternatively, the left coding unit and then the coding unit immediately preceding in coding order may be searched, and the first valid quantization parameter determined as the quantization parameter predictor.
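The first-valid search described above can be sketched roughly as follows (the function and variable names, and the fallback value, are illustrative and not from the patent):

```python
def predict_qp(left_qp, above_qp, prev_qp, default_qp=26):
    """Return a quantization parameter predictor by taking the first valid
    (non-None) QP in the search order described above: left coding unit,
    then above coding unit, then the previous coding unit in coding order.
    `default_qp` is an arbitrary fallback, not specified in the text."""
    for qp in (left_qp, above_qp, prev_qp):
        if qp is not None:
            return qp
    return default_qp
```

For example, if the left coding unit is unavailable but the above one carries QP 30, the predictor is 30.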

The coefficients of the quantized transform block are scanned and converted into one-dimensional quantized coefficients. The scanning scheme can be set differently according to the entropy encoding mode. For example, in the case of CABAC encoding, the inter-prediction-encoded quantized coefficients can be scanned in a predetermined manner (zigzag, or raster scan in the diagonal direction). When encoding with CAVLC, scanning can be performed in a different manner; for example, the scanning method may be zigzag in the case of inter, or determined according to the intra prediction mode in the case of intra.

The coefficient scanning method may also be determined depending on the size of the transform unit. The scan pattern may vary according to the directional intra prediction mode. The quantized coefficients are scanned in the reverse direction of the scan order.

The multiplexer 170 multiplexes the motion information encoded by the motion information encoding unit 130 and the residual signals encoded by the residual block encoding unit 160. The motion information may vary depending on the encoding mode. That is, in the case of skip or merge, only the index indicating the predictor is included; in the case of AMVP, the reference picture index, the differential motion vector, and the AMVP index of the current block are included.

FIG. 3 is a block diagram illustrating an example of the configuration of an inter prediction decoding apparatus. The inter prediction decoding apparatus 200 includes a demultiplexer 210, a motion information encoding mode determination unit 220, a merge mode motion information decoding unit 230, an AMVP mode motion information decoding unit 240, a prediction block generating unit 250, a residual block decoding unit 260, and a reconstruction block generating unit 270.

Referring to FIG. 3, the demultiplexer 210 demultiplexes the current encoded motion information and the encoded residual signals from the received bitstream. The demultiplexer 210 transmits the demultiplexed motion information to the motion information encoding mode determination unit 220 and transmits the demultiplexed residual signal to the residual block decoding unit 260.

The motion information encoding mode determination unit 220 determines the motion information encoding mode of the current block. When the skip_flag of the received bitstream has a value of 1, it determines that the motion information of the current block is encoded in the skip mode. When the skip_flag has a value of 0 and the motion information received from the demultiplexer 210 contains only a merge index, it determines that the motion information encoding mode of the current block is the merge mode.

When the skip_flag of the received bitstream has a value of 0 and the motion information received from the demultiplexer 210 has a reference picture index, a differential motion vector, and an AMVP index, the motion information encoding mode determination unit 220 determines that the motion information of the current block is encoded in the AMVP mode.
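The mode decision described in the preceding paragraphs can be sketched as follows; `fields` stands for the set of motion-information syntax elements demultiplexed for the current block, and all names here are illustrative rather than the patent's actual syntax:

```python
def motion_info_mode(skip_flag, fields):
    """Decide the motion information encoding mode of the current block,
    mirroring the decision of the mode determination unit 220 described
    above (a sketch; field names are hypothetical)."""
    if skip_flag == 1:
        return "skip"
    if fields == {"merge_index"}:          # only a merge index is present
        return "merge"
    if {"ref_pic_index", "mvd", "amvp_index"} <= fields:
        return "amvp"
    raise ValueError("unrecognized motion information layout")
```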

The merge mode motion information decoding unit 230 is activated when the motion information encoding mode determination unit 220 determines the motion information encoding mode of the current block as a skip or merge mode.

The AMVP mode motion information decoding unit 240 is activated when the motion information encoding mode determination unit 220 determines the motion information encoding mode of the current block to be the AMVP mode.

The predictive block generator 250 generates a predictive block of a current block using the motion information reconstructed by the merge mode motion information decoder 230 or the AMVP mode motion information decoder 240. If the motion vector is an integer unit, the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index is copied to generate a prediction block of the current block.

However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the integer unit pixels in the picture indicated by the reference picture index. In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.

The residual block decoding unit 260 entropy decodes the residual signal. Then, the entropy-decoded coefficients are inversely scanned to generate a two-dimensional quantized coefficient block. The inverse scanning method can be changed according to the entropy decoding method.

That is, the inverse scanning method for the inter prediction residual signal may differ between CABAC-based and CAVLC-based decoding. For example, a diagonal raster inverse scan may be applied for CABAC-based decoding, and a zigzag inverse scan for CAVLC-based decoding. In addition, the inverse scanning method may be determined depending on the size of the prediction block.

The residual block decoding unit 260 dequantizes the generated coefficient block using an inverse quantization matrix, restoring the quantization parameter in order to derive the quantization matrix. The quantization step size is restored for each coding unit of a predetermined size or larger.

The predetermined size may be 8x8 or 16x16. Accordingly, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is restored; since the quantization parameters of the remaining coding units are the same, they need not be restored.

The quantization parameter of a coding unit adjacent to the current coding unit is used to recover the quantization parameter determined for each coding unit equal to or larger than the predetermined size. The left coding unit of the current coding unit and then the above coding unit may be searched, in that order, and the first valid quantization parameter determined as the quantization parameter predictor of the current coding unit. Alternatively, the left coding unit and then the coding unit immediately preceding in coding order may be searched, and the first valid quantization parameter determined as the quantization parameter predictor.

The quantization parameter of the current prediction unit is then restored using the determined quantization parameter predictor and the differential quantization parameter.

The residual block decoding unit 260 inversely transforms the dequantized coefficient block to recover the residual block.

The reconstruction block generation unit 270 adds the prediction blocks generated by the prediction block generation unit 250 and the residual blocks generated by the residual block decoding unit 260 to generate reconstruction blocks.

When the current coding block is an inter coded block, the MVP is determined based on the motion vector (MV) and the reference image index of the blocks already coded in the vicinity, or the merge mode and the merge skip mode are considered.

FIG. 4 is a diagram for explaining an example of a motion vector prediction method.

Referring to FIG. 4, in order to determine the optimal motion vector for each block mode in H.264/AVC and HEVC, the point at which the cost function value is minimal within the motion search region is found. To find an accurate motion vector, the search is then refined in 1/2- and 1/4-pixel units.

The motion vector obtained through the motion prediction process is expressed as a differential motion vector (MVD), the difference between the predicted motion vector and the finally determined motion vector, which is then binarized and encoded.

In this case, the process of finding a block with high coding efficiency is the motion vector estimation step. The motion vector of the current block is selected from among a number of candidate motion vectors as the one with the smallest cost according to the following equation.

Cost = Distortion + λ × Rate        (1)

Here, Distortion means the sum of the absolute differences between the current coding block and the block indicated by the motion vector, Rate is a predicted value of the number of bits generated when encoding the estimated motion vector, and λ is the Lagrange multiplier.
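The selection by Equation (1) can be sketched as a minimization over candidate motion vectors; `distortion` and `rate` stand for caller-supplied measurement functions, and the names are illustrative:

```python
def best_motion_vector(candidates, distortion, rate, lam):
    """Pick the candidate motion vector minimizing Equation (1):
    Cost = Distortion + lambda * Rate. A sketch of the rate-distortion
    selection, not the patent's implementation."""
    return min(candidates, key=lambda mv: distortion(mv) + lam * rate(mv))
```

For instance, a candidate with moderate distortion and few motion-vector bits can beat one with slightly lower distortion but a long codeword.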

The process of encoding the estimated motion vector is as follows. First, a predicted motion vector (denoted PMV or MVP) is calculated from the neighboring blocks of the current coding block, and the differential vector between the PMV and the motion vector found for the current block is computed. The encoder encodes this differential motion vector (MVD).

In the general coding method, the tap coefficients of the interpolation filter used to create the brightness values at half-pixel and quarter-pixel positions are shown in Table 1 below. The predicted brightness value at a half-pixel position can be generated using the eight surrounding integer-pixel values, and the predicted brightness value at a quarter-pixel position can be generated using seven surrounding pixel values.

Table 1

Pixel position to be interpolated   Filter coefficients                 Filter length
1/2                                 {-1, 4, -11, 40, 40, -11, 4, -1}    8-tap
1/4                                 {-1, 4, -10, 58, 17, -5, 1, 0}      7-tap
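Applying the half-pixel filter of Table 1 can be sketched as a weighted sum over the eight surrounding integer-position samples. The normalization by 64 (the sum of the coefficients), the rounding offset, and the clipping to an 8-bit range are assumptions in line with common codec practice, not details stated above:

```python
def interpolate_half_pel(pixels, filt=(-1, 4, -11, 40, 40, -11, 4, -1)):
    """Interpolate a half-pixel luma sample from the eight surrounding
    integer-position samples using the 8-tap filter of Table 1 (a sketch:
    the >> 6 normalization and 0..255 clipping are assumed conventions)."""
    acc = sum(c * p for c, p in zip(filt, pixels))
    return min(max((acc + 32) >> 6, 0), 255)  # round, divide by 64, clip
```

A flat region is preserved exactly, since the coefficients sum to 64.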

As described above, the differential motion vector (MVD) is divided into its x and y components, and the absolute value of each component is encoded into a codeword using an exponential Golomb code. The sign information of each component is encoded separately. Exponential Golomb codes exist from order 0 to order n, where the value of n is unbounded. The existing MVD coding technique uses the first-order exponential Golomb code in a fixed manner.
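A k-th order exponential Golomb codeword can be sketched as a prefix of q ones, a '0' separator, and a (q + k)-bit suffix. This convention reproduces the bit strings of Tables 2-4 below (note it is the reverse prefix polarity of the zero-prefix form in the HEVC specification, and Table 5 differs slightly for one entry); the function name is illustrative:

```python
def exp_golomb(n, k):
    """Encode non-negative index n with a k-th order exponential Golomb
    code: q ones, a '0' separator, then a (q + k)-bit suffix, where the
    group q satisfies n >= (2**q - 1) * 2**k (a sketch of the convention
    apparently used in Tables 2-4)."""
    q = 0
    while n >= ((1 << (q + 1)) - 1) << k:   # find the group of index n
        q += 1
    suffix = n - (((1 << q) - 1) << k)      # offset of n within its group
    bits = q + k
    suffix_str = "" if bits == 0 else bin(suffix)[2:].zfill(bits)
    return "1" * q + "0" + suffix_str
```

For example, index 1 under the zero-order code yields '100' and index 7 under the second-order code yields '10011', matching Tables 2 and 4 below.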

Meanwhile, Tables 2, 3, 4, and 5 below show the zero-order, first-order, second-order, and third-order exponential Golomb codes, respectively. MVD(x) and MVD(y) denote the x and y components of the differential motion vector MVD. When |MVD(x)| or |MVD(y)| is 0 or 1/4, encoding is performed using separate fixed-length flags; encoding with the exponential Golomb code is performed only when the absolute value is larger than 1/4.

Table 2. Zero-order exponential Golomb code

Value (|MVD(x)|-2/4 or |MVD(y)|-2/4)   Index   Bit string   Bit string length
0                                      0       0            1
1/4                                    1       100          3
2/4                                    2       101          3
3/4                                    3       11000        5
...                                    ...     ...          ...
7/4                                    7       1110000      7
8/4                                    8       1110001      7
9/4                                    9       1110010      7
10/4                                   10      1110011      7

Table 3. First-order exponential Golomb code

Value (|MVD(x)|-2/4 or |MVD(y)|-2/4)   Index   Bit string   Bit string length
0                                      0       00           2
1/4                                    1       01           2
2/4                                    2       1000         4
3/4                                    3       1001         4
...                                    ...     ...          ...
7/4                                    7       110001       6
8/4                                    8       110010       6
9/4                                    9       110011       6
10/4                                   10      110100       6

Table 4. Second-order exponential Golomb code

Value (|MVD(x)|-2/4 or |MVD(y)|-2/4)   Index   Bit string   Bit string length
0                                      0       000          3
1/4                                    1       001          3
2/4                                    2       010          3
3/4                                    3       011          3
...                                    ...     ...          ...
7/4                                    7       10011        5
8/4                                    8       10100        5
9/4                                    9       10101        5
10/4                                   10      10110        5

Table 5. Third-order exponential Golomb code

Value (|MVD(x)|-2/4 or |MVD(y)|-2/4)   Index   Bit string   Bit string length
0                                      0       0000         4
1/4                                    1       0001         4
2/4                                    2       0010         4
3/4                                    3       0011         4
...                                    ...     ...          ...
7/4                                    7       0110         4
8/4                                    8       100001       6
9/4                                    9       100010       6
10/4                                   10      100011       6

According to an embodiment of the present invention, in order to overcome the compression-efficiency limit of the existing compression standard technology, coding efficiency is improved by adaptively selecting the order of the exponential Golomb code according to the characteristics of the image information to be encoded.

For example, when |MVD(x)| - 2/4 is 1/4, the bit string generated when it is encoded according to Table 2 is '100', whereas the bit string generated when it is encoded according to Table 3 is '01'. Even when a differential motion vector of the same value is encoded, the encoding efficiency changes according to the order of the exponential Golomb code used. As another example, when |MVD(x)| - 2/4 is 7/4, the bit string generated when encoding according to Table 2 is '1110000', but when encoded according to Table 5 it becomes '0110'. A 3-bit coding gain is thus obtained by encoding according to Table 5 rather than Table 2.

FIG. 5 illustrates the lengths of the codewords generated when various values of |MVD(x)| and |MVD(y)| are encoded using exponential Golomb codes of different orders.

The horizontal axis represents the |MVD(x)| or |MVD(y)| value to be encoded, and the vertical axis represents the length of the generated bit string. As can be seen from FIG. 5, the length of the bit string generated when the first-order exponential Golomb code is used may differ greatly from the bit string length when the third-order code is used.

In FIG. 5, when |MVD(x)| or |MVD(y)| is smaller than 5, it is preferable from the viewpoint of coding efficiency to encode with a low-order exponential Golomb code such as the first-order code. On the other hand, when |MVD(x)| or |MVD(y)| is larger than 5, it is advantageous in terms of coding efficiency to encode with a high-order exponential Golomb code such as the third-order code. Since HEVC encodes the x and y components of all MVDs using only the first-order exponential Golomb code, there is a limit to improving its coding efficiency. Therefore, the coding efficiency can be expected to improve with the adaptive selection method according to the embodiment of the present invention.
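The trade-off FIG. 5 describes can be sketched numerically. The codeword-length formula below follows the prefix-of-ones convention of Tables 2-4 (its lengths also agree with Table 5, whose bit patterns differ slightly), and `best_order` is an illustrative name, not the patent's procedure:

```python
def eg_length(n, k):
    """Bit length of the k-th order exponential Golomb codeword for index
    n: q ones + '0' separator + (q + k) suffix bits = 2q + k + 1."""
    q = 0
    while n >= ((1 << (q + 1)) - 1) << k:
        q += 1
    return 2 * q + k + 1

def best_order(indices, orders=(0, 1, 2, 3)):
    """Order minimizing the total bit length for the given MVD indices,
    illustrating why the optimal order depends on the MVD magnitudes."""
    return min(orders, key=lambda k: sum(eg_length(n, k) for n in indices))
```

Small indices favor a low order while large indices favor a higher one, which is exactly the behavior plotted in FIG. 5.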

Hereinafter, a method of determining the order of the exponential Golomb code at the time of encoding according to an embodiment of the present invention will be described in more detail.

In an embodiment of the present invention, in order to efficiently encode the differential motion vector, the order of the exponential Golomb code can be adaptively selected per unit of the video signal, and the differential motion vector can then be encoded using the exponential Golomb code of the determined order. The unit for which a new exponential Golomb code order is selected may be a slice or an LCU.

FIG. 6 illustrates a slice-based adaptive method of determining the exponential Golomb code order according to an embodiment of the present invention.

The procedure for determining the optimal order of the exponential Golomb code can be described as shown in FIG. 6.

As shown in FIG. 6, in step 1, the motion vectors of all coding blocks in the slice can be estimated using the zero-order exponential Golomb code.

In step 2, the same estimation can be performed using the first-order exponential Golomb code.

In steps 3 and 4, the motion vectors of all coding blocks in the slice can be estimated using the second- and third-order exponential Golomb codes, respectively.

After steps 1 to 4, the motion vector of each coding block in the slice can be selected as the motion vector that minimizes the cost function of Equation (1).

Meanwhile, PUs of various sizes may exist in each CU in each of steps 1 to 4. Equation (1) is used to estimate the optimal motion vector for each individual PU, but the optimal PU partitioning within a CU can be determined based on Equation (2) below.

J(m, k) = Distortion(m, k) + λ · R(m, k)    (2)

In Equation (2), Distortion denotes the sum of squared errors between the pixel values of the original block and the reconstructed block, R denotes the number of bits generated when the current block is encoded, and λ is the Lagrange multiplier. Here, m is an index indicating the CU number within the slice, and k is the order of the exponential Golomb code used when encoding the motion vector.

Then, once the motion vectors and coding modes have been determined for all coding blocks in the slice, the rate-distortion cost for each exponential Golomb code order can be calculated using Equation (3) below.

J(k) = Σ_{m=1}^{N_CU} J(m, k)    (3)

In Equation (3), N_CU denotes the number of CUs present in the slice.
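Under the assumption that the per-CU costs J(m, k) of Equation (2) are already available, the slice-level selection of Equation (3) reduces to a sum and an argmin. A minimal sketch (the function and variable names are hypothetical):

```python
def select_slice_order(cu_costs, orders=(0, 1, 2, 3)):
    # cu_costs[m][k]: rate-distortion cost J(m, k) of the m-th of the
    # N_CU coding units when its motion vectors are coded with a k-th
    # order exponential Golomb code.
    # Equation (3): J(k) = sum over all CUs in the slice of J(m, k).
    totals = {k: sum(cu[k] for cu in cu_costs) for k in orders}
    # The order minimizing the slice-level cost is then signaled.
    return min(totals, key=totals.get)
```

For example, with two CUs whose per-order costs are `[5, 3, 4, 6]` and `[7, 2, 5, 8]`, the slice totals are 12, 5, 9, and 14, so order 1 is selected.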

The above-described method of determining the exponential Golomb code order for each slice can easily be extended to selection for each LCU (largest coding unit), by transmitting the order of the optimal exponential Golomb code per LCU.

Likewise, the method of determining the exponential Golomb code order for each slice can easily be extended to selection for each picture, by transmitting the order of the optimal exponential Golomb code per picture.

Hereinafter, a decoding method according to an embodiment of the present invention will be described.

FIG. 7 illustrates a decoding process when the optimal exponential Golomb code order is used differently on a slice-by-slice basis according to an embodiment of the present invention.

In the first step of the decoding method according to an embodiment of the present invention, the order of the exponential Golomb code used in encoding is decoded from the slice header in the bitstream.

Then, when decoding all blocks in the slice, the exponential Golomb code of the decoded order is selected and used.
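A minimal decoder-side sketch of this step, assuming the 1-prefix exponential Golomb variant consistent with the Table 2/Table 3 codewords '100' and '01' quoted earlier (parsing of the slice header itself is abstracted away):

```python
def decode_exp_golomb(bits, k):
    """Decode one k-th order exponential Golomb codeword (1-prefix
    variant) from a '0'/'1' string; returns (value, bits consumed)."""
    q = 0
    while bits[q] == "1":  # count the prefix ones
        q += 1
    # The i-th prefix one contributed 2**(k+i); the suffix holds the rest.
    n = sum(1 << (k + i) for i in range(q))
    suffix_len = k + q
    pos = q + 1  # skip the terminating zero
    if suffix_len > 0:
        n += int(bits[pos:pos + suffix_len], 2)
    return n, pos + suffix_len

# With the order k decoded from the slice header, every differential
# motion vector component in the slice is parsed with that same k.
print(decode_exp_golomb("100", 0))      # (1, 3)
print(decode_exp_golomb("01", 1))       # (1, 2)
print(decode_exp_golomb("1110000", 0))  # (7, 7)
```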

FIG. 8 shows a decoding process when different orders of optimal exponential Golomb codes are used in units of LCU according to another embodiment of the present invention.

In the first step of decoding according to another embodiment of the present invention, the order of the exponential Golomb code used in encoding is decoded from the CU header or the coding quadtree syntax in the bitstream.

In the next step, when decoding all blocks in the LCU, the exponential Golomb code of the decoded order is selected and used.

The method according to the present invention may be implemented as a program executable on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device, and the method may also be implemented in the form of a carrier wave (for example, transmission over the Internet).

The computer-readable recording medium may be distributed over networked computer systems so that the computer-readable code is stored and executed in a distributed manner. Functional programs, codes, and code segments for implementing the above method can be easily inferred by programmers skilled in the art to which the present invention pertains.

While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, the invention is not limited to the disclosed embodiments; it should be understood that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.

Claims (10)

A method of encoding motion information, the method comprising:
obtaining a predicted motion vector based on neighboring blocks of a current block to be encoded, based on a video signal;
calculating a differential motion vector between the predicted motion vector and a motion vector corresponding to the current block; and
generating a codeword by adaptively encoding the differential motion vector,
wherein the adaptive encoding comprises dividing the video signal into predetermined units and selecting, for each predetermined unit, an order of a Golomb code for encoding the differential motion vector.
The method according to claim 1,
Wherein the predetermined unit is a picture unit.
The method according to claim 1,
Wherein the predetermined unit is a slice unit.
The method according to claim 1,
Wherein the predetermined unit is an LCU unit.
The method according to claim 1,
wherein the selecting of the order comprises
selecting, as the order of the Golomb code, an order that minimizes a cost function for encoding the differential motion vector.
An apparatus for encoding motion information, the apparatus comprising:
a motion vector processing unit which obtains a predicted motion vector based on neighboring blocks of a current block to be encoded, based on a video signal, and calculates a differential motion vector between the predicted motion vector and a motion vector corresponding to the current block; and
a codeword generation unit which adaptively encodes the differential motion vector to generate a codeword,
wherein the codeword generation unit divides the video signal into predetermined units and selects, for each predetermined unit, an order of a Golomb code for encoding the differential motion vector.
The apparatus according to claim 6,
Wherein the predetermined unit is a picture unit.
The apparatus according to claim 6,
Wherein the predetermined unit is a slice unit.
The apparatus according to claim 6,
Wherein the predetermined unit is an LCU unit.
The apparatus according to claim 6,
wherein the codeword generation unit selects, as the order of the Golomb code, an order that minimizes a cost function for encoding the differential motion vector.
KR20130041287A 2013-04-15 2013-04-15 Method for encoding and decoding motion information and an appratus using it KR20140124437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR20130041287A KR20140124437A (en) 2013-04-15 2013-04-15 Method for encoding and decoding motion information and an appratus using it


Publications (1)

Publication Number Publication Date
KR20140124437A true KR20140124437A (en) 2014-10-27

Family

ID=51994647

Family Applications (1)

Application Number Title Priority Date Filing Date
KR20130041287A KR20140124437A (en) 2013-04-15 2013-04-15 Method for encoding and decoding motion information and an appratus using it

Country Status (1)

Country Link
KR (1) KR20140124437A (en)


Legal Events

Date Code Title Description
N231 Notification of change of applicant
WITN Withdrawal due to no request for examination