KR20140124437A - Method for encoding and decoding motion information and an apparatus using it - Google Patents
Method for encoding and decoding motion information and an apparatus using it Download PDF Info
- Publication number
- KR20140124437A (application KR20130041287A)
- Authority
- KR
- South Korea
- Prior art keywords
- motion vector
- unit
- coding
- block
- code
- Prior art date
Links
Images
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/527—Global motion vector estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
According to an aspect of the present invention, there is provided a method of encoding motion information, the method comprising: obtaining a predicted motion vector based on neighboring blocks of a current coding block of a video signal; calculating a differential motion vector between the predicted motion vector and the motion vector corresponding to the current coding block; and adaptively encoding the differential motion vector, wherein the adaptive encoding step divides the video signal into predetermined units and selects, for each predetermined unit, the order of the Golomb code used to encode the differential motion vector.
Description
BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a video codec, and more particularly, to a video encoding method and apparatus capable of increasing coding efficiency in coding and decoding motion information.
In general video encoding and decoding, a process of estimating a motion vector is required. Although the motion vector can be predicted and used in integer-pixel units, it can also be predicted more finely, in half-pixel or quarter-pixel units. The motion vector is searched more finely than the integer-pixel unit because the motion in an image can be a half pixel or a quarter pixel rather than a whole number of pixels. Therefore, if prediction is performed only at integer-pixel positions, encoding efficiency is lowered for video containing half-pixel or quarter-pixel motion.
In consideration of this point, HEVC, a video codec whose standardization was recently completed, estimates motion vectors in integer-pixel, half-pixel, and quarter-pixel units, and then performs coding using a differential motion vector between the motion vector of the current block to be encoded and a predicted motion vector.
To encode the estimated motion vector, the differential motion vector (MVD) is divided into an x component and a y component, and the absolute value of each component is encoded using an exponential Golomb code. The sign information of each component is encoded separately. Exponential Golomb codes exist from the 0th order to the nth order, where the value of n is unbounded. When the MVD is encoded in the existing HEVC, the first-order exponential Golomb code is used in a fixed manner.
However, the first-order exponential Golomb code is used regardless of whether the MVD found through the motion estimation process has an integer-pixel, half-pixel, or quarter-pixel unit value. Because the characteristics of the image information are not taken into account, the compression efficiency is limited and the encoding efficiency suffers.
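As an illustration of the exponential Golomb codes discussed above, the following is a minimal sketch of a k-th order encoder in a common textbook formulation (zero-prefixed). The prefix bit polarity here is an assumption and appears inverted relative to the tables shown later in this document, but the codeword lengths — which are what determine coding efficiency — are identical.

```python
def exp_golomb_encode(value: int, k: int) -> str:
    """Return the k-th order exponential Golomb codeword for a
    non-negative integer, as a string of '0'/'1' characters."""
    if value < 0:
        raise ValueError("sign is signalled separately; encode |value|")
    q = value >> k                      # quotient, coded with a 0th-order prefix
    body = bin(q + 1)[2:]               # binary representation of q + 1
    prefix = "0" * (len(body) - 1)      # unary-style length prefix
    suffix = format(value & ((1 << k) - 1), f"0{k}b") if k else ""
    return prefix + body + suffix
```

For example, the value 1 yields a 3-bit codeword at order 0 but only 2 bits at order 1, matching the codeword lengths in the document's Tables 2 and 3.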
Embodiments of the present invention provide a method of adaptively and selectively applying the order of the Golomb code used to encode a motion vector when generating a codeword.
It is to be understood, however, that the technical scope of the present invention is not limited to the above-described technical problems, and other technical problems may be present.
According to an aspect of the present invention, there is provided a method of encoding motion information, comprising: obtaining a predicted motion vector based on neighboring blocks of a current coding block of a video signal; calculating a differential motion vector between the predicted motion vector and the motion vector corresponding to the current coding block; and adaptively encoding the differential motion vector, wherein the adaptive encoding step divides the video signal into predetermined units and selects, for each predetermined unit, the order of the Golomb code used to encode the differential motion vector.
If the technique of the present invention is used, compression efficiency can be improved by encoding the differential motion vector into a codeword to which the exponential Golomb code of the optimal order is applied.
1 is a block diagram showing an example of a configuration of a video encoding apparatus.
2 is a block diagram showing an example of a structure of an inter prediction coding apparatus.
3 is a block diagram showing an example of a configuration of an inter prediction decoding apparatus.
4 is a diagram for explaining an example of a motion vector prediction method.
FIG. 5 is a diagram illustrating lengths of codewords generated for various values of a differential motion vector to be encoded using exponential Golomb code.
FIG. 6 illustrates a slice-based adaptive exponential Golomb code order determination method according to an embodiment of the present invention.
FIG. 7 illustrates a decoding process when the optimal exponential Golomb code order is used differently on a slice-by-slice basis according to an embodiment of the present invention.
FIG. 8 shows a decoding process when different orders of optimal exponential Golomb codes are used in units of LCU according to another embodiment of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily carry out the present invention. It should be understood, however, that the present invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. In the drawings, the same reference numbers are used throughout the specification to refer to the same or like parts.
Throughout this specification, when a part is referred to as being "connected" to another part, this includes not only the case where it is "directly connected" but also the case where it is "electrically connected" with another element in between.
Throughout this specification, when a member is "on" another member, this includes not only the case where the member is in contact with the other member, but also the case where another member is present between the two members.
Throughout this specification, when a part is described as "including" an element, this means that the part may further include other elements rather than excluding them, unless specifically stated otherwise. Terms of degree such as "about" and "substantially" are used to mean a range at or close to the stated value, allowing for the manufacturing and material tolerances inherent in the stated sense, and to prevent the unfair exploitation by unscrupulous infringers of a disclosure in which exact or absolute figures are stated. The term "step of (doing)" or "step of" as used throughout this specification does not mean "step for."
Throughout this specification, the term "combination thereof" included in a Markush-type expression means one or more mixtures or combinations selected from the group consisting of the constituents described in the Markush-type expression.
As an example of a method of encoding an actual image and its depth information map, encoding can be performed using HEVC (High Efficiency Video Coding), which has the highest coding efficiency among the video coding standards developed to date and was jointly standardized by the Moving Picture Experts Group (MPEG) and the Video Coding Experts Group (VCEG).
FIG. 1 is a block diagram of an example of the configuration of a video encoding apparatus, and shows a coding structure of an HEVC.
As shown in FIG. 1, the HEVC includes various new algorithms such as coding unit and structure, inter prediction, intra prediction, interpolation, filtering, and transform.
FIG. 2 is a block diagram illustrating an example of a structure of an inter prediction coding apparatus. The inter prediction coding apparatus includes a motion information determination unit, a motion information encoding mode determination unit, a motion information encoding unit, a prediction block generation unit, a residual block generation unit, and a residual block encoding unit.
Referring to FIG. 2, the motion information determination unit determines the motion information of the current block. The motion information includes a reference picture index and a motion vector. When the current block is uni-directionally predictive-coded, the reference picture index indicates one of the reference pictures of list 0 (L0).
On the other hand, when the current block is bi-directionally predictive-coded, the motion information may include a reference picture index indicating one of the reference pictures of list 0 (L0) and a reference picture index indicating one of the reference pictures of list 1 (L1). In addition, when the current block is bi-directionally predictive-coded, the motion information may include an index indicating one or two pictures among the reference pictures of the combined list (LC) generated by combining list 0 and list 1.
The motion vector indicates the position of the prediction block in the picture indicated by each reference picture index. The motion vector may be a pixel unit (integer unit) or a sub-pixel unit. For example, it may have a resolution of 1/2, 1/4, 1/8 or 1/16 pixels. When the motion vector is not an integer unit, the prediction block is generated from the pixels of the integer unit.
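As a minimal sketch of the sub-pixel representation described above, the following assumes the common convention of storing motion vectors in quarter-pixel units (an assumption; other resolutions such as 1/8 or 1/16 pel would change the shift and mask). The integer part selects whole pixels, and the fractional part selects the interpolation phase used to build the prediction block.

```python
def split_quarter_pel_mv(mv_x: int, mv_y: int):
    """Split a motion vector stored in quarter-pixel units into its
    integer-pixel displacement and its fractional (quarter-pel) phase.
    Python's arithmetic shift gives floor semantics, so the fractional
    part is always in 0..3, even for negative vectors."""
    int_x, frac_x = mv_x >> 2, mv_x & 3
    int_y, frac_y = mv_y >> 2, mv_y & 3
    return (int_x, int_y), (frac_x, frac_y)
```

A fractional phase of (0, 0) means the prediction block is a direct copy of integer pixels; any other phase requires interpolation.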
The motion information encoding mode determination unit determines the encoding mode for the motion information of the current block as one of a skip mode, a merge mode, and an AMVP mode.
The skip mode is applied when there is a skip candidate having the same motion information as the current block motion information, and the residual signal is zero. The skip mode is also applied when the current block is the same size as the coding unit. The current block can be viewed as a prediction unit.
The merge mode is applied when there is a merge candidate having the same motion information as the current block motion information. The merge mode is applied when a residual signal exists and the current block is either different in size from the coding unit or the same size. The merge candidate and the skip candidate can be the same.
AMVP mode is applied when skip mode and merge mode are not applied. The AMVP candidate having the motion vector most similar to the motion vector of the current block is selected as the AMVP predictor.
The motion information encoding unit encodes the motion information of the current block according to the mode determined by the motion information encoding mode determination unit.
The prediction block generation unit generates a prediction block using the motion information of the current block. When the motion vector is an integer unit, the prediction block is generated by copying the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index.
However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the pixels in the integer unit in the picture indicated by the reference picture index. In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.
The residual block generation unit generates a residual block using the current block and the prediction block of the current block. When the current block size is 2Nx2N, the residual block is generated using the current block and a 2Nx2N prediction block corresponding to the current block.
However, if the current block size used for prediction is 2NxN or Nx2N, a prediction block is obtained for each of the two 2NxN blocks constituting the 2Nx2N block, and a 2Nx2N final prediction block can be generated using the two 2NxN prediction blocks. A 2Nx2N residual block may then be generated using the 2Nx2N prediction block. The pixels at the boundary portion may be overlap-smoothed to resolve the discontinuity at the boundary between the two 2NxN prediction blocks.
The residual block encoding unit divides the generated residual block into one or more transform units, and each transform unit is transform-coded, quantized, and entropy-encoded.
The residual block encoding unit determines a quantization parameter for quantizing the transform block for each coding unit of a predetermined size or larger.
Accordingly, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is encoded; the quantization parameters of the remaining coding units, being the same as that parameter, need not be encoded.
The coefficients of the transform block are quantized using a quantization matrix determined according to the determined quantization parameter and the prediction mode.
The quantization parameter determined for each coding unit equal to or larger than the predetermined size is predictively encoded using a quantization parameter of a coding unit adjacent to the current coding unit. A quantization parameter predictor of the current coding unit can be generated by searching the left coding unit and then the upper coding unit of the current coding unit, in that order, and using one or two valid quantization parameters.
For example, the first valid quantization parameter retrieved in the above order may be determined as the quantization parameter predictor. Alternatively, the left coding unit and then the coding unit immediately preceding in coding order may be searched, and the first valid quantization parameter determined as the quantization parameter predictor.
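The first-valid search described above can be sketched as follows. This is an illustrative sketch only; the fallback default of 26 when no neighbour is available is an assumption (a typical slice-level default QP), not something stated in this document.

```python
def qp_predictor(left_qp, above_qp, prev_qp, slice_default=26):
    """Return the quantization parameter predictor: the first valid
    (non-None) QP among the left CU, the above CU, and the CU
    immediately preceding in coding order."""
    for qp in (left_qp, above_qp, prev_qp):
        if qp is not None:
            return qp
    return slice_default  # assumed fallback when no neighbour is valid
```

Only the differential QP (actual minus predictor) then needs to be transmitted.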
The coefficients of the quantized transform block are scanned and converted into one-dimensional quantization coefficients. The scanning scheme can be set differently according to the entropy encoding mode. For example, in the case of CABAC encoding, the inter-prediction encoded quantized coefficients can be scanned in a predetermined manner (a zigzag scan, or a raster scan in the diagonal direction). When encoded by CAVLC, the coefficients can be scanned in a different manner; for example, the scanning method may be determined according to the intra prediction mode.
The coefficient scanning method may also be determined depending on the size of the transform unit, and the scan pattern may vary according to the directional intra prediction mode. The quantization coefficients are scanned in the reverse direction.
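As an illustration of converting a 2-D coefficient block into a 1-D sequence, the following sketches a JPEG-style zigzag scan. This is an assumed, simplified pattern for illustration; the actual HEVC scans (e.g. up-right diagonal) differ in detail and depend on block size and prediction mode.

```python
def zigzag_order(n: int):
    """Zigzag scan order for an n x n block: traverse anti-diagonals,
    alternating direction, returning (row, col) pairs."""
    order = []
    for s in range(2 * n - 1):
        diag = [(y, s - y) for y in range(n) if 0 <= s - y < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order
```

Scanning a quantized block in this order groups the (usually nonzero) low-frequency coefficients first, which benefits entropy coding.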
The scanned quantization coefficients are then entropy-encoded.
FIG. 3 is a block diagram illustrating an example of a configuration of an inter prediction decoding apparatus. The inter prediction decoding apparatus 200 includes a demultiplexer, a motion information encoding mode determination unit, a merge mode motion information decoding unit, an AMVP mode motion information decoding unit, a prediction block generation unit, a residual block decoding unit, and a reconstruction block generation unit.
Referring to FIG. 3, the demultiplexer separates the encoded motion information and the encoded residual signal from the received bitstream.
The motion information encoding mode determination unit determines the motion information encoding mode of the current block based on the received bitstream.
When the skip_flag of the received bitstream has a value of 0 and the motion information received from the demultiplexer consists only of a merge index, the motion information encoding mode of the current block is determined to be the merge mode.
The merge mode motion information decoding unit is activated when the motion information encoding mode of the current block is determined to be the skip mode or the merge mode.
The AMVP mode motion information decoding unit is activated when the motion information encoding mode of the current block is determined to be the AMVP mode.
The prediction block generation unit generates a prediction block of the current block using the reconstructed motion information. When the motion vector is an integer unit, the prediction block is generated by copying the block corresponding to the position indicated by the motion vector in the picture indicated by the reference picture index.
However, when the motion vector is not an integer unit, the pixels of the prediction block are generated from the integer unit pixels in the picture indicated by the reference picture index. In this case, in the case of a luminance pixel, a prediction pixel can be generated using an 8-tap interpolation filter. In the case of a chrominance pixel, a 4-tap interpolation filter can be used to generate a predictive pixel.
The residual block decoding unit entropy-decodes the residual signal and inversely scans the entropy-decoded coefficients to generate a two-dimensional quantized coefficient block. The inverse scanning method may be changed according to the entropy decoding method.
That is, the inverse scanning method for the inter-prediction residual signal can differ between CABAC-based and CAVLC-based decoding. For example, a diagonal raster inverse scan may be applied when decoding is based on CABAC, and a zigzag inverse scan when decoding is based on CAVLC. In addition, the inverse scanning method may be determined depending on the size of the prediction block.
The residual block decoding unit restores the quantization parameter determined for each coding unit equal to or larger than a predetermined size.
The predetermined size may be 8x8 or 16x16. Accordingly, when the current coding unit is smaller than the predetermined size, only the quantization parameter of the first coding unit in coding order among the plurality of coding units within the predetermined size is restored; the quantization parameters of the remaining coding units, being the same as that parameter, need not be restored.
The quantization parameter of a coding unit adjacent to the current coding unit is used to restore the quantization parameter determined for each coding unit equal to or larger than the predetermined size. The left coding unit and then the upper coding unit of the current coding unit may be searched in that order, and the first valid quantization parameter determined as the quantization parameter predictor of the current coding unit. Alternatively, the left coding unit and then the coding unit immediately preceding in coding order may be searched, and the first valid quantization parameter determined as the quantization parameter predictor.
The quantization parameter of the current prediction unit is then restored using the determined quantization parameter predictor and the differential quantization parameter.
The residual block decoding unit inversely quantizes the quantized coefficient block using the restored quantization parameter, and inversely transforms the result to restore the residual block.
The reconstruction block generation unit generates a reconstruction block by adding the prediction block and the restored residual block.
When the current coding block is an inter coded block, the MVP is determined based on the motion vector (MV) and the reference image index of the blocks already coded in the vicinity, or the merge mode and the merge skip mode are considered.
4 is a diagram for explaining an example of a method of predicting a motion vector.
Referring to FIG. 4, in order to determine an optimal motion vector for each block mode in H.264/AVC and HEVC, the point where the cost function value is minimal within a motion search region is searched. To find an accurate motion vector, the search is then refined in half-pixel and quarter-pixel units, respectively.
The motion vector obtained through the motion prediction process is converted into a differential motion vector (MVD) between the predicted motion vector and the finally determined motion vector, as shown in Equation (1): MVD = MV − MVP. The differential motion vector is then binarized and encoded.
In this process, finding the block with high coding efficiency constitutes the motion vector estimation step. The motion vector of the current block is selected from among a number of candidate motion vectors as the one with the smallest cost generated by the following equation: J = Distortion + λ · Rate.
Here, Distortion means the sum of the absolute values of the differences between the current coding block and the block indicated by the motion vector, Rate is a predicted value of the bit amount generated when coding the estimated motion vector, and λ is the Lagrange multiplier.
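The candidate selection just described can be sketched as follows: each candidate carries a SAD distortion and an estimated rate, and the candidate minimizing the Lagrangian cost J = Distortion + λ·Rate wins. This is a minimal sketch; the toy numbers are purely illustrative.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized 2-D blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(candidates, lam):
    """candidates: iterable of (mv, distortion, rate_bits) tuples.
    Returns the mv minimizing J = distortion + lam * rate_bits."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]
```

Note that a candidate with slightly higher distortion can still win if its motion vector is much cheaper to signal, which is exactly why the Golomb code order matters.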
The process of encoding the estimated motion vector is as follows. First, a predicted motion vector (represented by PMV or MVP) predicted from neighboring blocks of the current coded block is calculated, and a differential vector between the PMV and the motion vector searched for the current block is calculated. The encoder encodes the difference motion vector MVD.
In the general coding method, the tap coefficients of the interpolation filter used to create the brightness values of the half-pixel position and the quarter-pixel position are shown in Table 1 below. The predicted brightness value in the half-pixel unit can be generated using the surrounding eight integer pixel values. Also, the predicted brightness value at the quarter-pixel position can be generated using the surrounding seven pixel values.
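The half-pel interpolation described above can be sketched in one dimension as follows. Since Table 1 is not reproduced here, the coefficients below are an assumption: the 8-tap half-pel filter used for HEVC luma samples (−1, 4, −11, 40, 40, −11, 4, −1, normalized by 64); the actual Table 1 values may differ.

```python
HALF_PEL_TAPS = (-1, 4, -11, 40, 40, -11, 4, -1)  # assumed HEVC luma filter, sum = 64

def interp_half_pel(row, i):
    """Half-pixel sample between integer positions i and i+1 of a 1-D
    row of luma samples, using the 8 surrounding integer pixels
    (row[i-3] .. row[i+4])."""
    acc = sum(t * row[i - 3 + j] for j, t in enumerate(HALF_PEL_TAPS))
    return (acc + 32) >> 6  # round to nearest and divide by 64
```

Because the taps sum to 64, a flat region interpolates to its own value, and a linear ramp interpolates to the rounded midpoint.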
As described above, the differential motion vector (MVD) is divided into its x component and y component, and the absolute value of each separated component is encoded into a codeword using an exponential Golomb code. The sign information of each component is encoded separately. Exponential Golomb codes exist from the 0th order to the nth order, where the value of n is unbounded. The existing MVD coding technique uses the first-order exponential Golomb code in a fixed manner.
Meanwhile, Tables 2, 3, 4, and 5 below show the 0th-, first-, second-, and third-order exponential Golomb codes, respectively. MVD(x) and MVD(y) in these tables represent the x component and the y component of the differential motion vector MVD. Encoding is performed using separate fixed-length flags when |MVD(x)| or |MVD(y)| is 0 or 1/4, and the exponential Golomb code is used only when the absolute value is larger than 1/4.
Tables 2–5: codewords assigned to |MVD(x)| − 2/4 and |MVD(y)| − 2/4 under the 0th- to 3rd-order exponential Golomb codes.
According to the embodiment of the present invention, in order to overcome the compression-efficiency limit of the existing compression standard technology, coding efficiency is improved by adaptively selecting the order of the exponential Golomb code according to the characteristics of the image information to be encoded.
For example, when the information |MVD(x)| − 2/4 is 1/4, the bit string generated when coding according to Table 2 is '100', whereas the bit string generated when coding according to Table 3 is '01'. Thus, even when a differential motion vector of the same value is encoded, the encoding efficiency changes according to the order of the exponential Golomb code used. As another example, when |MVD(x)| − 2/4 is 7/4, the bit string generated when coding according to Table 2 is '1110000', whereas it can be encoded as '0110' according to Table 5. A 3-bit coding gain is therefore obtained by coding according to Table 5 rather than Table 2.
FIG. 5 illustrates the lengths of the codewords generated for various values of |MVD(x)| and |MVD(y)| when exponential Golomb codes of different orders are used.
The horizontal axis is the |MVD(x)| or |MVD(y)| value, and the vertical axis is the length of the generated bit string. As can be seen from FIG. 5, the length of the bit string generated when the first-order exponential Golomb code is used can differ greatly from the bit string length when the third-order exponential Golomb code is used.
In FIG. 5, when the |MVD(x)| or |MVD(y)| value is smaller than 5, it is preferable from the viewpoint of coding efficiency to encode with the first-order exponential Golomb code, a low-order code. On the other hand, when the |MVD(x)| or |MVD(y)| value is larger than 5, it is advantageous in terms of coding efficiency to encode with the third-order exponential Golomb code, a high-order code. Since HEVC encodes the x and y components of all MVDs using only the first-order exponential Golomb code, there is a limit to how much the coding efficiency can be improved. Therefore, coding efficiency is expected to improve with the adaptive selection method according to the embodiment of the present invention.
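The crossover behaviour discussed around FIG. 5 can be reproduced directly from the codeword-length formula of a k-th order exponential Golomb code; the sketch below computes lengths and, for a given value, the order that minimizes them. The length formula is standard and independent of prefix-bit polarity.

```python
def eg_code_len(value: int, k: int) -> int:
    """Codeword length in bits of the k-th order exponential Golomb
    code for a non-negative integer: unary prefix + body + k suffix bits."""
    q = value >> k
    return 2 * (q + 1).bit_length() - 1 + k

def best_order(value: int, orders=(0, 1, 2, 3)) -> int:
    """Order (among the given candidates) giving the shortest codeword."""
    return min(orders, key=lambda k: eg_code_len(value, k))
```

Small values favour low orders and large values favour high orders, and the document's 3-bit gain for the Table 2 vs. Table 5 example falls out of the same formula.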
Hereinafter, the method of determining the order of the exponential Golomb code at the time of encoding according to the embodiment of the present invention will be described in more detail.
In the embodiment of the present invention, in order to efficiently encode the differential motion vector, the order of the exponential Golomb code can be adaptively selected according to a unit of the video signal. The differential motion vector is then encoded using the exponential Golomb code of the determined order. The unit for selecting a new exponential Golomb code order may be a slice or an LCU.
FIG. 6 illustrates a slice-based adaptive exponential Golomb code order determination method according to an embodiment of the present invention.
The procedure for determining the optimal order of the Golomb code can be described as shown in FIG. 6.
As shown in FIG. 6, in step 1, a zero-order exponential Golomb code can be used to estimate motion vectors of all blocks in the slice.
In steps 2, 3, and 4, the motion vectors of all blocks in the slice are likewise estimated using the first-, second-, and third-order exponential Golomb codes, respectively.
After step 1 to step 4, the motion vector of each coding block in the slice can be selected as a motion vector minimizing the cost function of Equation (1).
On the other hand, PUs of various sizes may exist in each CU in each of the stages 1 to 4. Equation (1) is used for estimating the optimal motion vector for each specific PU, but the size of the PUs in the optimal form within the CU can be determined based on Equation (2) below.
Distortion in Equation (2) denotes the sum of squared errors between the pixel values of the original block and the restored block, and R denotes the bit amount generated when the current block is coded. In this equation, m is an index indicating the CU number inside the slice, and k is the order of the exponential Golomb code used when coding the motion vector.
Then, once motion vectors and coding modes are determined for all coding blocks in the slice, the rate-distortion cost function for each order of exponential Golomb code can be calculated using Equation (3) below.
In Equation (3), N_CU denotes the number of CUs present in the slice.
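The order selection over a slice can be sketched as follows: given the per-CU rate-distortion costs obtained with each candidate order (Equation (2)), the slice total is accumulated for each order (Equation (3)) and the order with the smallest total is signalled. The table of costs below is illustrative only.

```python
def select_slice_order(cu_costs, orders=(0, 1, 2, 3)):
    """cu_costs[k] is a list of length N_CU giving the rate-distortion
    cost of each CU in the slice when the k-th order exponential Golomb
    code is used for its motion vectors. Returns the order minimizing
    the slice total J_k = sum of the N_CU costs."""
    return min(orders, key=lambda k: sum(cu_costs[k]))
```

The chosen order is then written once in the slice header, so the per-slice signalling overhead is small compared with the per-MVD savings.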
The method of determining the Golomb code order for each slice described above can easily be extended to a method of selecting the order for each LCU (largest coding unit), by transmitting the order of the optimal exponential Golomb code per LCU.
Likewise, the method of determining the Golomb code order for each slice can easily be extended to a method of selecting the order for each picture, by transmitting the order of the optimal exponential Golomb code per picture.
Hereinafter, a decoding method according to an embodiment of the present invention will be described.
FIG. 7 illustrates a decoding process when the optimal exponential Golomb code order is used differently on a slice-by-slice basis according to an embodiment of the present invention.
In the first step of the decoding method according to the embodiment of the present invention, the order of the exponential Golomb code used in encoding is decoded from the slice header in the bitstream.
Then, when decoding all the blocks in the slice, the exponential Golomb codebook of the decoded order is selected and used.
FIG. 8 shows a decoding process when different orders of optimal exponential Golomb codes are used in units of LCU according to another embodiment of the present invention.
In the first step of decoding according to another embodiment of the present invention, the order of the exponential Golomb code used in encoding is decoded from the CU header or the coding quadtree syntax in the bitstream.
In the next step, when decoding all blocks in the LCU, the exponential Golomb codebook of the decoded order is selected and used.
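The decoder side can be sketched as follows: once the order k has been parsed from the slice header (or CU-level syntax), every MVD component in the unit is decoded with that order. The bit convention below matches the zero-prefixed encoder formulation sketched earlier in this document and is an assumption; the document's tables use inverted prefix bits of identical length.

```python
def exp_golomb_decode(bits: str, k: int):
    """Decode one k-th order exponential Golomb codeword from the front
    of a '0'/'1' string. Returns (value, bits_consumed)."""
    zeros = 0
    while bits[zeros] == "0":           # count the unary length prefix
        zeros += 1
    q = int(bits[zeros:2 * zeros + 1], 2) - 1   # body: zeros + 1 bits
    suffix = bits[2 * zeros + 1: 2 * zeros + 1 + k]
    value = (q << k) | (int(suffix, 2) if k else 0)
    return value, 2 * zeros + 1 + k
```

Because the order is fixed for the whole slice or LCU, the decoder needs no per-MVD side information beyond the codeword itself.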
The method according to the present invention may be implemented as a program for execution on a computer and stored in a computer-readable recording medium. Examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, and an optical data storage device; the method may also be implemented in the form of a carrier wave (for example, transmission over the Internet).
The computer readable recording medium may be distributed over a networked computer system so that computer readable code can be stored and executed in a distributed manner. And, functional programs, codes and code segments for implementing the above method can be easily inferred by programmers of the technical field to which the present invention belongs.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it is to be understood that the invention is not limited to the disclosed embodiments, and that various modifications may be made by those skilled in the art without departing from the spirit and scope of the present invention.
Claims (10)
Obtaining a predicted motion vector based on neighboring blocks of a current coding block of a video signal;
Calculating a differential motion vector between the predictive motion vector and a motion vector corresponding to the current encoded block; And
And generating a codeword by adaptively encoding the differential motion vector,
Wherein the step of adaptively encoding includes dividing the video signal into predetermined units and selecting, for each predetermined unit, an order of a Golomb code for encoding the differential motion vector.
Wherein the predetermined unit is a picture unit.
Wherein the predetermined unit is a slice unit.
Wherein the predetermined unit is an LCU unit.
Wherein the step of selecting the order selects, as the order of the Golomb code, an order that minimizes a cost function for encoding the differential motion vector.
A motion vector processing unit for obtaining a predicted motion vector based on a surrounding block of a current coded block based on a video signal and calculating a differential motion vector between the predicted motion vector and a motion vector corresponding to the current coded block; And
And a code word generator for adaptively coding the differential motion vector to generate a code word,
Wherein the codeword generation unit divides the video signal into predetermined units and selects, for each predetermined unit, an order of a Golomb code for encoding the differential motion vector.
Wherein the predetermined unit is a picture unit.
Wherein the predetermined unit is a slice unit.
Wherein the predetermined unit is an LCU unit.
Wherein the codeword generation unit selects, as the order of the Golomb code, an order that minimizes a cost function for encoding the differential motion vector.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130041287A KR20140124437A (en) | 2013-04-15 | 2013-04-15 | Method for encoding and decoding motion information and an appratus using it |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR20130041287A KR20140124437A (en) | 2013-04-15 | 2013-04-15 | Method for encoding and decoding motion information and an appratus using it |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20140124437A true KR20140124437A (en) | 2014-10-27 |
Family
ID=51994647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR20130041287A KR20140124437A (en) | 2013-04-15 | 2013-04-15 | Method for encoding and decoding motion information and an appratus using it |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20140124437A (en) |
-
2013
- 2013-04-15 KR KR20130041287A patent/KR20140124437A/en not_active Application Discontinuation
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12022103B2 (en) | Method for generating prediction block in AMVP mode | |
KR102334293B1 (en) | A method and an apparatus for processing a video signal | |
JP6321749B2 (en) | Video encoding device | |
CN107295347B (en) | Apparatus for decoding motion information in merge mode | |
KR20130050406A (en) | Method for generating prediction block in inter prediction mode | |
KR20130050407A (en) | Method for generating motion information in inter prediction mode | |
KR20130050149A (en) | Method for generating prediction block in inter prediction mode | |
KR20130050405A (en) | Method for determining temporal candidate in inter prediction mode | |
KR20130016172A (en) | Decoding method of inter coded moving picture | |
KR20130050403A (en) | Method for generating rrconstructed block in inter prediction mode | |
KR20130050404A (en) | Method for generating reconstructed block in inter prediction mode | |
KR20140124920A (en) | Method for encoding and decoding video using motion vector prediction, and apparatus thereof | |
US12034960B2 (en) | Method for generating prediction block in AMVP mode | |
US12034959B2 (en) | Method for generating prediction block in AMVP mode | |
US12028544B2 (en) | Method for generating prediction block in AMVP mode | |
KR20140124437A (en) | Method for encoding and decoding motion information and an appratus using it | |
KR20210037205A (en) | Image encoding and decoding method and apparatus through efficient generation of prediction pixels in the screen | |
KR20140124436A (en) | Method for encoding and decoding video using motion vector prediction, and apparatus thereof | |
KR20140124073A (en) | Method for encoding and decoding video using motion information merge, and apparatus thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
N231 | Notification of change of applicant | ||
WITN | Withdrawal due to no request for examination |