KR20130119717A - Apparatus and method of efficient video coding using auxiliary motion vectors - Google Patents

Apparatus and method of efficient video coding using auxiliary motion vectors Download PDF

Info

Publication number
KR20130119717A
Authority
KR
South Korea
Prior art keywords
motion information
information
auxiliary
auxiliary motion
image
Prior art date
Application number
KR1020120042736A
Other languages
Korean (ko)
Inventor
최효민
심동규
남정학
Original Assignee
광운대학교 산학협력단
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 광운대학교 산학협력단
Priority to KR1020120042736A
Publication of KR20130119717A

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159 - Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 - Motion estimation or motion compensation
    • H04N19/513 - Processing of motion vectors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a video encoding method and device for next-generation high-efficiency video coding and for next-generation scalable and multi-view video coding of ultra-high-resolution video, in which motion information usable as auxiliary information is transmitted to a decoder and then exploited to increase coding efficiency and provide various additional functions. A next-generation high-efficiency video encoding device, or a next-generation scalable and multi-view video encoding device, carries the auxiliary motion information in high-level syntax. [Reference numerals] (570,574) Use motion merging?; (571) Obtain prediction block information; (573) Use motion information on the time axis?; (575) Perform multi-directional intra prediction; (576,584) Construct a candidate group using neighboring blocks; (577) Construct a candidate group using the neighboring blocks and the corresponding blocks of the previous picture; (578) Obtain enhanced motion prediction information; (579,CC) Obtain motion merging information; (580) Add auxiliary motion information; (581) Obtain motion information; (582) Perform motion compensation; (AA,DD,GG,II,KK,MM) No; (BB,FF,HH,JJ,LL,NN) Yes; (EE) Intra prediction?

Description

Apparatus and Method of Efficient Video Coding Using Auxiliary Motion Vectors

The present invention relates to an apparatus and method for next-generation high-efficiency, scalable, and multi-view video encoding/decoding, and more particularly, to a method and apparatus for using auxiliary motion information carried in high-level syntax in the aforementioned video encoding/decoding processes.

Recently, as the hardware performance of multimedia displays has increased and consumer demand for ultra-high resolution has grown, a video coding standard for ultra-high-resolution video has become necessary. Although the bandwidth of networks carrying multimedia data has increased considerably, video coding performance remains important for achieving higher image quality. Therefore, ITU-T Study Group 16 VCEG (Video Coding Experts Group) and ISO/IEC JTC 1/SC 29/WG 11 MPEG (Moving Picture Experts Group) together formed the JCT-VC (Joint Collaborative Team on Video Coding).

For the next-generation video compression standard for ultra-high-resolution video, JCT-VC is considering a number of technologies aimed at roughly twice the compression efficiency of H.264/AVC.

Regarding motion vector prediction among inter-prediction coding techniques, conventional H.264/AVC predicts a motion vector as the component-wise median of the motion vectors of the left, above, and above-right neighboring blocks of the block to be encoded. In the next-generation video coding technique for ultra-high-resolution video, however, an optimal motion vector predictor is selected from a set of candidates, and the index of the selected candidate is transmitted to the decoder to further improve prediction accuracy and coding efficiency. The candidate set may include the motion information of spatially neighboring blocks and the motion information of temporally corresponding blocks.
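For illustration, the difference between the two prediction schemes can be sketched as follows. This is a minimal sketch rather than the patent's method: the neighbor names, candidate ordering, and cost measure are assumptions made for the example.

```python
# Minimal sketch contrasting H.264/AVC-style median motion vector prediction
# with a candidate-list approach in which only the index of the chosen
# predictor is signalled. Block names and candidate ordering are illustrative.

def median_mv_predictor(mv_left, mv_above, mv_above_right):
    """H.264/AVC-style predictor: component-wise median of three neighbors."""
    med = lambda a, b, c: sorted((a, b, c))[1]
    return (med(mv_left[0], mv_above[0], mv_above_right[0]),
            med(mv_left[1], mv_above[1], mv_above_right[1]))

def best_candidate_index(candidates, true_mv):
    """Candidate-list approach: the encoder picks the candidate closest to the
    actual motion vector and transmits only its index; the decoder rebuilds
    the same list and looks the predictor up by that index."""
    cost = lambda c: abs(c[0] - true_mv[0]) + abs(c[1] - true_mv[1])
    return min(range(len(candidates)), key=lambda i: cost(candidates[i]))

if __name__ == "__main__":
    left, above, above_right = (4, 1), (6, 0), (5, 2)
    print("median predictor:", median_mv_predictor(left, above, above_right))
    cands = [left, above, (5, 1)]          # e.g. spatial + temporal candidates
    print("signalled index:", best_candidate_index(cands, true_mv=(5, 2)))
```

The essential design point is that the encoder and decoder must construct the same candidate list in the same order, so that the transmitted index identifies the same predictor on both sides.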

The problem to be solved by the present invention is to transmit, in high-level syntax, auxiliary motion information that can provide functions such as improved coding efficiency and error compensation in next-generation high-efficiency, scalable, and multi-view video encoding methods and apparatuses.

A next-generation high-efficiency video decoding apparatus predicts motion vectors by building candidate sets from spatially neighboring motion information and from the motion information of previously decoded, temporally preceding blocks. Over wireless or long-distance networks, the delivery of video data is not guaranteed, and such data loss can cause severe image quality degradation at the decoder. In addition, when the physical design of the decoder is constrained and the motion information decoded at a previous time cannot be stored, that motion information cannot be used when building the motion prediction candidate set. Such a limitation on candidate set construction lowers coding efficiency. As another example, when compressed video data transmitted by the encoder is not received by the decoder, the quality of the reconstructed image may be degraded.

Beyond the single-layer high-efficiency video encoding method and apparatus, for scalable and multi-view video, coding efficiency may be improved when motion information is available that reflects the characteristics between corresponding pictures across layers or viewpoints.

In order to solve the above problems, a method and apparatus for next-generation high-efficiency, scalable, and multi-view video encoding using auxiliary motion information transmitted in high-level syntax are proposed.

The present invention has been made in view of the above problems, and it is an object of the present invention to provide an apparatus and method that address them.

To solve the above problems, a method and an apparatus for encoding video using auxiliary motion information transmitted in high-level syntax according to an embodiment of the present invention are proposed.

When the motion information of a temporally co-located block cannot be used in constructing the motion prediction candidate set, auxiliary motion information transmitted in high-level syntax may be used in its place.
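A minimal sketch of this substitution is shown below, assuming the auxiliary motion vector has already been parsed from the high-level syntax; the function and argument names are illustrative, not the patent's.

```python
# Hedged sketch: when the temporally co-located motion vector cannot be used
# (e.g. the previous frame's motion buffer was not kept), an auxiliary motion
# vector parsed from high-level syntax takes its place in the candidate list.

def build_mv_candidates(spatial_mvs, temporal_mv, aux_mv, temporal_available):
    candidates = list(spatial_mvs)                 # spatially neighboring blocks
    if temporal_available and temporal_mv is not None:
        candidates.append(temporal_mv)             # co-located block, previous frame
    elif aux_mv is not None:
        candidates.append(aux_mv)                  # auxiliary MV from slice header / PPS
    return candidates

if __name__ == "__main__":
    spatial = [(3, 0), (2, 1)]
    aux = (2, 0)                                   # carried in high-level syntax
    print(build_mv_candidates(spatial, temporal_mv=None, aux_mv=aux,
                              temporal_available=False))
```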

When video data loss occurs at the decoder, motion compensation for image reconstruction may be performed from reference pictures using auxiliary motion information transmitted in high-level syntax.

In scalable video coding, coding efficiency may be improved by using inter-layer motion correction information as the high-level-syntax auxiliary motion information.

As described above, according to the present invention, when motion prediction or image error recovery as described herein is performed in apparatuses and methods for next-generation high-efficiency, scalable, and multi-view video encoding/decoding of future ultra-high-resolution video, better image quality can be guaranteed at the same bit rate than when it is not.

FIG. 1 is an embodiment of a high-efficiency video encoder using auxiliary motion information proposed by the present invention.
FIG. 2 is an embodiment of a high-efficiency video decoder using auxiliary motion information proposed by the present invention.
FIG. 3 is an embodiment of a multi-channel video encoder using auxiliary motion information proposed by the present invention.
FIG. 4 is an embodiment of a multi-channel video decoder using auxiliary motion information proposed by the present invention.
FIG. 5 illustrates the operation of an embodiment of motion vector acquisition using auxiliary motion information proposed by the present invention.
FIG. 6 illustrates the operation of an error correction embodiment using auxiliary motion information proposed by the present invention.
FIG. 7A illustrates the operation of obtaining an inter-view-distance-aware block using auxiliary motion information proposed by the present invention in multi-view video coding.
FIG. 7B illustrates the operation of obtaining a position-corrected block from a reference layer using auxiliary motion information proposed by the present invention in scalable video coding.

Hereinafter, a method and apparatus for encoding video using auxiliary motion information carried in high-level syntax according to an embodiment of the present invention will be described in detail with reference to the accompanying drawings.

In the drawings, the same reference numerals denote the same components as far as possible, even across different drawings; components that share the same name but play slightly different roles in different drawings are given slightly different reference numerals rather than identical ones. In the following description of the present invention, detailed descriptions of well-known functions or configurations are omitted where they would unnecessarily obscure the subject matter of the present invention.

FIG. 1 is an embodiment of a next-generation high-efficiency video encoder using auxiliary motion information proposed by the present invention. The next-generation high-efficiency video encoder includes an inter prediction unit 100, an auxiliary motion information unit 110, and an encoder 120. It further includes a discrete cosine transform unit, an inverse discrete cosine transform unit, a quantization unit, an inverse quantization unit, an in-loop filter unit, an intra prediction unit, and a reconstructed frame buffer unit.

FIG. 2 is an embodiment of a next-generation high-efficiency video decoder using auxiliary motion information proposed by the present invention. The next-generation high-efficiency video decoder includes an inter prediction unit 200, an auxiliary motion information unit 210, and a decoder 220. It further includes an inverse discrete cosine transform unit, an inverse quantization unit, an in-loop filter unit, an intra prediction unit, and a reconstructed frame buffer unit.

FIG. 3 is an embodiment of a multi-channel video encoder using auxiliary motion information proposed by the present invention. In the reference layer, the multi-channel video encoder includes an auxiliary motion information unit 311 and an inter prediction unit 300; in the enhancement layer, it includes an auxiliary motion information unit 312, an inter prediction unit 301, and an encoder 320. In addition, the reference layer and the enhancement layer each include an intra prediction unit, a discrete cosine transform unit, an inverse discrete cosine transform unit, a quantization unit, an inverse quantization unit, and the like.

FIG. 4 is an embodiment of a multi-channel video decoder using auxiliary motion information proposed by the present invention. In the reference layer, the multi-channel video decoder includes an auxiliary motion information unit 411 and an inter prediction unit 400; in the enhancement layer, it includes an auxiliary motion information unit 412, an inter prediction unit 401, and a decoder 420. In addition, the reference layer and the enhancement layer each include an intra prediction unit, an inverse discrete cosine transform unit, an inverse quantization unit, and the like.

FIG. 5 is an embodiment of a motion vector acquisition method using auxiliary motion information proposed by the present invention. According to an aspect of the present invention, the method of obtaining a motion vector includes: checking whether motion merging is used in the coding unit (570); obtaining prediction block information (571); obtaining motion merging information (572); checking whether the block is in intra prediction mode (572); checking whether time-axis motion information is used (573); determining whether motion merging is used in prediction block units (574); performing multi-directional intra prediction (575); constructing a candidate group using neighboring blocks (576); constructing a candidate group using the neighboring blocks and the corresponding block of the previous picture (577); obtaining enhanced motion prediction information (578); obtaining motion merging information in prediction block units (579); adding auxiliary motion information (580); obtaining motion information (581); and performing motion compensation (582).

FIG. 6 is an embodiment of an error correction method using auxiliary motion information proposed by the present invention. The embodiment includes an auxiliary motion information unit 610, a lost block 660, and a block 670 used for error correction.

FIG. 7A illustrates an embodiment of a method for obtaining an inter-view-distance-aware block using auxiliary motion information proposed by the present invention. The embodiment includes an auxiliary motion information unit 711, a target block 761 in the current view, and a target block 772 that takes the inter-view distance into account.

FIG. 7B is an embodiment of a method of obtaining a position-corrected block in consideration of the inter-layer resolution using auxiliary motion information proposed by the present invention. The embodiment includes an auxiliary motion information unit 712, a target block 774 of the enhancement layer, and a target block 763 that takes the position correction into account.

FIG. 1 is an embodiment of a next-generation high-efficiency video encoder using auxiliary motion information proposed by the present invention. The input image is predicted by the intra prediction unit and the inter prediction unit 100, and the difference between the original image and the predicted image is encoded by the encoder 120 after a discrete transform and quantization process. In this case, to satisfy a specific image reconstruction scenario, inter prediction 100 may be performed without using previously encoded motion information. The next-generation high-efficiency video encoder proposed by the present invention may then perform inter prediction 100 using auxiliary motion information in place of the previously encoded motion information. The auxiliary motion information is transmitted to the decoder in high-level syntax, for example in a slice header or a picture parameter set. Alternatively, the auxiliary motion information may not be used in inter prediction at all and may instead be used for error recovery at the decoder.
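Since the slice header and picture parameter set are named only as possible carriers and no exact syntax is defined here, the following sketch uses a hypothetical layout (a one-byte presence flag followed by two 16-bit signed components) purely to illustrate how such a field could be written and parsed.

```python
# Illustrative sketch only: the flag name and field widths below are assumed,
# not taken from the patent, which does not specify a bitstream syntax.

import struct

def write_aux_motion(aux_mv):
    """Serialize an optional auxiliary MV as it might appear in a slice header."""
    if aux_mv is None:
        return struct.pack(">B", 0)                        # aux_motion_present_flag = 0
    return struct.pack(">Bhh", 1, aux_mv[0], aux_mv[1])    # flag, mv_x, mv_y

def read_aux_motion(payload):
    """Parse the optional auxiliary MV written by write_aux_motion."""
    if payload[0] == 0:
        return None
    mv_x, mv_y = struct.unpack_from(">hh", payload, 1)
    return (mv_x, mv_y)

if __name__ == "__main__":
    blob = write_aux_motion((-3, 7))
    print(read_aux_motion(blob))        # -> (-3, 7)
```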

FIG. 2 is an embodiment of a next-generation high-efficiency video decoder using auxiliary motion information proposed by the present invention. The decoder 220 decodes the quantized discrete cosine transform coefficients, and a pixel-domain differential signal is obtained through the inverse quantization unit and the inverse discrete transform process. The prediction signal obtained by the intra prediction unit and the inter prediction unit 200 is added to the differential signal, and the image is reconstructed using the in-loop filter. At this time, the inter prediction unit 200 may perform motion compensation using the auxiliary motion information 210. Alternatively, the auxiliary motion information 210 may be used to correct errors in the decoded image.
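The reconstruction path described above can be illustrated by the toy sketch below; the clipping range and the placeholder filter are assumptions made for the example, not any codec's actual implementation.

```python
# Toy sketch of the reconstruction path: the prediction signal and the decoded
# differential (residual) signal are added sample by sample and clipped to the
# valid range; the in-loop filter stage is represented by a placeholder.

import numpy as np

def in_loop_filter(frame):
    # Placeholder for deblocking / loop filtering.
    return frame

def reconstruct(prediction, residual, bit_depth=8):
    rec = prediction.astype(np.int32) + residual.astype(np.int32)
    rec = np.clip(rec, 0, (1 << bit_depth) - 1).astype(np.uint8)
    return in_loop_filter(rec)

if __name__ == "__main__":
    pred = np.full((4, 4), 120, dtype=np.uint8)
    resid = np.array([[5, -3, 0, 2]] * 4, dtype=np.int16)
    print(reconstruct(pred, resid))
```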

FIG. 3 is an embodiment of a multi-channel video encoder using auxiliary motion information proposed by the present invention. The multi-channel video encoder may be a multi-view video encoder, a scalable video encoder, or a multi-view scalable video encoder. In the lower reference-channel encoder, the input image is predicted by the intra prediction unit and the inter prediction unit 300, and the differential signal is converted into quantized discrete cosine transform coefficients by the discrete cosine transform unit and the quantization unit. The coefficients are entropy-encoded by the encoder 320. In the inter prediction unit 300, motion information for motion compensation may be obtained using the auxiliary motion information 311. The auxiliary motion information 311 used in inter prediction 300 is encoded by the encoder 320 and transmitted to the decoder. In the higher enhancement-channel encoder, the input image is predicted by the intra prediction unit and the inter prediction unit 301, and the differential signal is likewise converted into quantized discrete cosine transform coefficients by the discrete cosine transform unit and the quantization unit. The coefficients are entropy-encoded by the encoder 320. In the inter prediction unit 301, motion information for motion compensation may be obtained using the auxiliary motion information 312. The auxiliary motion information 312 used in inter prediction 301 is encoded by the encoder 320 and transmitted to the decoder.

FIG. 4 is an embodiment of a multi-channel video decoder using auxiliary motion information proposed by the present invention. The multi-channel video decoder may be a multi-view video decoder, a scalable video decoder, or a multi-view scalable video decoder. The lower reference-channel decoder obtains quantized discrete cosine transform coefficients through the decoder 420 and recovers a pixel-domain differential signal through inverse quantization and the inverse discrete cosine transform. The prediction signal obtained by the intra prediction and inter prediction unit 400 is added to the differential signal, the in-loop filter is applied, and a reconstructed image is obtained. When the auxiliary motion information 411 transmitted from the encoder is used in inter prediction, it is used in the motion information generation process for motion compensation; alternatively, it may not be used by the inter prediction unit 400 and may instead serve as information for error recovery. The higher enhancement-channel decoder likewise obtains quantized discrete cosine transform coefficients through the decoder 420 and recovers a pixel-domain differential signal through inverse quantization and the inverse discrete cosine transform. The prediction signal obtained by the intra prediction and inter prediction unit 401 is added to the differential signal, the in-loop filter is applied, and a reconstructed image is obtained. When the auxiliary motion information 412 transmitted from the encoder is used in inter prediction, it is used in the motion information generation process for motion compensation; alternatively, it may not be used by the inter prediction unit 401 and may instead serve as information for error recovery.

FIG. 5 illustrates the operation of an embodiment of motion vector acquisition using auxiliary motion information proposed by the present invention. In operation 570, it is checked whether motion merging is used at the coding unit level, which determines whether the block is reconstructed without being split into prediction block units. If motion merging is used, information indicating which candidate to select from the motion merging candidate group is obtained in operation 572. In step 573, it is checked whether time-axis motion information is used to form the candidate group, that is, whether the candidate group is built from spatially neighboring blocks only. When time-axis motion information is used, motion information is gathered from the neighboring blocks and the corresponding blocks on the time axis to form a candidate group (577), motion information is obtained in the motion information acquisition step 581, and the motion compensation step 582 is performed using the obtained motion information. If time-axis motion information is not used, step 576 of constructing a candidate group using only neighboring blocks is performed. At this point, step 580 of adding the auxiliary motion information may be performed; depending on the auxiliary motion information transmitted in the high-level syntax, it may also not be added. After the motion information is obtained in step 581, the motion compensation step 582 is performed. When motion merging is not used at the coding unit level, the prediction block information acquisition step 571 is performed, in which information about the size of the prediction block and the prediction mode is obtained. In the intra prediction check step 572, it is checked whether the prediction block is predicted in intra mode. In intra prediction mode, the multi-directional intra prediction step 575 is performed. If the block is not intra predicted, step 574 checks whether motion merging is used on a prediction block basis. If motion merging is used in prediction block units, information about which of the motion merging candidates is selected is obtained in step 579. In constructing the candidate group, the configuration of the candidate group changes depending on step 573, which checks whether time-axis motion information is used. When time-axis motion information is used, a candidate group is constructed using the neighboring blocks and the corresponding block of the previous picture in step 577, motion information is obtained in step 581, and the motion compensation step 582 is performed. If time-axis motion information is not used, step 584 of constructing a candidate group using only neighboring blocks is performed. Then step 580 of adding the auxiliary motion information transmitted from the encoder in the high-level syntax is performed; depending on the motion information transmitted in the high-level syntax, the auxiliary motion information may also not be added. After the motion information is obtained in step 581, the motion compensation step 582 is performed using the obtained motion information. When motion merging is not used in prediction block units, step 578 of obtaining enhanced motion prediction information is performed; this indicates which candidate in the enhanced motion prediction candidate configuration is used as the actual motion information.
Whether the candidate configuration uses time-axis motion information is then determined in step 573. When time-axis motion information is used, the candidate group is formed in step 577 using the neighboring blocks and the corresponding block of the previous picture, motion information is obtained in step 581, and the motion compensation step 582 is performed using the obtained motion information. If time-axis motion information is not used, step 584 of constructing a candidate group using only neighboring blocks is performed. Then step 580 of adding the auxiliary motion information transmitted from the encoder in the high-level syntax is performed; depending on the motion information transmitted in the high-level syntax, the auxiliary motion information may also not be added. After the motion information is obtained in step 581, the motion compensation step 582 is performed using the obtained motion information.
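The decision flow described above can be condensed into the following sketch; the flag names, dictionary layout, and helper functions are assumptions made for illustration, and the numbered comments refer to the reference numerals of FIG. 5.

```python
# Condensed sketch of the FIG. 5 decision flow, assuming the 'merge', 'intra',
# and temporal-usage flags have already been parsed. Helpers are placeholders.

def build_candidates(cu, aux_mv):
    cands = list(cu["spatial_mvs"])                         # 576/584
    if cu["use_temporal"] and cu["temporal_mv"] is not None:
        cands.append(cu["temporal_mv"])                     # 577
    elif aux_mv is not None:
        cands.append(aux_mv)                                # 580: auxiliary MV added
    return cands

def reconstruct_block(cu, aux_mv):
    if cu["merge_at_cu"]:                                   # 570
        cands = build_candidates(cu, aux_mv)
        return motion_compensate(cands[cu["merge_index"]])  # 572, 581, 582
    if cu["intra"]:                                         # 571, 572
        return intra_predict(cu)                            # 575
    cands = build_candidates(cu, aux_mv)                    # 574, 578/579
    mv = add_mvd(cands[cu["mvp_index"]], cu["mvd"])         # 581
    return motion_compensate(mv)                            # 582

# Placeholder helpers so the sketch is self-contained.
def motion_compensate(mv): return ("inter", mv)
def intra_predict(cu): return ("intra", cu.get("intra_mode"))
def add_mvd(pred, mvd): return (pred[0] + mvd[0], pred[1] + mvd[1])

if __name__ == "__main__":
    cu = {"merge_at_cu": False, "intra": False, "use_temporal": False,
          "spatial_mvs": [(1, 0)], "temporal_mv": None,
          "mvp_index": 1, "mvd": (0, 1)}
    print(reconstruct_block(cu, aux_mv=(2, 2)))   # auxiliary MV fills slot 1
```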

FIG. 6 illustrates the operation of an error correction embodiment using auxiliary motion information proposed by the present invention. Owing to errors that occur while the encoded information is transmitted from the encoder, the decoder may end up decoding the image using erroneous information. In this case, the auxiliary motion information 610 transmitted from the encoder may be used to correct errors in the image decoded from that erroneous information. As shown in FIG. 6, for a lost block 660 in the current reconstructed frame, the auxiliary motion information 610 may be used to obtain an error correction block 670 from an appropriate position in the previous reconstructed frame.
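A minimal sketch of this correction step follows, assuming the lost block's position and size are known and the auxiliary motion vector has been parsed; the border-clamping policy is an assumption.

```python
# Minimal error-correction sketch of the idea in FIG. 6: a lost block in the
# current frame is filled with the block at the auxiliary-MV-displaced position
# in the previous reconstructed frame. NumPy is used only for array slicing.

import numpy as np

def conceal_lost_block(prev_frame, x, y, w, h, aux_mv):
    """Return a (h, w) patch copied from prev_frame at (x, y) + aux_mv."""
    H, W = prev_frame.shape
    rx = min(max(x + aux_mv[0], 0), W - w)      # clamp to stay inside the frame
    ry = min(max(y + aux_mv[1], 0), H - h)
    return prev_frame[ry:ry + h, rx:rx + w].copy()

if __name__ == "__main__":
    prev = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy 8x8 "frame"
    patch = conceal_lost_block(prev, x=4, y=2, w=2, h=2, aux_mv=(1, 1))
    print(patch)
```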

FIG. 7A illustrates the operation of obtaining an inter-view-distance-aware block using auxiliary motion information proposed by the present invention in multi-view video coding. In a multi-view video encoder/decoder, which is one embodiment of the multi-channel video encoder/decoder, information that takes the distance between viewpoints into account may be transmitted to the decoder as auxiliary motion information 711. For the target block 761 of the current-view frame, a block 772 that accounts for the inter-view distance can be located using the auxiliary motion information 711, and the target block 761 can then be decoded using the information of the corresponding block 772.
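One way to read this is that the auxiliary motion information supplies a disparity that scales with the distance between viewpoints; the sketch below illustrates that interpretation, which is an assumption made for the example rather than a formula given here.

```python
# Sketch of the inter-view lookup in FIG. 7A, treating the auxiliary motion
# information as a per-unit-of-view-distance disparity (an interpretation for
# illustration only).

def corresponding_block_position(x, y, view_distance, disparity_per_unit):
    """Shift the current-view block position by a disparity that scales with
    the distance between the current view and the reference view."""
    dx = round(view_distance * disparity_per_unit[0])
    dy = round(view_distance * disparity_per_unit[1])
    return (x + dx, y + dy)

if __name__ == "__main__":
    # Block at (64, 32) in view 2, predicting from view 0 (distance 2).
    print(corresponding_block_position(64, 32, view_distance=2,
                                       disparity_per_unit=(-8, 0)))  # -> (48, 32)
```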

FIG. 7B illustrates the operation of obtaining a position-corrected block from the reference layer using auxiliary motion information proposed by the present invention in scalable video coding. In a scalable video encoder/decoder, which is one embodiment of the multi-channel video encoder/decoder, correction information for the corresponding positions of pixels between layers may be required because the layers differ in resolution. In this case, the auxiliary motion information 712 transmitted from the encoder may be used as the position correction information. Accordingly, for the target block 774 of the enhancement layer frame, a position-corrected block 763 may be obtained from the reference layer frame using the auxiliary motion information 712 and used for prediction.
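The position correction can be illustrated as follows, assuming dyadic spatial scalability and treating the auxiliary motion information as a small offset applied after scaling; the scaling rule and rounding are assumptions, since no exact formula is fixed here.

```python
# Sketch of the inter-layer position correction in FIG. 7B: an enhancement-layer
# block position is scaled down by the resolution ratio and then adjusted by a
# correction offset carried as auxiliary motion information.

def reference_layer_position(x_enh, y_enh, scale_x, scale_y, correction_mv):
    """Map an enhancement-layer position to the reference layer and apply the
    auxiliary position-correction offset."""
    x_ref = round(x_enh / scale_x) + correction_mv[0]
    y_ref = round(y_enh / scale_y) + correction_mv[1]
    return (x_ref, y_ref)

if __name__ == "__main__":
    # 2x spatial scalability with a small signalled correction of (+1, -1).
    print(reference_layer_position(128, 96, scale_x=2, scale_y=2,
                                   correction_mv=(1, -1)))  # -> (65, 47)
```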

Although the present invention has been described above with reference to limited embodiments and drawings, and although specific terms have been used, they are used only for the purpose of illustrating the present invention and are not intended to limit the scope of the invention set forth in the claims. Accordingly, various modifications and variations will be apparent to those skilled in the art to which the present invention pertains.

Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined not only by the appended claims but also by their equivalents.

Claims (3)

1. A next-generation high-efficiency image encoding method and apparatus, or a multi-channel image encoding method and apparatus, comprising auxiliary motion information that can be used in an inter prediction process.

2. A next-generation high-efficiency image decoding method and apparatus, or a multi-channel image decoding method and apparatus, comprising auxiliary motion information that can be used in the inter prediction process.

3. A next-generation high-efficiency image decoding method and apparatus, or a multi-channel image decoding method and apparatus, comprising auxiliary motion information that can be used in an image error correction process.



KR1020120042736A 2012-04-24 2012-04-24 Apparatus and method of efficient video coding using auxiliary motion vectors KR20130119717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020120042736A KR20130119717A (en) 2012-04-24 2012-04-24 Apparatus and method of efficient video coding using auxiliary motion vectors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020120042736A KR20130119717A (en) 2012-04-24 2012-04-24 Apparatus and method of efficient video coding using auxiliary motion vectors

Publications (1)

Publication Number Publication Date
KR20130119717A true KR20130119717A (en) 2013-11-01

Family

ID=49850644

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020120042736A KR20130119717A (en) 2012-04-24 2012-04-24 Apparatus and method of efficient video coding using auxiliary motion vectors

Country Status (1)

Country Link
KR (1) KR20130119717A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017069450A1 (en) * 2015-10-21 2017-04-27 경희대학교 산학협력단 Auxiliary information for improving media value, and method and apparatus for remotely extracting, expressing, and transmitting same

Similar Documents

Publication Publication Date Title
TWI826546B (en) Improvements on history-based motion vector predictor
TWI761415B (en) Motion vector reconstructions for bi-directional optical flow (bio)
US11140408B2 (en) Affine motion prediction
US10334258B2 (en) Scalable video coding method and apparatus using inter prediction mode
KR101947142B1 (en) Methods of decoding using skip mode and apparatuses for using the same
TWI539796B (en) Disparity vector derivation
EP2862353B1 (en) Method and apparatus for efficient slice header processing
US8948243B2 (en) Image encoding device, image decoding device, image encoding method, and image decoding method
KR20210072064A (en) Inter prediction method and apparatus
JP2022514161A (en) Devices and methods for interprediction of triangular partitions in coding blocks
JP2014524707A (en) 3D video with asymmetric spatial resolution
CA2763489C (en) Method and device for improved multi-layer data compression
US12075071B2 (en) Modification of picture parameter set (PPS) for HEVC extensions
KR20140085515A (en) Efficient storage of motion information for high efficiency video coding
US11363301B2 (en) Conditionally parsed extension syntax for HEVC extension processing
KR20130045784A (en) Method and apparatus for scalable video coding using inter prediction mode
KR20230150284A (en) Efficient video encoder architecture
KR101850152B1 (en) Method for applying adaptive loop filter and scalable video coding apparatus
KR20150100355A (en) Method and apparatus of inter prediction
KR20130119717A (en) Apparatus and method of efficient video coding using auxiliary motion vectors
US20150010083A1 (en) Video decoding method and apparatus using the same
KR20140127400A (en) Method for encoding and decoding image, and apparatus thereof
CN112887736B (en) Conditional parse extension syntax for HEVC extension processing
KR20110087871A (en) Method and apparatus for image interpolation having quarter pixel accuracy using intra prediction modes
KR20140127401A (en) Method for encoding and decoding image, and apparatus thereof

Legal Events

Date Code Title Description
WITN Withdrawal due to no request for examination