WO2010038961A2 - Motion vector encoding/decoding method and apparatus using estimation of a plurality of motion vectors, and image encoding/decoding method and apparatus using the same - Google Patents
Motion vector encoding/decoding method and apparatus using estimation of a plurality of motion vectors, and image encoding/decoding method and apparatus using the same
- Publication number
- WO2010038961A2 PCT/KR2009/005524 KR2009005524W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- motion vector
- motion
- encoding
- block
- motion vectors
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/139—Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- The present invention relates to a motion vector encoding/decoding method and apparatus using estimation of a plurality of motion vectors, and to an image encoding/decoding method and apparatus using the same. More particularly, the present invention relates to a method and apparatus for improving compression efficiency by efficiently encoding or decoding a motion vector used for motion estimation and compensation when encoding or decoding an image.
- a motion vector is generated through motion estimation and motion compensation is performed using the motion vector.
- In the motion vector encoding and decoding method commonly used in the field of image encoding and decoding, the motion vector of the current block is predictively encoded using the motion vectors of spatially neighboring blocks. That is, since the motion vector of the current block is closely correlated with the motion vectors of the neighboring blocks, a prediction value for the current motion vector is calculated from the motion vectors of the neighboring blocks and generated as a predicted motion vector (PMV). The encoding efficiency is then increased by encoding only the difference between the motion vector of the current block and the predicted motion vector, rather than the motion vector value itself, thereby reducing the amount of bits required to encode the motion vector.
- The compression efficiency increases as the predicted motion vector becomes more similar to the motion vector of the current block. Conversely, if the predicted motion vector differs greatly from the motion vector of the current block, the bit rate increases and the compression efficiency decreases.
- Accordingly, the main object of the present invention is to improve compression efficiency by efficiently encoding or decoding a motion vector used for motion estimation and compensation when encoding or decoding an image.
- To this end, the present invention provides an apparatus for encoding a motion vector, including: a motion vector estimator which estimates a plurality of motion vectors, estimating one of the plurality of motion vectors as the motion vector of the current block and estimating the remaining motion vectors according to at least one estimation criterion predefined with the image decoding apparatus; and a motion vector encoder which encodes motion information generated using the plurality of motion vectors.
- The present invention also provides a method of encoding a motion vector, including: a motion vector estimating step of estimating a plurality of motion vectors, wherein one of the plurality of motion vectors is estimated as the motion vector of the current block and the remaining motion vectors are estimated according to at least one estimation criterion predefined with the image decoding apparatus; and a motion information encoding step of encoding motion information generated using the plurality of motion vectors.
- The present invention also provides an apparatus for encoding an image, including: a prediction unit which encodes motion information generated by estimating a plurality of motion vectors, uses one of the plurality of motion vectors as the motion vector of the current block, and generates a prediction block of the current block using that motion vector; a subtraction unit which generates a residual block by subtracting the prediction block from the current block; an encoder which encodes the residual block; and an encoded data generator which generates and outputs encoded data including the encoded motion information and the encoded residual block.
- The present invention also provides a method of encoding an image in which motion information generated by estimating a plurality of motion vectors is encoded and one of the plurality of motion vectors is used as the motion vector of the current block, the method including: a prediction step of generating a prediction block of the current block using that motion vector; and a subtraction step of generating a residual block by subtracting the prediction block from the current block.
- The present invention also provides an apparatus for decoding a motion vector, including: a motion vector estimator which estimates one or more motion vectors according to at least one estimation criterion predefined with the image encoding apparatus; a motion information reconstruction unit which decodes and reconstructs the encoded motion information; and a motion vector reconstruction unit which reconstructs the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors.
- The present invention also provides a method of decoding a motion vector, including: a motion vector estimating step of estimating one or more motion vectors according to at least one estimation criterion predefined with the image encoding apparatus; a motion information reconstruction step of decoding and reconstructing the encoded motion information; and a motion vector reconstruction step of reconstructing the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors.
- The present invention also provides an apparatus for decoding an image, including: an information extracting unit which extracts an encoded residual block and encoded motion information from encoded data; a decoder which decodes and reconstructs the encoded residual block; a predictor which estimates one or more motion vectors according to at least one estimation criterion predefined with the image encoding apparatus, decodes and reconstructs the encoded motion information, reconstructs the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors, and generates a prediction block of the current block using the reconstructed motion vector of the current block; and an adder which reconstructs the current block by adding the reconstructed residual block and the prediction block.
- The present invention also provides a method of decoding an image, including: an information extraction step of extracting an encoded residual block and encoded motion information from encoded data; a decoding step of decoding and reconstructing the encoded residual block; a step of estimating one or more motion vectors according to at least one estimation criterion predefined with the image encoding apparatus and decoding and reconstructing the encoded motion information; and a step of reconstructing the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors.
- According to the present invention, the amount of bits required to encode a motion vector for motion estimation and compensation can be reduced while using a more accurate motion vector, thereby improving compression efficiency.
- FIG. 1 is an exemplary diagram for explaining a process of encoding a motion vector according to the H.264/AVC standard.
- FIG. 2 is an exemplary diagram illustrating the number of bits per symbol for entropy encoding.
- FIG. 3 is a block diagram schematically illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention
- FIG. 4 is a block diagram schematically illustrating a configuration of a motion vector encoding apparatus according to an embodiment of the present invention
- FIG. 5 is an exemplary diagram for explaining a process of estimating a first motion vector according to an embodiment of the present invention
- FIG. 6 is an exemplary diagram for explaining a process of estimating a second motion vector according to an embodiment of the present invention
- FIG. 7 is a flowchart illustrating a motion vector encoding method according to an embodiment of the present invention.
- FIG. 8 is a flowchart illustrating a video encoding method according to an embodiment of the present invention.
- FIG. 9 is a block diagram schematically illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- FIG. 10 is a block diagram schematically illustrating the configuration of a motion vector decoding apparatus according to an embodiment of the present invention.
- FIG. 11 is a flowchart illustrating a motion vector decoding method according to an embodiment of the present invention.
- FIG. 12 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- FIG. 1 is an exemplary diagram for explaining a process of encoding a motion vector according to the H.264/AVC standard.
- In FIG. 1, block D is the current block whose motion vector is to be encoded, and block A, block B, and block C are neighboring blocks of block D.
- Each motion vector MV is defined as having a horizontal component MVx and a vertical component MVy. The motion vector of the current block D is assumed to be (2,0), and the motion vectors of the neighboring blocks A, B, and C are assumed to be (2,0), (2,1), and (2,2), respectively.
- The predicted motion vector PMV = (PMVx, PMVy) for the motion vector of the current block is calculated as in Equation 1, that is, by Median(·), which computes the component-wise median of the motion vectors of the neighboring blocks A, B, and C.
- Once the predicted motion vector is obtained using Equation 1, the differential motion vector DMV is obtained by subtracting the predicted motion vector from the motion vector of the current block to be encoded, as in Equation 2 (DMV = MV − PMV).
- The differential motion vector is then encoded and stored (or transmitted) by a predetermined method, such as entropy encoding.
- In the example above, the predicted motion vector is Median((2,0), (2,1), (2,2)) = (2,1), so the differential motion vector according to Equation 2 becomes (0, −1).
- FIG. 2 is an exemplary diagram showing the number of bits per symbol for entropy encoding.
- Referring to FIG. 2, encoding the differential motion vector (0, −1) obtained with the median-based predicted motion vector requires four bits (one bit for the horizontal component and three bits for the vertical component). If (2,0) were instead used as the predicted motion vector, the differential motion vector would become (0,0), and the amount of bits required to encode it would be two bits (one bit for each component). Compared with the method using the median-based predicted motion vector, two bits can therefore be saved.
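- As an illustration of the computation just described, the following Python sketch reproduces the worked example of FIG. 1. The bit costs in bits_per_symbol are hypothetical values chosen only to match the counts quoted in the text (a zero component costs one bit, a ±1 component costs three bits); they are not the actual table of FIG. 2.

```python
# Worked example of H.264/AVC-style median motion vector prediction (FIG. 1).
def median3(a, b, c):
    return sorted((a, b, c))[1]

def predict_mv(mv_a, mv_b, mv_c):
    """Component-wise median of the three neighboring motion vectors (Equation 1)."""
    return (median3(mv_a[0], mv_b[0], mv_c[0]),
            median3(mv_a[1], mv_b[1], mv_c[1]))

def differential_mv(mv, pmv):
    """DMV = MV - PMV (Equation 2)."""
    return (mv[0] - pmv[0], mv[1] - pmv[1])

# Hypothetical bits-per-symbol table (illustrative only).
bits_per_symbol = {0: 1, 1: 3, -1: 3, 2: 5, -2: 5}

def dmv_bits(dmv):
    return sum(bits_per_symbol[c] for c in dmv)

mv_d = (2, 0)                              # motion vector of current block D
mv_a, mv_b, mv_c = (2, 0), (2, 1), (2, 2)  # neighboring blocks A, B, C

pmv = predict_mv(mv_a, mv_b, mv_c)         # -> (2, 1)
print(pmv, differential_mv(mv_d, pmv), dmv_bits(differential_mv(mv_d, pmv)))    # (2, 1) (0, -1) 4
print(differential_mv(mv_d, (2, 0)), dmv_bits(differential_mv(mv_d, (2, 0))))   # (0, 0) 2
```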
- FIG. 3 is a block diagram schematically illustrating a configuration of an image encoding apparatus according to an embodiment of the present invention.
- Referring to FIG. 3, the image encoding apparatus 300 may include a block mode determiner 310, a predictor 320, a subtractor 330, a first encoder 340, a second encoder 350, an encoded data generator 360, a decoder 370, an adder 380, and a reference picture storage 390.
- The image encoding apparatus 300 may be a personal computer (PC), notebook computer, personal digital assistant (PDA), portable multimedia player (PMP), PlayStation Portable (PSP), or the like, and refers to various devices each including a communication device such as a communication modem for communicating with various devices or a wired/wireless communication network, a memory for storing various programs and data for encoding an image, and a microprocessor for executing the programs to perform operations and control.
- The block mode determiner 310 applies a predetermined optimality criterion (for example, a rate-distortion optimization criterion) to the block modes that can be selected for the current block to be encoded in the image, and determines the block mode for the current block (e.g., the block mode with the minimum rate-distortion cost). If the block mode is preset in the image encoding apparatus 300, the block mode determiner 310 need not be included in the image encoding apparatus 300 and may be omitted.
- The prediction unit 320 generates and outputs a prediction block by predicting the current block. That is, the prediction unit 320 predicts the pixel value of each pixel of the current block to be encoded in the image and generates a prediction block having the predicted pixel values.
- the prediction unit 320 may include a motion vector encoder 322 and a motion compensator 324.
- The motion vector encoder 322 estimates motion vectors in block units (e.g., 16×16 blocks, 16×8 blocks, 8×8 blocks, and so on) corresponding to the block mode for the current block output from the block mode determiner 310 or a preset block mode.
- The motion vector encoder 322 may output index information of the reference picture, which is information for identifying the reference picture used to estimate the first motion vector and the second motion vector.
- In this case, the motion vector encoder 322 may use the index information of the reference picture output from the block mode determiner 310 or index information of a preset reference picture, and may estimate the first motion vector and the second motion vector with reference to the reference picture indicated by the index information.
- Alternatively, the motion vector encoder 322 may calculate an error value for the blocks according to the block mode in each of the reference pictures located in the temporal vicinity of the current picture to be encoded, and estimate the first motion vector and the second motion vector based on the reference picture containing the block with the minimum error value.
- The motion vector encoder 322 will be described in detail later with reference to FIG. 4.
- The motion compensator 324 generates and outputs the prediction block of the current block by applying, to the reference picture indicated by the index information output from the motion vector encoder 322, the second motion vector, which is the motion vector of the current block output from the motion vector encoder 322.
- The subtraction unit 330 generates a residual block by subtracting the prediction block from the current block. That is, the subtractor 330 calculates the difference between the pixel value of each pixel of the current block to be encoded and the predicted pixel value of each pixel of the prediction block predicted by the predictor 320, and generates a residual block having a block-shaped residual signal.
- the first encoder 340 transforms and quantizes the residual block and outputs a quantized residual block. That is, the first encoder 340 converts the residual signal of the residual block into the frequency domain, converts each pixel value of the residual block into a frequency coefficient, and quantizes the residual block having the frequency coefficient.
- Here, the first encoder 340 may transform the residual signal into the frequency domain using various transform techniques that convert a spatial-domain image signal into the frequency domain, such as the Hadamard transform or the discrete cosine transform (DCT)-based transform, and the residual signal transformed into the frequency domain becomes frequency coefficients.
- In addition, the first encoder 340 may quantize the transformed residual block using dead zone uniform threshold quantization (DZUTQ), a quantization weighted matrix, an improved quantization technique thereof, or the like.
- Meanwhile, although the first encoder 340 has been described as transforming and quantizing the residual block, the residual block having frequency coefficients may be generated by transforming the residual signal without performing the quantization process, the quantization process may be performed without transforming the residual signal into frequency coefficients, or neither the transform nor the quantization process may be performed.
- In the case where neither process is performed, the first encoder 340 may be omitted from the image encoding apparatus 300 according to an embodiment of the present invention.
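- As a rough illustration of one of the quantization options mentioned above, the sketch below implements a generic dead-zone uniform threshold quantizer. The step size, the dead-zone rounding offset, and the mid-step reconstruction are example assumptions, not parameters prescribed by the present description.

```python
# Minimal dead-zone uniform threshold quantizer (DZUTQ) sketch:
# level = sign(c) * floor(|c| / step + f), where 0 <= f < 0.5 widens the
# zero bin (the "dead zone") relative to a plain uniform quantizer.
import math

def dzutq_quantize(coeffs, step=10.0, f=1.0 / 6.0):
    levels = []
    for c in coeffs:
        sign = -1 if c < 0 else 1
        levels.append(sign * int(math.floor(abs(c) / step + f)))
    return levels

def dzutq_dequantize(levels, step=10.0):
    # Simple reconstruction at the bin center; real codecs use tuned offsets.
    return [lvl * step for lvl in levels]

print(dzutq_quantize([-27.0, -4.0, 3.0, 12.0, 55.0]))  # -> [-2, 0, 0, 1, 5]
```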
- The second encoder 350 encodes the residual block output from the first encoder 340. That is, the second encoder 350 scans the quantized frequency coefficients, frequency coefficients, or residual signal of the residual block according to various scan methods such as the zigzag scan to generate a quantized frequency coefficient sequence, frequency coefficient sequence, or signal sequence, and encodes it using various encoding techniques such as entropy coding. Meanwhile, the functions of the first encoder 340 and the second encoder 350 may be integrated and implemented as a single encoder.
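- For clarity, the following sketch shows a zigzag scan of a 4×4 block of quantized coefficients into a one-dimensional sequence, which is the kind of sequence the second encoder 350 would then entropy-code. The block contents are made-up example values.

```python
# Zigzag scan of an N x N block of quantized coefficients into a 1-D sequence.
def zigzag_scan(block):
    n = len(block)
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return [block[r][c] for r, c in order]

quantized = [[5, 3, 0, 0],
             [2, 0, 0, 0],
             [1, 0, 0, 0],
             [0, 0, 0, 0]]
print(zigzag_scan(quantized))  # coefficients grouped by anti-diagonal: [5, 3, 2, 1, 0, 0, ...]
```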
- the encoded data generator 360 generates and outputs encoded data including the encoded residual block output from the encoder 350 and the encoded motion information output from the motion vector encoder 322.
- the encoded data generator 360 may additionally include information about the block mode of the current block that is output from the block mode determiner 310 or the preset current block in the encoded data.
- the encoded data generator 360 may be implemented as a multiplexer (MUX).
- The decoder 370 inverse-quantizes and inverse-transforms the residual block quantized by the first encoder 340. That is, the decoder 370 inverse-quantizes the quantized frequency coefficients of the quantized residual block to generate a residual block having frequency coefficients, and inverse-transforms this inverse-quantized residual block to reconstruct a residual block having pixel values, that is, the reconstructed residual block.
- Here, the decoder 370 may perform the inverse transform and inverse quantization using the inverses of the transform method and quantization method used by the first encoder 340.
- In addition, if the first encoder 340 performs only the transform without quantization, the decoder 370 performs only the inverse transform and not the inverse quantization; if the first encoder 340 performs only the quantization without the transform, the decoder 370 performs only the inverse quantization and not the inverse transform. If the first encoder 340 performs neither the transform nor the quantization, or is omitted from the image encoding apparatus 300, the decoder 370 likewise performs neither inverse transform nor inverse quantization, or may be omitted from the image encoding apparatus 300.
- the adder 380 reconstructs the current block by adding the prediction block predicted by the predictor 320 and the residual block reconstructed by the decoder 370.
- The reference picture storage unit 390 accumulates the reconstructed current blocks output from the adder 380 in picture units and stores them as reference pictures, so that they can be used as reference pictures when the prediction unit 320 later encodes the next block of the current block or other blocks.
- When based on the H.264/AVC standard, the image encoding apparatus 300 may further include an intra prediction unit for intra prediction and a deblocking filter unit for deblocking-filtering the reconstructed current block. In addition, the first encoder 340 and the decoder 370 may additionally perform transform and quantization (or inverse transform and inverse quantization) operations on a specific picture (e.g., an intra picture) based on the H.264/AVC standard.
- Here, deblocking filtering refers to an operation for reducing block distortion caused by encoding an image in block units, and one of the following may be selectively used: applying a deblocking filter to both block boundaries and macroblock boundaries, applying a deblocking filter only to macroblock boundaries, or not applying a deblocking filter at all.
- FIG. 4 is a block diagram schematically illustrating a configuration of a motion vector encoding apparatus according to an embodiment of the present invention.
- The motion vector encoding apparatus according to an embodiment of the present invention may be implemented as the motion vector encoder 322 in the image encoding apparatus 300 described above with reference to FIG. 3, and is hereinafter referred to as the motion vector encoder 322 for convenience of description.
- the motion vector encoder 322 may include a first motion vector estimator 410, a second motion vector estimator 420, and a motion information encoder 430. .
- The first motion vector estimator 410 estimates, from among the motion vectors included in the predetermined search range for first motion vector estimation, the first motion vector according to a predetermined first estimation criterion that is shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus to be described later, so that the image decoding apparatus can estimate the same first motion vector by itself.
- As the predetermined first estimation criterion, an adjacent pixel matching (TM) method as shown in FIG. 5 may be used.
- the adjacent pixel matching method may be calculated by Equation 3 below.
- search range SR1 represents the size of the region on the reference picture for first motion vector estimation. For example, referring to FIG. 5, a search range defined by 8 pixels in the horizontal direction and 8 pixels in the vertical direction may be considered.
- Here, TMS (Temporal Matching Set) denotes the set of indexes j of the adjacent pixels used for matching. For each candidate motion vector included in the search range SR1 for motion estimation, the adjacent pixel matching method computes, over the finite number of indexes j in the TMS, the difference between the pixel value indicated by index j around the reference block on the reference picture (the block indicated by the candidate motion vector) and the corresponding pixel value indicated by index j around the current block, and determines this difference as the estimation error.
- Since the adjacent pixel values of the current block in the current picture belong to a region that has already been encoded and reconstructed, they are information known to both the image encoding apparatus 300 and the image decoding apparatus. Such information that is known to the image decoding apparatus and is used for estimating the first motion vector is called a predetermined decoding condition C dec. That is, the candidate motion vector having the smallest estimation error within the search range, which the image decoding apparatus can also determine, is estimated as the first motion vector.
- The estimation error may be measured using criteria such as the sum of squared differences (SSD) or the sum of absolute differences (SAD).
- As long as the same first motion vector can be estimated in the image decoding apparatus using the predetermined decoding condition C dec (e.g., previously reconstructed neighboring pixel values corresponding to the current block in the reference picture and the current picture, which are also available after reconstruction in the image decoding apparatus), estimation criteria other than the adjacent pixel matching method shown in FIG. 5 can be used as the predetermined first estimation criterion. For example, when the first motion vector is estimated using the median calculation method of the aforementioned H.264/AVC standard, the first motion vector of the current block shown in FIG. 1 is obtained from the motion vectors of the neighboring blocks as in Equation 4, and the image decoding apparatus can estimate the same value using the predetermined decoding condition C dec; in this case, C dec is the set of previously determined motion vectors of the blocks adjacent to the current block.
- That is, the first estimation criterion may be defined in various ways, such as the median calculation method or a boundary pixel matching method, according to the application and purpose to which the present invention is applied.
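- The sketch below illustrates the idea of the adjacent pixel matching criterion in the spirit of Equation 3. The template shape (a one-pixel strip above and to the left of the block), the SAD error measure, and the search range are assumptions chosen for the example, not the exact definitions used in this description.

```python
# Decoder-reproducible first motion vector estimation by adjacent pixel matching.
# Both encoder and decoder only touch already-reconstructed pixels (C_dec),
# so both sides can derive the same first motion vector independently.
# Picture boundary checks are omitted for brevity.

def template_positions(x, y, block_size):
    """Assumed template set: a 1-pixel strip above and to the left of the block."""
    top = [(x + dx, y - 1) for dx in range(block_size)]
    left = [(x - 1, y + dy) for dy in range(block_size)]
    return top + left

def estimate_first_mv(cur_picture, ref_picture, x, y, block_size=8, search_range=8):
    tms = template_positions(x, y, block_size)
    best_mv, best_err = (0, 0), float("inf")
    for mvy in range(-search_range, search_range + 1):
        for mvx in range(-search_range, search_range + 1):
            # SAD between the reconstructed template of the current block and the
            # co-located template around the candidate reference block.
            err = sum(abs(cur_picture[py][px] - ref_picture[py + mvy][px + mvx])
                      for (px, py) in tms)
            if err < best_err:
                best_err, best_mv = err, (mvx, mvy)
    return best_mv
```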
- The second motion vector estimator 420 determines, from among the motion vectors included in the search range for second motion vector estimation, the second motion vector according to a predetermined second estimation criterion. As illustrated in FIG. 6, for example, the second motion vector estimator 420 estimates the motion vector for the current block using a second estimation criterion that can be used only by the image encoding apparatus 300.
- the second estimation criterion may be a rate-distortion optimization criterion such as Equation 5, but other criteria may be used.
- In Equation 5, the second motion vector is the candidate motion vector, among the candidate motion vectors included in the search range SR2 for second motion vector estimation, that minimizes f enc(·), which represents the second estimation criterion.
- the search range SR1 for estimating the first motion vector and the search range SR2 for estimating the second motion vector are not necessarily identical to each other.
- The predetermined second estimation criterion f enc(·) is preferably a rate-distortion optimization function J(·), which can be expressed in terms of the distortion function D(·) and the rate function R(·).
- The predetermined encoding condition C enc refers to the elements that influence the determination of the second motion vector. Referring to FIGS. 5 and 6, the pixel values of the current picture, the pixel values of the reference block in the reference picture, and the like correspond to the encoding condition C enc.
- The distortion function D(·) and the rate function R(·) can be calculated through Equation 6, where the motion vector used in the rate function R(·) denotes the first motion vector obtained after the first motion vector estimation is performed.
- MES (Motion Estimation Set)
- the MES is defined as representing all pixels in the current block (or reference block), but may be limited to representing only some of the pixel positions depending on applications such as fast matching.
- the predetermined second estimation criterion may be defined as in the above-described embodiment, but is not necessarily limited thereto and may be defined in various ways according to the application and the object to which the present invention is applied.
- In addition, the rate function R(·) may be omitted, or a predetermined default value, such as a median value, may be used in the rate function R(·) instead of the first motion vector output after the first motion vector estimation is performed.
- In this case, since the second motion vector estimator 420 does not have to use the first motion vector output from the first motion vector estimator 410, the order of the first motion vector estimator 410 and the second motion vector estimator 420 may be changed without departing from the essential characteristics of the present invention.
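- The following sketch shows one possible encoder-only search in the spirit of Equations 5 and 6, using a SAD distortion term and an approximate bit-cost term for the difference from the first motion vector. The Lagrange multiplier and the Exp-Golomb-style bit estimate are assumptions made for illustration.

```python
# Encoder-only second motion vector estimation with a rate-distortion criterion:
# J(mv) = D(mv) + lambda * R(mv - mv1), minimized over the search range SR2.
# Picture boundary checks are omitted for brevity.

def _ue_bits(v):
    # Approximate signed Exp-Golomb code length for one vector component (illustrative).
    code_num = 2 * abs(v) - (1 if v > 0 else 0)
    return 2 * (code_num + 1).bit_length() - 1

def estimate_second_mv(cur_block, ref_picture, x, y, mv1, block_size=8,
                       search_range=8, lam=4.0):
    best_mv, best_cost = mv1, float("inf")
    for mvy in range(-search_range, search_range + 1):
        for mvx in range(-search_range, search_range + 1):
            # Distortion: SAD between the current block and the candidate reference block.
            dist = sum(abs(cur_block[j][i] - ref_picture[y + mvy + j][x + mvx + i])
                       for j in range(block_size) for i in range(block_size))
            # Rate: approximate bits to code the difference from the first motion vector.
            rate = _ue_bits(mvx - mv1[0]) + _ue_bits(mvy - mv1[1])
            cost = dist + lam * rate
            if cost < best_cost:
                best_cost, best_mv = cost, (mvx, mvy)
    return best_mv
```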
- The predetermined first estimation criterion f dec(·) used by the first motion vector estimator 410 and the predetermined second estimation criterion f enc(·) used by the second motion vector estimator 420 can be applied in various forms according to the application and purpose to which the present invention is applied. However, as described above, the degree to which f enc(·) and f dec(·) can produce the same result may be a factor affecting the performance of the present invention.
- That is, a more effective f dec(·) can be defined depending on the extent to which the defined f enc(·) and f dec(·) produce the same result. When this degree varies per arbitrary unit (e.g., per picture or per slice in the image encoding and decoding method), the f dec(·) expected or predicted to be more effective may be selected from among various predetermined estimation criteria.
- For example, for the (n−1)-th picture the adjacent pixel matching method may be expected or predicted to be the most effective f dec(·) for producing the same result as f enc(·), whereas for the n-th picture the boundary pixel matching method may be expected or predicted to be the most effective f dec(·) for producing the same result as f enc(·).
- In this case, the image encoding apparatus 300 may define such a condition in advance with the image decoding apparatus and use the corresponding f dec(·) as the estimation criterion; when no such condition is defined in advance with the image decoding apparatus, information on which f dec(·) is used may be transmitted to the image decoding apparatus per arbitrary unit.
- Although FIG. 4 illustrates the first motion vector estimator 410 and the second motion vector estimator 420 as being configured independently, they may be implemented as a single motion vector estimator (not shown) including both.
- The motion information encoder 430 generates motion information using the first motion vector output from the first motion vector estimator 410 and the second motion vector output from the second motion vector estimator 420, encodes it using a predetermined coding scheme such as entropy coding, and stores or outputs it.
- the motion information encoder 430 may use various methods without departing from the essential features of the present invention.
- For example, the motion information encoder 430 may generate and encode the difference between the first motion vector and the second motion vector as the motion information, as shown in Equation 7 below, or may generate and encode only the second motion vector as the motion information.
- In addition, when entropy-encoding the generated motion information (the difference between the first motion vector and the second motion vector, or the second motion vector itself), the motion information encoder 430 may perform conditional entropy encoding using different variable length coding (VLC) tables selected on the basis of the first motion vector. That is, the first motion vector is analyzed to identify its characteristics (e.g., magnitude and direction), and the characteristics of the image are inferred from the identified characteristics of the first motion vector so that a variable length coding table suited to those characteristics can be used.
- The conditional entropy encoding may be implemented as in the following example. If magnitude is used as the distinguishing characteristic among the various characteristics of the first motion vector, the motion information may be encoded by selectively using one of a plurality of variable length coding tables according to the magnitude of the first motion vector. If a first boundary value and a second boundary value are set in advance as criteria for classifying the magnitude of the first motion vector, the plurality of usable variable length coding tables may be a first to a third variable length coding table.
- In this case, the motion information may be encoded using the first variable length coding table when the absolute value of the magnitude of the first motion vector is less than the first boundary value, using the second variable length coding table when it is greater than or equal to the first boundary value and less than the second boundary value, and using the third variable length coding table when it is greater than or equal to the second boundary value.
- Here, the plurality of usable variable length coding tables may be tables whose codes can efficiently encode motion vectors of the corresponding magnitudes, and they may be determined empirically or experimentally.
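- A minimal sketch of this table-switching rule follows. The two boundary values, the magnitude measure, and the three toy code tables are invented for illustration; they are not values specified by the present description.

```python
# Conditional selection among variable length coding tables based on the
# magnitude of the first motion vector (hypothetical boundaries and tables).
FIRST_BOUNDARY, SECOND_BOUNDARY = 2, 8

# Toy prefix-free VLC tables mapping a differential component value to a code string.
VLC_TABLE_1 = {0: "1", 1: "010", -1: "011", 2: "00100", -2: "00101"}   # small motion
VLC_TABLE_2 = {0: "10", 1: "110", -1: "111", 2: "0100", -2: "0101"}    # mid-range motion
VLC_TABLE_3 = {0: "100", 1: "101", -1: "110", 2: "1110", -2: "1111"}   # larger motion

def select_vlc_table(first_mv):
    magnitude = max(abs(first_mv[0]), abs(first_mv[1]))  # one possible magnitude measure
    if magnitude < FIRST_BOUNDARY:
        return VLC_TABLE_1
    if magnitude < SECOND_BOUNDARY:
        return VLC_TABLE_2
    return VLC_TABLE_3

def encode_motion_info(dmv, first_mv):
    table = select_vlc_table(first_mv)
    return table[dmv[0]] + table[dmv[1]]

print(encode_motion_info((0, -1), first_mv=(1, 0)))  # uses VLC_TABLE_1 -> "1011"
```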
- FIG. 7 is a flowchart illustrating a motion vector encoding method according to an embodiment of the present invention.
- Referring to FIG. 7, the motion vector encoding apparatus, that is, the motion vector encoder 322 illustrated in FIG. 3, estimates the first motion vector of the current block according to the first estimation criterion predefined with the image decoding apparatus (S710). That is, the motion vector encoder 322 estimates, from among the motion vectors included in the search range for first motion vector estimation, the first motion vector that can also be estimated by the image decoding apparatus, according to the predetermined first estimation criterion that is shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus.
- In addition, the motion vector encoder 322 estimates the second motion vector of the current block according to a second estimation criterion that is not predefined with the image decoding apparatus (S720). That is, the motion vector encoder 322 estimates, from among the motion vectors included in the search range for second motion vector estimation, the second motion vector that can be estimated only by the image encoding apparatus 300, according to the predetermined second estimation criterion.
- the motion vector encoder 322 generates and encodes motion information of the current block by using the first motion vector and the second motion vector (S730). That is, the motion vector encoder 322 may generate, encode, and store (or output) motion information by using the first motion vector estimated in step S710 and the second motion vector estimated in step S720.
- the process of estimating the first motion vector and the second motion vector and generating and encoding the motion information is the same as described above with reference to FIG. 4, and thus a detailed description thereof will be omitted.
- Although FIG. 7 illustrates that step S720 is performed after step S710, this is only one embodiment of the present invention; the order may be changed without departing from the essential characteristics of the present invention, and step S710 may be performed after step S720 according to the application and purpose to which the present invention is applied.
- FIG. 8 is a flowchart illustrating an image encoding method according to an embodiment of the present invention.
- The image encoding apparatus 300 divides the image into block units such as macroblocks or subblocks of a macroblock, determines an optimal encoding mode for each block from among various encoding modes such as the inter prediction mode and the intra prediction mode, and predicts and encodes the current block to be encoded according to the determined encoding mode.
- To this end, the image encoding apparatus 300 estimates the first motion vector and the second motion vector of the current block (S810), generates and encodes motion information using the estimated first motion vector and second motion vector (S820), and generates a prediction block of the current block by compensating for the motion of the current block using the motion information (S830).
- steps S810 and S820 may be performed as described above with reference to FIG. 7.
- In addition, the image encoding apparatus 300 transforms and quantizes the residual block generated by subtracting the prediction block from the current block (S840), encodes the quantized residual block (S850), and generates and outputs encoded data including the encoded residual block and the encoded motion information (S860). In this case, the image encoding apparatus 300 may generate encoded data further including a predetermined block mode.
- the process of generating the residual block by using the prediction block, transforming, quantizing, and encoding the residual block is the same as described above with reference to FIG. 3, and thus a detailed description thereof will be omitted.
- Although the residual block is described as being transformed and quantized in step S840, the transform and quantization may both be skipped or only one of them may be selectively performed; in that case, in step S850 the residual block that has undergone neither process, or only one of them, is encoded accordingly.
- The image encoded into encoded data by the image encoding apparatus 300 may be transmitted, in real time or non-real time, to the image decoding apparatus to be described later through a wired or wireless communication network such as the Internet, a short-range wireless communication network, a wireless LAN, a WiBro network, or a mobile communication network, or through a communication interface such as a universal serial bus (USB), and may then be decoded by the image decoding apparatus to reconstruct and reproduce the image.
- FIG. 9 is a block diagram schematically illustrating a configuration of an image decoding apparatus according to an embodiment of the present invention.
- The image decoding apparatus 900 according to an embodiment of the present invention may include an information extractor 910, a first decoder 920, a second decoder 930, a predictor 940, an adder 950, and a reference picture storage unit 960.
- The image decoding apparatus 900 may be a personal computer (PC), notebook computer, personal digital assistant (PDA), portable multimedia player (PMP), PlayStation Portable (PSP), or the like, and refers to various devices each including a communication device such as a communication modem for communicating with various devices or a wired/wireless communication network, a memory for storing various programs and data for decoding an image, and a microprocessor for executing the programs to perform operations and control.
- The information extractor 910 receives the encoded data, extracts information about the block mode (for example, an identifier), and outputs the extracted block mode information. When the block mode is a mode in which the motion vector is skipped (for example, the intra 16x16 mode or the intra 4x4 mode), the information extractor 910 extracts and outputs the encoded residual block from the encoded data without extracting motion information. When the block mode is not a mode in which the motion vector is skipped (for example, the inter 16x16 mode, the inter 4x4 mode, or the P8x8 mode), the information extractor 910 extracts and outputs the encoded motion information and the encoded residual block from the encoded data. In this case, the information extractor 910 may additionally extract and output the index information of the reference picture from the encoded data.
- The first decoder 920 decodes the encoded residual block output from the information extractor 910. That is, the first decoder 920 decodes the binary data of the residual block encoded using an entropy encoding technique to generate a quantized frequency coefficient sequence, and inversely scans it by various scan methods such as an inverse zigzag scan to generate a residual block having quantized frequency coefficients. If the binary data of the encoded residual block is binary data in which frequency coefficients were encoded, the residual block decoded by the first decoder 920 will be a residual block having frequency coefficients; if it is binary data in which the residual signal was encoded, the decoded residual block will be a residual block having the residual signal. Meanwhile, depending on the configuration, the entropy decoding process described as a function of the first decoder 920 may instead be implemented in the information extractor 910.
- The second decoder 930 inverse-quantizes and inverse-transforms the residual block decoded by the first decoder 920 to reconstruct the residual block. That is, the second decoder 930 inverse-quantizes the quantized frequency coefficients of the decoded residual block output from the first decoder 920 and inverse-transforms the inverse-quantized frequency coefficients to reconstruct the residual block having the residual signal. If the residual block decoded by the first decoder 920 has quantized frequency coefficients, the second decoder 930 performs both inverse quantization and inverse transform; if the decoded residual block has (unquantized) frequency coefficients, only the inverse transform may be performed without inverse quantization. If neither process is required, the second decoder 930 may be omitted from the image decoding apparatus 900. Meanwhile, although FIG. 9 illustrates the first decoder 920 and the second decoder 930 as being configured independently, they may be configured as one decoder (not shown) incorporating both functions.
- the prediction unit 940 predicts the current block and generates a prediction block.
- the predictor 940 may include a motion vector decoder 942 and a motion compensator 944.
- The motion vector decoder 942 estimates the first motion vector, in block units corresponding to the block mode indicated by the block mode information output from the information extractor 910, with reference to the reference picture stored in the reference picture storage 960.
- The motion vector decoder 942 also reconstructs the motion information by decoding the encoded motion information output from the information extractor 910, and reconstructs the second motion vector, that is, the motion vector of the current block, using the reconstructed motion information and the estimated first motion vector. The reconstructed second motion vector thus becomes the motion vector of the current block.
- The motion compensator 944 generates the prediction block of the current block by taking, from the reference picture stored in the reference picture storage 960, the reference block indicated by the reconstructed second motion vector, that is, the motion vector of the current block.
- When index information of the reference picture is extracted from the encoded data and output by the information extractor 910, the motion vector decoder 942 may use, from among the reference pictures stored in the reference picture storage unit 960, the reference picture identified by the index information.
- the adder 950 reconstructs the current block by adding the reconstructed residual block output from the second decoder 930 to the prediction block predicted and output by the predictor 940.
- the reconstructed current block is accumulated in picture units and output as a reconstructed picture or stored in the reference picture storage unit 960 as a reference picture, and may be used to predict the next block.
- When based on the H.264/AVC standard, the image decoding apparatus 900 may further include an intra prediction unit for intra prediction and a deblocking filter unit for deblocking-filtering the reconstructed current block. In addition, the second decoder 930 may additionally perform inverse transform and inverse quantization operations on a specific picture (e.g., an intra picture) based on the H.264/AVC standard.
- FIG. 10 is a block diagram schematically illustrating the configuration of a motion vector decoding apparatus according to an embodiment of the present invention.
- The motion vector decoding apparatus according to an embodiment of the present invention may be implemented as the motion vector decoder 942 in the image decoding apparatus 900 described above with reference to FIG. 9, and is hereinafter referred to as the motion vector decoder 942 for convenience of description.
- The motion vector decoder 942 may include a motion vector estimator 1010, a motion information decoder 1020, and a motion vector reconstructor 1030.
- The motion vector estimator 1010 estimates the first motion vector, from among the motion vectors included in the search range for first motion vector estimation, according to the predetermined first estimation criterion that is shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus 900.
- The predetermined first estimation criterion may be defined by various methods, such as the adjacent pixel matching method described above with reference to FIGS. 4 to 6, the median calculation method, or the boundary pixel matching method, on the premise that it is shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus 900.
- the motion information decoder 1020 restores motion information by decoding the encoded motion information output from the information extractor 910 using various encoding techniques such as entropy encoding and conditional entropy encoding.
- the conditional entropy encoding is the same as the conditional entropy encoding described above with reference to FIG. 4, and since decoding is performed using different variable length encoding tables based on the first motion vector, detailed description thereof will be omitted.
- The motion information decoder 1020 may be implemented independently to perform the functions described above, but may be selectively omitted according to the implementation method or need; in that case, its function may be integrated into the information extractor 910.
- the motion vector reconstructor 1030 reconstructs the second motion vector using the first motion vector output from the motion vector estimator 1010 and the motion information output from the motion information decoder 1020.
- For example, the motion vector reconstructor 1030 may reconstruct the second motion vector by substituting the first motion vector and the motion information into Equation 8, or may use the reconstructed motion information itself as the second motion vector.
- However, the present invention is not limited thereto; on the premise that the method is shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus 900, the second motion vector may be reconstructed in various ways without departing from the essential characteristics of the present invention. The reconstructed second motion vector becomes the motion vector of the current block.
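- A sketch of the decoder-side reconstruction is shown below. It assumes the motion information was encoded as the difference from the first motion vector (the Equation 7 / Equation 8 style); the numeric values in the usage lines are illustrative only.

```python
# Decoder-side motion vector reconstruction: estimate the first motion vector
# with the criterion shared with the encoder, then add the decoded difference.

def reconstruct_second_mv(decoded_motion_info, first_mv):
    """MV2 = MV1 + decoded motion information (assuming a differential scheme, Equation 8)."""
    return (first_mv[0] + decoded_motion_info[0],
            first_mv[1] + decoded_motion_info[1])

# Usage sketch (values are illustrative):
first_mv = (2, 0)      # would come from the shared first estimation criterion
motion_info = (0, -1)  # decoded from the bitstream (e.g., by conditional VLC decoding)
print(reconstruct_second_mv(motion_info, first_mv))  # -> (2, -1)
```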
- FIG. 11 is a flowchart illustrating a motion vector decoding method according to an embodiment of the present invention.
- Referring to FIG. 11, the motion vector decoding apparatus estimates, from among the motion vectors included in the search range for first motion vector estimation, the first motion vector according to the predetermined first estimation criterion shared (or defined) in advance between the image encoding apparatus 300 and the image decoding apparatus 900, decodes the encoded motion information to reconstruct the motion information, and then reconstructs the second motion vector, that is, the motion vector of the current block, using the reconstructed motion information and the estimated first motion vector.
- FIG. 12 is a flowchart illustrating an image decoding method according to an embodiment of the present invention.
- The image decoding apparatus 900, which receives and stores a bitstream of encoded data of an image through a wired or wireless communication network or a cable, decodes and reconstructs the image in order to reproduce it according to a user's selection or an algorithm of another program being executed.
- the image decoding apparatus 900 extracts the encoded residual block and the encoded motion information from the encoded data (S1210), and decodes the encoded residual block to restore the residual block (S1220).
- In addition, the image decoding apparatus 900 estimates the first motion vector of the current block according to the first estimation criterion predefined with the image encoding apparatus 300 (S1230), decodes the encoded motion information to reconstruct the motion information (S1240), and reconstructs the second motion vector using the reconstructed motion information and the estimated first motion vector (S1250). The reconstructed second motion vector becomes the motion vector of the current block.
- The image decoding apparatus 900 then generates the prediction block of the current block by compensating for the motion of the current block in the reference picture using the reconstructed second motion vector (S1260), and reconstructs the current block by adding the reconstructed residual block and the prediction block (S1270).
- the reconstructed current block is accumulated and stored in picture units and output as a reconstructed picture or stored as a reference picture.
- As described above, according to an embodiment of the present invention, the image encoding apparatus 300 or the motion vector encoding apparatus estimates the first motion vector according to a first estimation criterion shared or defined in advance with the image decoding apparatus 900, estimates the second motion vector according to a second estimation criterion that is not shared or defined in advance with the image decoding apparatus 900 and can therefore be evaluated only by the image encoding apparatus 300 (in this case, the estimated second motion vector may be an optimal motion vector of the current block and becomes the motion vector of the current block), and generates and encodes motion information using the first motion vector and the second motion vector.
- The image decoding apparatus 900 or the motion vector decoding apparatus estimates the first motion vector according to the first estimation criterion shared or defined in advance with the image encoding apparatus 300, decodes the motion information, and reconstructs the second motion vector, that is, the motion vector of the current block, using the reconstructed motion information and the first motion vector.
- In another embodiment of the present invention, instead of estimating only two motion vectors (the first motion vector and the second motion vector) and encoding the motion information using them as in the embodiment described above, a plurality of motion vectors are estimated: one or more first motion vectors are estimated together with a single optimal motion vector, and the motion information is encoded using them. Likewise, when decoding the motion vector, one or more first motion vectors are estimated, and the motion vector of the current block, which is the single optimal motion vector, is reconstructed using the reconstructed motion information and the one or more estimated motion vectors.
- the motion vector encoding apparatus in this case is an apparatus for encoding a motion vector that includes a motion vector estimator, which estimates a plurality of motion vectors, estimating one of the plurality of motion vectors as the motion vector of the current block and estimating the remaining motion vectors of the plurality according to one or more estimation criteria predefined with the image decoding apparatus, and a motion vector encoder, which encodes motion information generated using the plurality of motion vectors.
- the motion vector estimator may estimate the remaining motion vectors using one or more of an adjacent pixel matching method, an intermediate value calculation method, and a boundary pixel matching method as the one or more estimation criteria, and may estimate the one motion vector using a rate-distortion optimization method.
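As an illustration of two of these estimation criteria, the sketch below (an assumption-laden example, not the patent's definition) computes a boundary pixel matching cost by comparing the one-pixel border above and to the left of the current block with the border at the displaced position in the reference picture, and an intermediate (median) value over neighboring motion vectors.

```python
import numpy as np

def boundary_pixel_matching_cost(reconstructed, reference, block_pos, mv, n):
    """Assumed boundary pixel matching cost for a candidate motion vector mv."""
    y, x = block_pos
    dy, dx = mv
    top_cur = reconstructed[y - 1, x:x + n]        # row above the current block
    left_cur = reconstructed[y:y + n, x - 1]       # column left of the current block
    top_ref = reference[y + dy - 1, x + dx:x + dx + n]
    left_ref = reference[y + dy:y + dy + n, x + dx - 1]
    return (np.abs(top_cur.astype(int) - top_ref.astype(int)).sum()
            + np.abs(left_cur.astype(int) - left_ref.astype(int)).sum())

def intermediate_value_mv(neighbor_mvs):
    """Intermediate (median) value criterion over neighboring motion vectors."""
    return tuple(int(v) for v in np.median(np.array(neighbor_mvs), axis=0))
```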
- the motion information encoder may generate the difference between the one motion vector and the remaining motion vectors as the motion information, and may encode the motion information using different variable length coding tables selected based on the remaining motion vectors.
- the motion information encoder may use a first variable length coding table when the absolute value of the magnitude of the remaining motion vectors is less than a preset first boundary value, a second variable length coding table when that absolute value is greater than or equal to the first boundary value and less than a preset second boundary value, and a third variable length coding table when that absolute value is greater than or equal to the second boundary value.
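The table selection can be sketched as a simple threshold test; the magnitude measure (maximum absolute component) and the boundary values t1 and t2 below are assumptions for illustration only.

```python
def select_vlc_table(remaining_mv, t1=16, t2=64):
    """Choose one of three variable length coding tables from the magnitude
    of the decoder-estimable motion vector (thresholds are placeholders)."""
    magnitude = max(abs(remaining_mv[0]), abs(remaining_mv[1]))
    if magnitude < t1:
        return "table_1"   # tuned for small differences
    elif magnitude < t2:
        return "table_2"   # tuned for medium differences
    return "table_3"       # tuned for large differences
```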
- a motion vector encoding method according to another embodiment includes a motion vector estimation step of estimating a plurality of motion vectors, in which one of the plurality of motion vectors is estimated as the motion vector of the current block and the remaining motion vectors of the plurality are estimated according to one or more estimation criteria predefined with the image decoding apparatus, and a motion information encoding step of encoding motion information generated using the plurality of motion vectors.
- An image encoding apparatus according to another embodiment may include a predictor that encodes motion information generated by estimating a plurality of motion vectors and generates a prediction block of the current block using one of the plurality of motion vectors as the motion vector of the current block, a subtractor that generates a residual block by subtracting the prediction block from the current block, an encoder that encodes the residual block, and an encoded data generator that generates and outputs encoded data including the encoded motion information and the encoded residual block.
- the predictor may estimate the one motion vector of the plurality of motion vectors according to an estimation criterion that is not predefined with the image decoding apparatus, and may estimate the remaining motion vectors of the plurality according to estimation criteria predefined with the image decoding apparatus.
- One motion vector of the plurality of motion vectors may be a motion vector that cannot be estimated by the image decoding apparatus, and the other motion vectors may be motion vectors that can be estimated by the image decoding apparatus.
- An image encoding method according to another embodiment includes a prediction step of encoding motion information generated by estimating a plurality of motion vectors and generating a prediction block of the current block using one of the plurality of motion vectors as the motion vector of the current block; a subtraction step of generating a residual block by subtracting the prediction block from the current block; an encoding step of encoding the residual block; and an encoded data generation step of generating and outputting encoded data including the encoded motion information and the encoded residual block.
- a motion vector decoding apparatus according to another embodiment includes a motion vector estimator that estimates one or more motion vectors according to one or more estimation criteria predefined with the image encoding apparatus, a motion information reconstruction unit that decodes and reconstructs the encoded motion information, and a motion vector reconstruction unit that reconstructs the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors.
- the motion vector estimator may use one or more of an adjacent pixel matching method, an intermediate value calculation method, and a boundary pixel matching method as one or more estimation criteria.
- the motion information reconstruction unit may decode the motion information using different variable length coding tables selected based on the one or more motion vectors: a first variable length coding table when the absolute value of the magnitude of the one or more motion vectors is less than a preset first boundary value, a second variable length coding table when that absolute value is greater than or equal to the first boundary value and less than a preset second boundary value, and a third variable length coding table when that absolute value is greater than or equal to the second boundary value. The motion vector reconstruction unit may reconstruct the sum of the reconstructed motion information and the estimated one or more motion vectors as the motion vector of the current block.
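A hedged sketch of this decoder-side behavior follows; `vlc_tables` and its `decode` method are hypothetical placeholders, and the thresholds and magnitude measure mirror the assumptions in the encoder sketch above. The key point is that the table choice depends only on the motion vector the decoder can estimate by itself, so no table index needs to be transmitted.

```python
def reconstruct_motion_vector(encoded_motion_info, estimated_mv,
                              vlc_tables, t1=16, t2=64):
    """Pick the same table the encoder picked, decode the difference, add it back."""
    magnitude = max(abs(estimated_mv[0]), abs(estimated_mv[1]))
    if magnitude < t1:
        table = vlc_tables["table_1"]
    elif magnitude < t2:
        table = vlc_tables["table_2"]
    else:
        table = vlc_tables["table_3"]

    mvd = table.decode(encoded_motion_info)  # reconstructed motion information
    return (estimated_mv[0] + mvd[0], estimated_mv[1] + mvd[1])
```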
- a motion vector decoding method according to another embodiment includes a motion vector estimation step of estimating one or more motion vectors according to one or more estimation criteria predefined with the image encoding apparatus, a motion information reconstruction step of decoding and reconstructing the encoded motion information, and a motion vector reconstruction step of reconstructing the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors.
- An image decoding apparatus according to another embodiment includes an information extractor that extracts an encoded residual block and encoded motion information from the encoded data, a decoder that decodes and reconstructs the encoded residual block, a predictor that estimates one or more motion vectors according to one or more estimation criteria predefined with the image encoding apparatus, decodes and reconstructs the encoded motion information, reconstructs the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors, and generates a prediction block of the current block using the reconstructed motion vector, and an adder that reconstructs the current block by adding the reconstructed residual block and the prediction block.
- an image decoding method according to another embodiment includes an information extraction step of extracting an encoded residual block and encoded motion information from the encoded data, a decoding step of decoding and reconstructing the encoded residual block, a prediction step of estimating one or more motion vectors according to one or more estimation criteria predefined with the image encoding apparatus, decoding and reconstructing the encoded motion information, reconstructing the motion vector of the current block using the reconstructed motion information and the estimated one or more motion vectors, and generating a prediction block of the current block using the reconstructed motion vector, and an addition step of reconstructing the current block by adding the reconstructed residual block and the prediction block.
- as described above, according to an embodiment of the present invention, the optimal motion vector, which can be estimated only by the image encoding apparatus 300, is encoded as the motion vector of the current block based on one or more motion vectors that can be estimated by both the image encoding apparatus and the image decoding apparatus. The motion vector can therefore be encoded using a more accurate estimate, yet no additional information on which motion vector is used needs to be encoded, so the amount of bits for encoding the motion vector is reduced and the compression efficiency is improved.
- in addition, since both the image encoding apparatus and the image decoding apparatus share or define in advance the estimation criteria with which the one or more motion vectors can be estimated, no additional information for estimating the one or more motion vectors needs to be encoded, which further reduces the amount of bits for encoding the motion vector and improves the compression efficiency.
- moreover, the image decoding apparatus can identify the characteristics of the image based on one or more motion vectors (for example, the first motion vector in one embodiment) that it can estimate by itself, and can select, according to the identified image characteristics, the variable length coding table capable of encoding the motion vector most efficiently.
- in contrast, in the conventional method, the difference between the motion vector and its predictor is encoded using a predetermined, fixed variable length coding table without considering the characteristics of the image, so it cannot adapt to various image characteristics.
- as described above, the present invention is applicable to a method and an apparatus for encoding or decoding an image, and is a highly useful invention in that it reduces the amount of bits for encoding a motion vector while using a more accurately estimated motion vector, thereby improving compression efficiency.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
Claims (20)
- An apparatus for encoding a motion vector, comprising: a motion vector estimator that estimates a plurality of motion vectors, estimating one of the plurality of motion vectors as the motion vector of a current block and estimating the remaining motion vectors of the plurality of motion vectors according to one or more estimation criteria predefined with an image decoding apparatus; and a motion vector encoder that encodes motion information generated using the plurality of motion vectors.
- The motion vector encoding apparatus of claim 1, wherein the motion vector estimator uses one or more of an adjacent pixel matching method, an intermediate value calculation method, and a boundary pixel matching method as the one or more estimation criteria.
- The motion vector encoding apparatus of claim 1, wherein the motion vector estimator estimates the one motion vector using a rate-distortion optimization method.
- The motion vector encoding apparatus of claim 1, wherein the motion information encoder generates the difference between the one motion vector and the remaining motion vectors as the motion information.
- The motion vector encoding apparatus of claim 1, wherein the motion information encoder encodes the motion information using different variable length coding tables based on the remaining motion vectors.
- The motion vector encoding apparatus of claim 5, wherein the motion information encoder uses a first variable length coding table when the absolute value of the magnitude of the remaining motion vectors is less than a preset first boundary value, uses a second variable length coding table when the absolute value of the magnitude of the remaining motion vectors is greater than or equal to the first boundary value and less than a preset second boundary value, and uses a third variable length coding table when the absolute value of the magnitude of the remaining motion vectors is greater than or equal to the second boundary value.
- A method of encoding a motion vector, comprising: a motion vector estimation step of estimating a plurality of motion vectors, wherein one of the plurality of motion vectors is estimated as the motion vector of a current block and the remaining motion vectors of the plurality of motion vectors are estimated according to one or more estimation criteria predefined with an image decoding apparatus; and a motion information encoding step of encoding motion information generated using the plurality of motion vectors.
- An apparatus for encoding an image, comprising: a predictor that encodes motion information generated by estimating a plurality of motion vectors and generates a prediction block of a current block using one of the plurality of motion vectors as the motion vector of the current block; a subtractor that generates a residual block by subtracting the prediction block from the current block; an encoder that encodes the residual block; and an encoded data generator that generates and outputs encoded data including the encoded motion information and the encoded residual block.
- The image encoding apparatus of claim 8, wherein the predictor estimates the one motion vector of the plurality of motion vectors according to an estimation criterion that is not predefined with an image decoding apparatus, and estimates the remaining motion vectors of the plurality of motion vectors according to estimation criteria predefined with the image decoding apparatus.
- The image encoding apparatus of claim 8, wherein the one motion vector of the plurality of motion vectors is a motion vector that cannot be estimated by an image decoding apparatus.
- The image encoding apparatus of claim 8, wherein the remaining motion vectors of the plurality of motion vectors are motion vectors that can be estimated by an image decoding apparatus.
- A method of encoding an image, comprising: a prediction step of encoding motion information generated by estimating a plurality of motion vectors and generating a prediction block of a current block using one of the plurality of motion vectors as the motion vector of the current block; a subtraction step of generating a residual block by subtracting the prediction block from the current block; an encoding step of encoding the residual block; and an encoded data generation step of generating and outputting encoded data including the encoded motion information and the encoded residual block.
- An apparatus for decoding a motion vector, comprising: a motion vector estimator that estimates one or more motion vectors according to one or more estimation criteria predefined with an image encoding apparatus; a motion information reconstruction unit that decodes and reconstructs encoded motion information; and a motion vector reconstruction unit that reconstructs the motion vector of a current block using the reconstructed motion information and the estimated one or more motion vectors.
- The motion vector decoding apparatus of claim 13, wherein the motion vector estimator uses one or more of an adjacent pixel matching method, an intermediate value calculation method, and a boundary pixel matching method as the one or more estimation criteria.
- The motion vector decoding apparatus of claim 13, wherein the motion information reconstruction unit decodes the motion information using different variable length coding tables based on the one or more motion vectors.
- The motion vector decoding apparatus of claim 15, wherein the motion information reconstruction unit uses a first variable length coding table when the absolute value of the magnitude of the one or more motion vectors is less than a preset first boundary value, uses a second variable length coding table when the absolute value of the magnitude of the one or more motion vectors is greater than or equal to the first boundary value and less than a preset second boundary value, and uses a third variable length coding table when the absolute value of the magnitude of the one or more motion vectors is greater than or equal to the second boundary value.
- The motion vector decoding apparatus of claim 13, wherein the motion vector reconstruction unit reconstructs the sum of the reconstructed motion information and the estimated one or more motion vectors as the motion vector of the current block.
- A method of decoding a motion vector, comprising: a motion vector estimation step of estimating one or more motion vectors according to one or more estimation criteria predefined with an image encoding apparatus; a motion information reconstruction step of decoding and reconstructing encoded motion information; and a motion vector reconstruction step of reconstructing the motion vector of a current block using the reconstructed motion information and the estimated one or more motion vectors.
- An apparatus for decoding an image, comprising: an information extractor that extracts an encoded residual block and encoded motion information from encoded data; a decoder that decodes and reconstructs the encoded residual block; a predictor that estimates one or more motion vectors according to one or more estimation criteria predefined with an image encoding apparatus, decodes and reconstructs the encoded motion information, reconstructs the motion vector of a current block using the reconstructed motion information and the estimated one or more motion vectors, and generates a prediction block of the current block using the reconstructed motion vector of the current block; and an adder that reconstructs the current block by adding the reconstructed residual block and the prediction block.
- A method of decoding an image, comprising: an information extraction step of extracting an encoded residual block and encoded motion information from encoded data; a decoding step of decoding and reconstructing the encoded residual block; a prediction step of estimating one or more motion vectors according to one or more estimation criteria predefined with an image encoding apparatus, decoding and reconstructing the encoded motion information, reconstructing the motion vector of a current block using the reconstructed motion information and the estimated one or more motion vectors, and generating a prediction block of the current block using the reconstructed motion vector of the current block; and an addition step of reconstructing the current block by adding the reconstructed residual block and the prediction block.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/121,895 US8811487B2 (en) | 2008-09-30 | 2009-09-28 | Method and apparatus for inter prediction decoding with selective use of inverse quantization and inverse transform |
US14/302,738 US9137532B2 (en) | 2008-09-30 | 2014-06-12 | Method and an apparatus for inter prediction decoding with selective use of inverse quantization and inverse transform |
US14/693,787 US9264732B2 (en) | 2008-09-30 | 2015-04-22 | Method and an apparatus for decoding a video |
US14/693,778 US9326002B2 (en) | 2008-09-30 | 2015-04-22 | Method and an apparatus for decoding a video |
US14/693,761 US9264731B2 (en) | 2008-09-30 | 2015-04-22 | Method and an apparatus for decoding a video |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020080095871A KR101377660B1 (ko) | 2008-09-30 | 2008-09-30 | 복수 개의 움직임 벡터 추정을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
KR10-2008-0095871 | 2008-09-30 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/121,895 A-371-Of-International US8811487B2 (en) | 2008-09-30 | 2009-09-28 | Method and apparatus for inter prediction decoding with selective use of inverse quantization and inverse transform |
US14/302,738 Continuation US9137532B2 (en) | 2008-09-30 | 2014-06-12 | Method and an apparatus for inter prediction decoding with selective use of inverse quantization and inverse transform |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2010038961A2 true WO2010038961A2 (ko) | 2010-04-08 |
WO2010038961A3 WO2010038961A3 (ko) | 2010-06-24 |
Family
ID=42073994
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2009/005524 WO2010038961A2 (ko) | 2008-09-30 | 2009-09-28 | 복수 개의 움직임 벡터 추정을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (5) | US8811487B2 (ko) |
KR (1) | KR101377660B1 (ko) |
WO (1) | WO2010038961A2 (ko) |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040041865A (ko) * | 2002-11-12 | 2004-05-20 | 김경화 | 감기치료용 생약 조성물 |
KR101441903B1 (ko) * | 2008-10-16 | 2014-09-24 | 에스케이텔레콤 주식회사 | 참조 프레임 생성 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
KR101950419B1 (ko) | 2010-11-24 | 2019-02-21 | 벨로스 미디어 인터내셔널 리미티드 | 움직임 벡터 산출 방법, 화상 부호화 방법, 화상 복호 방법, 움직임 벡터 산출 장치 및 화상 부호화 복호 장치 |
KR101226497B1 (ko) * | 2010-12-28 | 2013-01-25 | 연세대학교 산학협력단 | 움직임 벡터 부호화 방법 및 장치 |
CN106851306B (zh) | 2011-01-12 | 2020-08-04 | 太阳专利托管公司 | 动态图像解码方法和动态图像解码装置 |
MX2013009864A (es) | 2011-03-03 | 2013-10-25 | Panasonic Corp | Metodo de codificacion de imagenes en movimiento, metodo de decodificacion de imagenes en movimiento, aparato de codificacion de imagenes en movimiento, aparato de decodificacion de imagenes en movimiento y aparato de codificacion y decodificacion de imagenes en movimiento. |
US9338458B2 (en) * | 2011-08-24 | 2016-05-10 | Mediatek Inc. | Video decoding apparatus and method for selectively bypassing processing of residual values and/or buffering of processed residual values |
GB2561487B (en) * | 2011-10-18 | 2019-01-02 | Kt Corp | Method for encoding image, method for decoding image, image encoder, and image decoder |
KR101542586B1 (ko) | 2011-10-19 | 2015-08-06 | 주식회사 케이티 | 영상 부호화/복호화 방법 및 그 장치 |
US9571833B2 (en) | 2011-11-04 | 2017-02-14 | Nokia Technologies Oy | Method for coding and an apparatus |
JP6168365B2 (ja) * | 2012-06-12 | 2017-07-26 | サン パテント トラスト | 動画像符号化方法、動画像復号化方法、動画像符号化装置および動画像復号化装置 |
TWI627857B (zh) | 2012-06-29 | 2018-06-21 | Sony Corp | Image processing device and method |
KR101527153B1 (ko) * | 2014-09-03 | 2015-06-10 | 에스케이텔레콤 주식회사 | 움직임정보 병합을 이용한 부호움직임정보생성/움직임정보복원 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 |
KR102365685B1 (ko) | 2015-01-05 | 2022-02-21 | 삼성전자주식회사 | 인코더의 작동 방법과 상기 인코더를 포함하는 장치들 |
US11153600B2 (en) * | 2016-02-08 | 2021-10-19 | Sharp Kabushiki Kaisha | Motion vector generation device, prediction image generation device, video decoding device, and video coding device |
WO2019001741A1 (en) * | 2017-06-30 | 2019-01-03 | Huawei Technologies Co., Ltd. | MOTION VECTOR REFINEMENT FOR MULTI-REFERENCE PREDICTION |
EP3648059B1 (en) * | 2018-10-29 | 2021-02-24 | Axis AB | Video processing device and method for determining motion metadata for an encoded video |
EP4118823A1 (en) * | 2020-03-12 | 2023-01-18 | InterDigital VC Holdings France | Method and apparatus for video encoding and decoding |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100275694B1 (ko) * | 1998-03-02 | 2000-12-15 | 윤덕용 | 실시간 동영상 부호화를 위한 초고속 움직임 벡터 추정방법 |
KR100364789B1 (ko) * | 2000-02-28 | 2002-12-16 | 엘지전자 주식회사 | 움직임 추정 방법 및 장치 |
KR20050042275A (ko) * | 2002-10-04 | 2005-05-06 | 엘지전자 주식회사 | 모션벡터 결정방법 |
KR100542445B1 (ko) * | 2005-06-30 | 2006-01-11 | 주식회사 휴맥스 | 동영상 부호화기에서의 움직임 벡터 추정 방법 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5416854A (en) * | 1990-07-31 | 1995-05-16 | Fujitsu Limited | Image data processing method and apparatus |
ES2431289T3 (es) * | 1993-03-24 | 2013-11-25 | Sony Corporation | Método de decodificación de señal de imagen y aparato asociado |
JP3944225B2 (ja) * | 2002-04-26 | 2007-07-11 | 株式会社エヌ・ティ・ティ・ドコモ | 画像符号化装置、画像復号装置、画像符号化方法、画像復号方法、画像符号化プログラム及び画像復号プログラム |
US8085850B2 (en) * | 2003-04-24 | 2011-12-27 | Zador Andrew M | Methods and apparatus for efficient encoding of image edges, motion, velocity, and detail |
KR100586100B1 (ko) * | 2003-05-12 | 2006-06-07 | 엘지전자 주식회사 | 동영상 코딩 방법 |
JP2005184042A (ja) * | 2003-12-15 | 2005-07-07 | Sony Corp | 画像復号装置及び画像復号方法並びに画像復号プログラム |
US7646814B2 (en) * | 2003-12-18 | 2010-01-12 | Lsi Corporation | Low complexity transcoding between videostreams using different entropy coding |
EP1592258B1 (en) * | 2004-04-30 | 2011-01-12 | Panasonic Corporation | Motion estimation employing adaptive spatial update vectors |
US7623682B2 (en) * | 2004-08-13 | 2009-11-24 | Samsung Electronics Co., Ltd. | Method and device for motion estimation and compensation for panorama image |
KR100588132B1 (ko) * | 2004-10-04 | 2006-06-09 | 삼성전자주식회사 | 디스플레이장치 |
TWI254571B (en) * | 2004-12-07 | 2006-05-01 | Sunplus Technology Co Ltd | Method for fast multiple reference frame motion estimation |
US20060120612A1 (en) * | 2004-12-08 | 2006-06-08 | Sharath Manjunath | Motion estimation techniques for video encoding |
US8929464B2 (en) * | 2005-03-25 | 2015-01-06 | Sharp Laboratories Of America, Inc. | Video entropy decoding with graceful degradation |
KR100772868B1 (ko) * | 2005-11-29 | 2007-11-02 | 삼성전자주식회사 | 복수 계층을 기반으로 하는 스케일러블 비디오 코딩 방법및 장치 |
US7944965B2 (en) * | 2005-12-19 | 2011-05-17 | Seiko Epson Corporation | Transform domain based distortion cost estimation |
KR20070069615A (ko) * | 2005-12-28 | 2007-07-03 | 삼성전자주식회사 | 움직임 추정장치 및 움직임 추정방법 |
US7751631B2 (en) * | 2006-12-22 | 2010-07-06 | Sony Corporation | Bypass using sum of absolute transformed differences value (SATD) in a video coding process |
KR101383540B1 (ko) * | 2007-01-03 | 2014-04-09 | 삼성전자주식회사 | 복수의 움직임 벡터 프리딕터들을 사용하여 움직임 벡터를추정하는 방법, 장치, 인코더, 디코더 및 복호화 방법 |
US8144778B2 (en) * | 2007-02-22 | 2012-03-27 | Sigma Designs, Inc. | Motion compensated frame rate conversion system and method |
KR101408698B1 (ko) * | 2007-07-31 | 2014-06-18 | 삼성전자주식회사 | 가중치 예측을 이용한 영상 부호화, 복호화 방법 및 장치 |
JP5044518B2 (ja) * | 2008-09-17 | 2012-10-10 | 株式会社東芝 | 画像処理装置及びコンピュータ端末 |
-
2008
- 2008-09-30 KR KR1020080095871A patent/KR101377660B1/ko active IP Right Grant
-
2009
- 2009-09-28 US US13/121,895 patent/US8811487B2/en active Active
- 2009-09-28 WO PCT/KR2009/005524 patent/WO2010038961A2/ko active Application Filing
-
2014
- 2014-06-12 US US14/302,738 patent/US9137532B2/en active Active
-
2015
- 2015-04-22 US US14/693,787 patent/US9264732B2/en active Active
- 2015-04-22 US US14/693,778 patent/US9326002B2/en active Active
- 2015-04-22 US US14/693,761 patent/US9264731B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100275694B1 (ko) * | 1998-03-02 | 2000-12-15 | 윤덕용 | 실시간 동영상 부호화를 위한 초고속 움직임 벡터 추정방법 |
KR100364789B1 (ko) * | 2000-02-28 | 2002-12-16 | 엘지전자 주식회사 | 움직임 추정 방법 및 장치 |
KR20050042275A (ko) * | 2002-10-04 | 2005-05-06 | 엘지전자 주식회사 | 모션벡터 결정방법 |
KR100542445B1 (ko) * | 2005-06-30 | 2006-01-11 | 주식회사 휴맥스 | 동영상 부호화기에서의 움직임 벡터 추정 방법 |
Also Published As
Publication number | Publication date |
---|---|
KR20100036583A (ko) | 2010-04-08 |
US9264731B2 (en) | 2016-02-16 |
US20140294083A1 (en) | 2014-10-02 |
WO2010038961A3 (ko) | 2010-06-24 |
US20150229954A1 (en) | 2015-08-13 |
US8811487B2 (en) | 2014-08-19 |
KR101377660B1 (ko) | 2014-03-26 |
US20150229937A1 (en) | 2015-08-13 |
US9326002B2 (en) | 2016-04-26 |
US9137532B2 (en) | 2015-09-15 |
US20150229938A1 (en) | 2015-08-13 |
US20110182362A1 (en) | 2011-07-28 |
US9264732B2 (en) | 2016-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010038961A2 (ko) | 복수 개의 움직임 벡터 추정을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2011031030A2 (ko) | 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2010044563A2 (ko) | 복수 개의 참조 픽처의 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 장치 및 방법 | |
WO2010050706A2 (ko) | 움직임 벡터 부호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2013002549A2 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2013109039A1 (ko) | 가중치예측을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2011031044A2 (ko) | 고해상도 동영상의 부호화/복호화 방법 및 장치 | |
WO2013070006A1 (ko) | 스킵모드를 이용한 동영상 부호화 및 복호화 방법 및 장치 | |
WO2010039015A2 (ko) | 이산 여현 변환/이산 정현 변환을 선택적으로 이용하는 부호화/복호화 장치 및 방법 | |
WO2010027182A2 (ko) | 서브블록 내 임의 화소를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2011068331A2 (ko) | 비디오 인코딩 장치 및 그 인코딩 방법, 비디오 디코딩 장치 및 그 디코딩 방법, 및 거기에 이용되는 방향적 인트라 예측방법 | |
WO2013002550A2 (ko) | 고속 코딩 단위(Coding Unit) 모드 결정을 통한 부호화/복호화 방법 및 장치 | |
WO2012011672A2 (ko) | 확장된 스킵모드를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2012026794A2 (ko) | 인트라 예측을 이용한 부호화 및 복호화 장치와 방법 | |
WO2011111954A2 (ko) | 움직임 벡터 해상도 조합을 이용한 움직임 벡터 부호화/복호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2012046979A2 (ko) | 주파수 마스크 테이블을 이용한 주파수변환 블록 부호화 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2010044569A2 (ko) | 참조 프레임 생성 방법 및 장치와 그를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2012015275A2 (ko) | 블록 분할예측을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2013069996A1 (ko) | 변환을 이용한 주파수 도메인 상의 적응적 루프 필터를 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2011037337A2 (ko) | 저주파수 성분을 고려한 영상 부호화/복호화 방법 및 장치 | |
WO2012033344A2 (ko) | 효과적인 화면내 예측모드 집합 선택을 이용한 영상 부호화/복호화 방법 및 장치 | |
WO2011021910A2 (ko) | 인트라 예측 부호화/복호화 방법 및 장치 | |
WO2011108879A2 (ko) | 영상 부호화 장치, 그 영상 부호화 방법, 영상 복호화 장치 및 그 영상 복호화 방법 | |
WO2012021040A2 (ko) | 필터링모드 생략가능한 영상 부호화/복호화 방법 및 장치 | |
WO2010044559A2 (ko) | 동영상 부호화/복호화 장치 및 그를 위한 가변 단위의 적응적 중첩 블록 움직임 보상 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09817971 Country of ref document: EP Kind code of ref document: A2 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13121895 Country of ref document: US |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/07/2011) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 09817971 Country of ref document: EP Kind code of ref document: A2 |