MX2012011646A - Method and apparatus for performing interpolation based on transform and inverse transform. - Google Patents

Method and apparatus for performing interpolation based on transform and inverse transform.

Info

Publication number
MX2012011646A
MX2012011646A
Authority
MX
Mexico
Prior art keywords
interpolation
filter
pixel
unit
image
Prior art date
Application number
MX2012011646A
Other languages
Spanish (es)
Inventor
Woo-Jin Han
Tammy Lee
Alexander Alshin
Elena Alshina
Byeong-Doo Choi
Nikolay Shlyakhov
Yoon-Mi Hong
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of MX2012011646A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N 19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4007 Interpolation-based scaling, e.g. bilinear interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/48 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using compressed domain processing techniques other than decoding, e.g. modification of transform coefficients, variable length coding [VLC] data or run-length data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/17 Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method
    • G06F 17/175 Function evaluation by approximation methods, e.g. inter- or extrapolation, smoothing, least mean square method of multidimensional data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N 19/122 Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/157 Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/182 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/513 Processing of motion vectors
    • H04N 19/517 Processing of motion vectors by encoding
    • H04N 19/52 Processing of motion vectors by encoding by predictive encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N 19/51 Motion estimation or motion compensation
    • H04N 19/523 Motion estimation or motion compensation with sub-pixel accuracy
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/59 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/96 Tree coding, e.g. quad-tree coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/01 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N 7/0135 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N 7/014 Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors

Abstract

Provided are a method and apparatus for interpolating an image. The method includes: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.

Description

METHOD AND APPARATUS FOR PERFORMING INTERPOLATION BASED ON TRANSFORM AND INVERSE TRANSFORM

Field of the Invention
Apparatuses and methods consistent with the exemplary embodiments relate to interpolating an image, and more particularly, to interpolating between pixel values of integer pixel units.
Background of the Invention
In a related-art method of encoding and decoding an image, one frame is divided into a plurality of macroblocks in order to encode the image. Then, each of the plurality of macroblocks is prediction-encoded by performing inter prediction or intra prediction on it.
Inter prediction is a method of compressing an image by removing temporal redundancy between frames, and a representative example of it is motion-estimation encoding. In motion-estimation encoding, each block of a current frame is predicted by using at least one reference frame: a reference block that is most similar to the current block is found within a predetermined search range by using a predetermined evaluation function.
The current block is predicted based on the reference block, and a residual block is obtained by subtracting the predicted block, which is the result of the prediction, from the current block; the residual block is then encoded. In this case, to predict the current block more precisely, sub-pixels that are smaller than integer pixel units are generated by interpolating within the search range of the reference frame, and inter prediction is performed based on these sub-pixels.
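As a sketch of the prediction flow just described, with made-up sample values: the residual is the current block minus the predicted block, and a two-tap rounded average stands in for the longer sub-pixel interpolation filters discussed later (both helpers are illustrative, not the patent's filters):

```python
import numpy as np

def residual_block(current: np.ndarray, predicted: np.ndarray) -> np.ndarray:
    """Residual = current block minus the block predicted from the reference."""
    return current.astype(np.int32) - predicted.astype(np.int32)

def half_pel(a: int, b: int) -> int:
    """Half-pel sample between two integer pixels by rounded averaging --
    a two-tap stand-in for a real sub-pixel interpolation filter."""
    return (a + b + 1) >> 1

current = np.array([[10, 12], [14, 16]])
predicted = np.array([[9, 13], [14, 15]])
print(residual_block(current, predicted).tolist())  # -> [[1, -1], [0, 1]]
print(half_pel(10, 13))                             # -> 12
```

A good prediction leaves a residual of small values, which is why it compresses well.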
Brief Description of the Invention
Solution to the Problem
Aspects of one or more exemplary embodiments provide a method and apparatus for generating pixel values of fractional pixel units by interpolating between pixel values of integer pixel units.
Aspects of one or more exemplary embodiments also provide a computer-readable recording medium having recorded thereon a computer program for executing the method.
Advantageous Effects of the Invention
According to the present application, a pixel value of a fractional pixel unit can be generated more accurately.
Brief Description of the Figures
The foregoing and other features will become more apparent from the following detailed description of exemplary embodiments with reference to the accompanying figures, in which: FIG. 1 is a block diagram of an apparatus for encoding an image, according to an exemplary embodiment; FIG. 2 is a block diagram of an apparatus for decoding an image, according to an exemplary embodiment; FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment; FIG. 4 is a block diagram of an image encoder based on a coding unit, according to an exemplary embodiment; FIG. 5 is a block diagram of an image decoder based on a coding unit, according to an exemplary embodiment; FIG. 6 illustrates a maximum coding unit, a sub-coding unit, and a prediction unit, according to an exemplary embodiment; FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment; FIGS. 8a to 8d illustrate forms of division of a coding unit, a prediction unit, and a transform unit, according to an exemplary embodiment; FIG. 9 is a block diagram of an image interpolation apparatus according to an exemplary embodiment; FIG. 10 is a diagram illustrating a two-dimensional (2D) interpolation method performed by the image interpolation apparatus of FIG. 9, according to an exemplary embodiment; FIG. 11 is a diagram illustrating an interpolation region according to an exemplary embodiment; FIG. 12 is a diagram illustrating a one-dimensional (1D) interpolation method according to an exemplary embodiment; FIG. 13 is a diagram specifically illustrating a 1D interpolation method performed by the image interpolation apparatus of FIG. 9, according to an exemplary embodiment; FIG. 14 is a block diagram of an image interpolation apparatus according to an exemplary embodiment; FIG. 15 illustrates 2D interpolation filters according to an exemplary embodiment; FIGS. 16a to 16f illustrate 1D interpolation filters according to exemplary embodiments; FIGS. 17a to 17y illustrate optimized 1D interpolation filters according to exemplary embodiments; FIGS. 18a and 18b illustrate methods of interpolating pixel values in various directions by using a 1D interpolation filter, according to exemplary embodiments; FIG. 19a illustrates a 2D interpolation method according to an exemplary embodiment; FIG. 19b illustrates a 2D interpolation method using a 1D interpolation filter, according to another exemplary embodiment; FIG. 19c illustrates a 2D interpolation method using a 1D interpolation filter, according to another exemplary embodiment; FIG. 20 is a flow diagram illustrating an image interpolation method according to an exemplary embodiment; FIG. 21 is a flow diagram illustrating an image interpolation method according to another exemplary embodiment; FIG. 22 is a flow diagram illustrating an image interpolation method according to another exemplary embodiment; and FIGS. 23a to 23e illustrate methods of scaling and rounding in relation to a 1D interpolation filter, according to exemplary embodiments.
Detailed Description of the Invention
According to an aspect of an exemplary embodiment, there is provided a method of interpolating an image, the method including: selecting a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
The method may further include: selecting a second filter, from among a plurality of different filters, for interpolating between the at least one generated pixel value of the at least one fractional pixel unit, according to an interpolation location; and interpolating between the at least one generated pixel value of the at least one fractional pixel unit by using the selected second filter.
The first filter for interpolating between the pixel values of the integer pixel units may be a spatial-domain filter for transforming the pixel values of the integer pixel units by using a plurality of basis functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the pixel values of the integer pixel units, by using the plurality of basis functions, the phases of which are shifted.
The second filter for interpolating between the at least one generated pixel value of the at least one fractional pixel unit may be a spatial-domain filter for transforming the at least one generated pixel value of the at least one fractional pixel unit by using a plurality of basis functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the at least one generated pixel value of the at least one fractional pixel unit, by using the plurality of basis functions, the phases of which are shifted.
According to an aspect of another exemplary embodiment, there is provided an apparatus for interpolating an image, the apparatus including: a filter selector which selects a first filter, from among a plurality of different filters, for interpolating between pixel values of integer pixel units, according to an interpolation location; and an interpolator which generates at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the integer pixel units by using the selected first filter.
The filter selector may select a second filter, from among a plurality of different filters, for interpolating between the at least one generated pixel value of the at least one fractional pixel unit, according to an interpolation location, and the interpolator may interpolate between the at least one generated pixel value of the at least one fractional pixel unit by using the selected second filter.
According to an aspect of another exemplary embodiment, there is provided a computer-readable recording medium having recorded thereon a computer program for executing the method described above.
According to an aspect of another exemplary embodiment, there is provided a method of interpolating an image, the method including: transforming pixel values in a spatial domain by using a plurality of basis functions having different frequencies; shifting the phases of the plurality of basis functions; and inversely transforming a plurality of coefficients, obtained by transforming the pixel values, by using the phase-shifted plurality of basis functions.
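The three steps above (transform, shift the phases of the basis functions, inversely transform) can be sketched with an orthonormal DCT-II. The function names and the choice of evaluation position are illustrative assumptions; this is not the patent's exact filter, which uses scaled integer coefficients:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II: rows are cosine basis functions of increasing frequency.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * k * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def interpolate_dct(pixels, alpha):
    """Transform the pixels, then inverse-transform with the cosine phases
    shifted by the fractional offset alpha in [0, 1), evaluating the signal
    between pixels[n//2 - 1] and pixels[n//2] (position choice is illustrative)."""
    p = np.asarray(pixels, dtype=float)
    n = len(p)
    coeffs = dct_matrix(n) @ p          # forward transform
    k = np.arange(n)
    m = n // 2 - 1 + alpha              # phase-shifted (fractional) evaluation point
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)
    return float(basis @ coeffs)        # inverse transform at the shifted phase

print(interpolate_dct([5, 5, 5, 5], 0.5))  # a flat signal stays flat at any offset
```

Because the DCT is orthonormal, alpha = 0 reproduces the integer-position pixel exactly, and the composition of forward transform and phase-shifted inverse transform collapses into a single 1D interpolation filter in the spatial domain.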
Mode of the Invention
Hereinafter, one or more exemplary embodiments will be described more fully with reference to the accompanying figures. Expressions such as "at least one of", when preceding a list of elements, modify the entire list of elements and do not modify the individual elements of the list. In the present specification, an "image" may denote a still image of a video or a moving image, that is, the video itself.
FIG. 1 is a block diagram of an apparatus 100 for encoding an image, in accordance with an exemplary embodiment. With reference to FIG. 1, apparatus 100 for encoding an image includes a maximum encoding unit divider 110, an encoding depth determiner 120, an image data encoder 130, and an encoding information encoder 140.
The maximum coding unit divider 110 can divide a current segment or frame based on a maximum coding unit which is a coding unit of the largest size. That is, the maximum coding unit divider 110 can divide the current segment or frame into at least one maximum encoding unit.
According to an exemplary embodiment, a coding unit can be represented using a maximum coding unit and a depth. As described above, the maximum coding unit indicates the coding unit having the largest size among the coding units of the current frame, and the depth indicates the degree to which a coding unit has been hierarchically decreased. As the depth increases, a coding unit can decrease in size from the maximum coding unit to a minimum coding unit, where the depth of the maximum coding unit is defined as a minimum depth and the depth of the minimum coding unit is defined as a maximum depth. Since the size of a coding unit decreases from the maximum coding unit as the depth increases, a sub-coding unit of a kth depth can include a plurality of sub-coding units of a (k+n)th depth (where k and n are integers equal to or greater than 1).
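The depth-to-size relation above can be sketched as follows; the 64x64 maximum coding unit and maximum depth of 4 are illustrative values, not fixed by the text:

```python
def coding_unit_size(max_size: int, depth: int) -> int:
    """Edge length of a coding unit at a given depth, when each increase
    in depth halves both dimensions of the maximum coding unit."""
    return max_size >> depth

# With an (illustrative) 64x64 maximum coding unit and maximum depth 4:
print([coding_unit_size(64, d) for d in range(5)])  # -> [64, 32, 16, 8, 4]
```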
According to an increase in the size of a frame to be encoded, encoding an image in a larger coding unit can lead to a higher image compression ratio. However, if a large coding unit is fixed, an image cannot be efficiently encoded in a way that reflects continuously changing image characteristics.
For example, when a smooth area such as the sea or the sky is encoded, the larger a coding unit is, the more the compression ratio can increase. However, when a complex area such as people or buildings is encoded, the smaller a coding unit is, the more the compression ratio can increase.
Accordingly, according to an exemplary embodiment, a maximum coding unit of a different size and a different maximum depth can be set for each frame or segment. Since a maximum depth denotes the maximum number of times by which a coding unit can decrease, the size of each minimum coding unit included in a maximum coding unit can be set variably according to the maximum depth. The maximum depth can be determined differently for each frame or segment, or for each maximum coding unit.
The coding depth determiner 120 determines a form of division of the maximum coding unit. The form of division can be determined based on calculation of rate-distortion (RD) costs. The determined division shape of the maximum coding unit is provided to the encoding information encoder 140, and the image data according to the maximum coding units is provided to the image data encoder 130.
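The RD-cost comparison can be sketched with the usual Lagrangian formulation J = D + lambda * R; all numbers below are made up for illustration, and the real encoder evaluates many more candidates:

```python
def rd_cost(distortion: float, rate_bits: float, lam: float) -> float:
    """Lagrangian rate-distortion cost J = D + lambda * R, the kind of
    criterion used to compare candidate division shapes."""
    return distortion + lam * rate_bits

# Compare two hypothetical decisions for one maximum coding unit.
candidates = {
    "no_split": rd_cost(distortion=120.0, rate_bits=30, lam=4.0),  # 240.0
    "split": rd_cost(distortion=40.0, rate_bits=55, lam=4.0),      # 260.0
}
best = min(candidates, key=candidates.get)
print(best)  # -> no_split: splitting saves distortion here but costs too many bits
```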
A maximum coding unit can be divided into sub-coding units having different sizes according to different depths, and the sub-coding units having different sizes, which are included in the maximum coding unit, can be predicted or transformed based on processing units having different sizes. In other words, the apparatus 100 for encoding an image can perform a plurality of processing operations for image coding based on processing units having various sizes and various shapes. To encode the image data, processing operations such as at least one of prediction, transform, and entropy coding are performed, and processing units having the same size or different sizes can be used for the respective operations.
For example, the apparatus 100 for encoding an image may select a processing unit that is different from a coding unit for predicting the coding unit.
When the size of a coding unit is 2Nx2N (where N is a positive integer), the processing units for prediction can be 2Nx2N, 2NxN, Nx2N, and NxN. In other words, motion prediction can be performed based on a processing unit having a shape by which at least one of the height and the width of a coding unit is equally divided by two. Hereinafter, a processing unit that is the basis of prediction is defined as a 'prediction unit'.
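The four partition shapes listed above can be enumerated as (width, height) pairs; `prediction_partitions` is a hypothetical helper for illustration, not part of the patent:

```python
def prediction_partitions(n: int):
    """(width, height) of each prediction unit under the four partition
    shapes of a 2Nx2N coding unit (helper name is illustrative)."""
    return {
        "2Nx2N": [(2 * n, 2 * n)],
        "2NxN": [(2 * n, n)] * 2,
        "Nx2N": [(n, 2 * n)] * 2,
        "NxN": [(n, n)] * 4,
    }

parts = prediction_partitions(8)  # a 16x16 coding unit (N = 8)
areas = {name: sum(w * h for w, h in pus) for name, pus in parts.items()}
print(areas)  # every partitioning covers the same 256 samples
```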
A prediction mode can be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode may be performed only for a prediction unit having a specific size or shape. For example, the intra mode may be performed only for prediction units having the sizes of 2Nx2N or NxN, of which the shape is a square. In addition, the skip mode may be performed only for a prediction unit having the size of 2Nx2N. If a plurality of prediction units exists in a coding unit, the prediction mode with the least encoding error can be selected after performing prediction for each prediction unit.
Alternatively, the apparatus 100 for encoding an image can perform the transform on image data based on a processing unit having a size different from that of a coding unit. For the transform in the coding unit, the transform can be performed based on a processing unit having a size equal to or smaller than that of the coding unit. Hereinafter, a processing unit that is the basis of the transform is defined as a 'transform unit'. The transform can be a discrete cosine transform (DCT), a Karhunen-Loeve transform (KLT), or any other fixed-point spatial transform.
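As a sketch of the transform step, a separable orthonormal 2D DCT applied to a residual block; function names are illustrative, and a real codec uses scaled integer approximations rather than floating point:

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    # Orthonormal DCT-II matrix (floating-point reference version).
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * k * (2 * m + 1) / (2 * n)) * np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c

def dct2d(block: np.ndarray) -> np.ndarray:
    """Separable 2D DCT of a square block: transform rows, then columns."""
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

coeffs = dct2d(np.full((4, 4), 7.0))   # a flat residual block
print(round(coeffs[0, 0], 3))          # -> 28.0: all energy in the DC coefficient
```

Concentrating the block's energy in a few low-frequency coefficients is what makes the subsequent entropy coding effective.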
The coding depth determiner 120 can determine the sub-coding units included in a maximum coding unit by using RD optimization based on a Lagrangian multiplier. In other words, the coding depth determiner 120 can determine the shape of the plurality of sub-coding units divided from the maximum coding unit, wherein the plurality of sub-coding units have different sizes according to their depths. The image data encoder 130 produces a bitstream by encoding the maximum coding unit based on the division forms determined by the coding depth determiner 120.
The encoding information encoder 140 encodes the information about a coding mode of the maximum coding unit determined by the coding depth determiner 120. In other words, the encoding information encoder 140 produces a bitstream by encoding the information about a form of division of the maximum coding unit, information about the maximum depth, and information about a coding mode of a sub-coding unit for each depth. The information about the coding mode of the sub-coding unit can include information about a prediction unit of the sub-coding unit, information about a prediction mode for each prediction unit, and information about a transform unit of the sub-coding unit.
The information about the form of division of the maximum coding unit can be information, e.g., flag information, indicating whether each coding unit is divided. For example, when the maximum coding unit is divided and encoded, information indicating whether the maximum coding unit is divided is encoded. Further, when a sub-coding unit divided from the maximum coding unit is divided and encoded, information indicating whether the sub-coding unit is divided is encoded.
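The flag-based signaling described above can be sketched as a depth-first traversal that emits one split bit per unit that may still be divided; this is a simplification for illustration, not the actual bitstream syntax:

```python
def encode_split_flags(unit, depth=0, max_depth=3, out=None):
    """Emit 1 when a unit is split into four sub-units, 0 otherwise,
    in depth-first order; units at max_depth cannot split, so no flag
    is emitted for them (a sketch of the division-form flags)."""
    if out is None:
        out = []
    split = isinstance(unit, list)
    if depth < max_depth:
        out.append(1 if split else 0)
    if split:
        for sub in unit:
            encode_split_flags(sub, depth + 1, max_depth, out)
    return out

# A maximum coding unit whose first quadrant is split once more:
tree = [[0, 0, 0, 0], 0, 0, 0]
print(encode_split_flags(tree))  # -> [1, 1, 0, 0, 0, 0, 0, 0, 0]
```

The decoder can replay the same traversal, consuming one flag per visited unit, to recover the identical division shape.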
Since sub-coding units having different sizes exist for each maximum coding unit and information about a coding mode must be determined for each sub-coding unit, information about at least one coding mode can be determined for one maximum coding unit.
The apparatus 100 for encoding an image can generate sub-coding units by equally dividing both the height and width of a maximum coding unit by two according to an increase in depth. That is, when the size of a coding unit of a kth depth is 2Nx2N, the size of a coding unit of a (k+1)th depth is NxN.
Accordingly, the apparatus 100 for encoding an image can determine an optimal division shape for each maximum coding unit, based on maximum coding unit sizes and a maximum depth, in consideration of the image characteristics. By varying the size of a maximum coding unit in consideration of the characteristics of an image, and by encoding the image through division of a maximum coding unit into sub-coding units of different depths, images having different resolutions can be encoded more efficiently.
FIG. 2 is a block diagram of an apparatus 200 for decoding an image according to an exemplary embodiment. With reference to FIG. 2, apparatus 200 for decoding an image includes an image data acquisition unit 210, an encoding information extractor 220, and an image data decoder 230.
The image data acquisition unit 210 acquires image data according to maximum coding units by parsing a bitstream received by the apparatus 200 for decoding an image, and outputs the image data to the image data decoder 230. The image data acquisition unit 210 may extract information about the maximum coding units of a current slice or frame from a header of the current slice or frame. In other words, the image data acquisition unit 210 divides the bitstream according to the maximum coding units so that the image data decoder 230 can decode the image data according to the maximum coding units.
The encoding information extractor 220 extracts information about a maximum encoding unit, a maximum depth, a form of division of the maximum encoding unit, and a coding mode of sub-encoding units of the current frame header by analyzing the bitstream received by the apparatus 200 to decode an image. Information about the division form and information about the coding mode are provided to the image data decoder 230.
The information about the division shape of the maximum coding unit may include information about sub-coding units having different sizes according to depths and included in the maximum coding unit, and may be information (e.g., flag information) indicating whether each coding unit is divided. The information about the coding mode can include information about a prediction unit according to the sub-coding units, information about a prediction mode, and information about a transform unit.
The image data decoder 230 restores the current frame by decoding the image data of each maximum coding unit, based on the information extracted by the encoding information extractor 220.
The image data decoder 230 can decode the sub-coding units included in a maximum coding unit, based on the information about the division shape of the maximum coding unit. The decoding can include intra prediction, inter prediction that includes motion compensation, and inverse transform.
The image data decoder 230 can perform intra prediction or inter prediction based on the information about a prediction unit and the information about a prediction mode, in order to predict a prediction unit. The image data decoder 230 can also perform the inverse transform for each sub-coding unit based on the information about a transform unit of the sub-coding unit.
FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment. With reference to FIG. 3, the hierarchical coding units may include coding units whose widths and heights are 64x64, 32x32, 16x16, 8x8, and 4x4. In addition to these coding units having perfect square shapes, there may also be coding units whose widths and heights are 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, and 4x8.
With reference to FIG. 3, for the image data 310 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 2.
For the image data 320 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 3. For the image data 330 whose resolution is 352x288, the size of a maximum coding unit is set to 16x16, and a maximum depth is set to 1.
When the resolution is high or the amount of data is large, a maximum size of one coding unit may be relatively large to increase a compression ratio and accurately reflect the image characteristics. Accordingly, for image data 310 and 320 having higher resolution than image data 330, 64x64 can be selected as the size of a maximum encoding unit.
A maximum depth indicates the total number of layers in the hierarchical coding units. Since the maximum depth of the image data 310 is 2, a coding unit 315 of the image data 310 may include a maximum coding unit whose longer axis size is 64 and sub-coding units whose longer axis sizes are 32 and 16, according to an increase in depth.
On the other hand, since the maximum depth of the image data 330 is 1, a coding unit 335 of the image data 330 may include a maximum coding unit whose longer axis size is 16 and coding units whose longer axis sizes are 8 and 4, according to an increase in depth.
However, since the maximum depth of the image data 320 is 3, a coding unit 325 of the image data 320 may include a maximum coding unit whose longer axis size is 64 and sub-coding units whose longer axis sizes are 32, 16, 8, and 4, according to an increase in depth. Since an image is encoded based on smaller sub-coding units as depth increases, the current exemplary embodiment is suitable for encoding an image that includes more minute scenes.
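The relation between a maximum coding unit size, a maximum depth, and the resulting longer axis sizes can be sketched as follows. This is a minimal illustration, not part of the patent; the function name is hypothetical, and it assumes the convention that each depth increment halves the size (so a maximum depth of 2 with a 64x64 maximum coding unit yields the 64, 32, and 16 longer axis sizes of the image data 310).

```python
def axis_sizes(max_size, max_depth):
    # Each depth increment halves the longer axis size of a coding unit,
    # so the hierarchy contains max_depth + 1 layers of sizes
    # max_size, max_size/2, ..., max_size >> max_depth.
    return [max_size >> depth for depth in range(max_depth + 1)]
```

For example, axis_sizes(64, 2) yields the sizes 64, 32, and 16 of the coding unit 315 described above.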
FIG. 4 is a block diagram of an image encoder 400 based on a coding unit, in accordance with an exemplary embodiment. An intra prediction unit 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimator 420 and a motion compensator 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495.
Residual values are generated based on the prediction units output from the intra prediction unit 410, the motion estimator 420, and the motion compensator 425, and are then output as quantized transform coefficients by passing through a transformer 430 and a quantizer 440.
The quantized transform coefficients are restored to residual values by passing through an inverse quantizer 460 and an inverse transformer 470, are post-processed by passing through a deblocking unit 480 and a loop filtering unit 490, and are then output as the reference frame 495. The quantized transform coefficients can be output as a bitstream 455 by passing through an entropy coder 450.
To perform encoding based on an encoding method in accordance with an exemplary embodiment, the components of the image encoder 400, i.e., the intra prediction unit 410, the motion estimator 420, the motion compensator 425, the transformer 430, the quantizer 440, the entropy coder 450, the inverse quantizer 460, the inverse transformer 470, the deblocking unit 480, and the loop filtering unit 490, can perform image encoding processes based on a maximum coding unit, sub-coding units according to depths, a prediction unit, and a transform unit.
FIG. 5 is a block diagram of an image decoder 500 based on a coding unit, according to an exemplary embodiment. With reference to FIG. 5, a bitstream 505 is parsed by a parser 510 to obtain encoded image data to be decoded and the encoding information necessary for decoding. The encoded image data is output as inverse-quantized data by passing through an entropy decoder 520 and an inverse quantizer 530, and is restored to residual values by passing through an inverse transformer 540. The residual values are restored according to coding units by being added to an intra prediction result of an intra prediction unit 550 or a motion compensation result of a motion compensator 560. The restored coding units are used for prediction of subsequent coding units or of a next frame by passing through a deblocking unit 570 and a loop filtering unit 580.
To perform decoding based on a decoding method in accordance with an exemplary embodiment, the components of the image decoder 500, i.e., the parser 510, the entropy decoder 520, the inverse quantizer 530, the inverse transformer 540, the intra prediction unit 550, the motion compensator 560, the deblocking unit 570, and the loop filtering unit 580, can perform image decoding processes based on a maximum coding unit, sub-coding units according to depths, a prediction unit, and a transform unit.
In particular, the intra prediction unit 550 and the motion compensator 560 determine a prediction unit and a prediction mode in a sub-coding unit considering a maximum coding unit and a depth, and the inverse transformer 540 performs the inverse transform considering the size of a transform unit.
FIG. 6 illustrates a maximum coding unit, a sub-coding unit, and a prediction unit, according to an exemplary embodiment. The apparatus 100 for encoding an image illustrated in FIG. 1 and apparatus 200 for decoding an image illustrated in FIG. 2 use hierarchical coding units to perform coding and decoding in consideration of the characteristics of the image. A maximum coding unit and a maximum depth can be set adaptively according to the characteristics of the image or established in a varied manner according to the requirements of a user.
In FIG. 6, a hierarchical coding unit structure 600 has a maximum coding unit 610 whose height and width are 64 and the maximum depth is 4. A depth increases along a vertical axis of the hierarchical coding unit structure 600, and when a depth increases, the heights and widths of the sub-coding units 620 to 650 decrease. The prediction units of the maximum coding unit 610 and the sub-coding units 620 to 650 are shown along a horizontal axis of the hierarchical coding unit structure 600.
The maximum coding unit 610 has a depth of 0 and a coding unit size, i.e., height and width, of 64x64. The depth increases along the vertical axis, and there are a sub-coding unit 620 whose size is 32x32 and depth is 1, a sub-coding unit 630 whose size is 16x16 and depth is 2, a sub-coding unit 640 whose size is 8x8 and depth is 3, and a sub-coding unit 650 whose size is 4x4 and depth is 4. The sub-coding unit 650 whose size is 4x4 and depth is 4 is a minimum coding unit, and the minimum coding unit can be divided into prediction units, each of which is smaller than the minimum coding unit.
With reference to FIG. 6, the examples of a prediction unit are shown along the horizontal axis according to each depth. That is, a prediction unit of the maximum coding unit 610 whose depth is 0 can be a prediction unit whose size is equal to the coding unit 610, i.e., 64x64, or a prediction unit 612 whose size is 64x32, a prediction unit 614 whose size is 32x64, or a prediction unit 616 whose size is 32x32, each of which has a smaller size than the coding unit 610 whose size is 64x64.
A prediction unit of the coding unit 620 whose depth is 1 and size is 32x32 can be a prediction unit whose size is equal to the coding unit 620, i.e., 32x32, or a prediction unit 622 whose size is 32x16, a prediction unit 624 whose size is 16x32, or a prediction unit 626 whose size is 16x16, each of which is smaller than the coding unit 620 whose size is 32x32.
A prediction unit of the coding unit 630 whose depth is 2 and size is 16x16 can be a prediction unit whose size is equal to the coding unit 630, i.e., 16x16, or a prediction unit 632 whose size is 16x8, a prediction unit 634 whose size is 8x16, or a prediction unit 636 whose size is 8x8, each of which has a smaller size than the coding unit 630 whose size is 16x16.
A prediction unit of the coding unit 640 whose depth is 3 and size is 8x8 can be a prediction unit whose size is equal to the coding unit 640, i.e., 8x8, or a prediction unit 642 whose size is 8x4, a prediction unit 644 whose size is 4x8, or a prediction unit 646 whose size is 4x4, each of which is smaller than the coding unit 640 whose size is 8x8.
Finally, the coding unit 650 whose depth is 4 and the size is 4x4 is a minimum coding unit and a coding unit of a maximum depth, and a prediction unit of the coding unit 650 can be a prediction unit 650 whose size is 4x4, a prediction unit 652 having a size of 4x2, a prediction unit 654 having a size of 2x4, or a prediction unit 656 having a size of 2x2.
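The per-depth prediction-unit candidates described above follow a simple pattern: a coding unit of size 2N x 2N admits prediction units of 2N x 2N, 2N x N, N x 2N, and N x N. A minimal sketch of that pattern (the function name is hypothetical, not part of the patent):

```python
def prediction_unit_sizes(size):
    # Candidate prediction units for a size x size coding unit (FIG. 6):
    # the unit itself, its two symmetric halves, and its quarter.
    half = size // 2
    return [(size, size), (size, half), (half, size), (half, half)]
```

For the 64x64 maximum coding unit 610 this yields 64x64, 64x32, 32x64, and 32x32, matching the prediction units 610, 612, 614, and 616; for the minimum coding unit 650 it yields 4x4, 4x2, 2x4, and 2x2, matching the prediction units 650, 652, 654, and 656.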
FIG. 7 illustrates a coding unit and a transform unit, according to an exemplary embodiment. The apparatus 100 for encoding an image illustrated in FIG. 1 and apparatus 200 for decoding an image illustrated in FIG. 2 perform coding and decoding with a maximum coding unit by themselves or with sub-coding units, which are equal to or less than the maximum coding unit, divided from the maximum coding unit. In the encoding and decoding process, the size of a transform unit for the transform can be selected to be no larger than that of a corresponding coding unit. For example, with reference to FIG. 7, when a current coding unit 710 has the size of 64x64, the transform can be performed using a transform unit 720 which has the size of 32x32.
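The constraint that a transform unit be no larger than its coding unit can be sketched as follows. This is an illustrative helper, not part of the patent; the function name is hypothetical, and the 4x4 lower bound is an assumption based on the minimum coding unit described above.

```python
def transform_unit_candidates(cu_size, min_size=4):
    # Transform-unit sizes usable inside a cu_size x cu_size coding unit:
    # every power-of-two size from cu_size down to min_size.
    sizes = []
    size = cu_size
    while size >= min_size:
        sizes.append(size)
        size //= 2
    return sizes
```

For the 64x64 coding unit 710 of FIG. 7, the 32x32 transform unit 720 is one of the candidates 64, 32, 16, 8, and 4.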
FIGS. 8A through 8D illustrate forms of division of a coding unit, a prediction unit, and a transform unit, according to an exemplary embodiment. Specifically, FIGS. 8A and 8B illustrate a coding unit and a prediction unit according to an exemplary embodiment.
FIG. 8A shows a division shape selected by the apparatus 100 for encoding an image illustrated in FIG. 1, for encoding a maximum coding unit 810. The apparatus 100 for encoding an image divides the maximum coding unit 810 into various shapes, performs encoding on them, and selects an optimal division shape by comparing the encoding results of the various division shapes with one another based on RD costs. When it is optimal for the maximum coding unit 810 to be encoded as is, the maximum coding unit 810 can be encoded without dividing it, as illustrated in FIGS. 8A through 8D.
With reference to FIG. 8A, the maximum coding unit 810 whose depth is 0 is encoded by dividing it into sub-coding units whose depths are equal to or greater than 1. That is, the maximum coding unit 810 is divided into four sub-coding units whose depths are 1, and all or some of the sub-coding units whose depths are 1 are divided into sub-coding units whose depths are 2.
A sub-coding unit located on an upper right side and a sub-coding unit located on a lower left side among the sub-coding units whose depths are 1 are divided into sub-coding units whose depths are equal to or greater than 2. Some of the sub-coding units whose depths are equal to or greater than 2 can be divided into sub-coding units whose depths are equal to or greater than 3.
FIG. 8B shows a division shape of a prediction unit for the maximum coding unit 810. With reference to FIG. 8B, a prediction unit 860 for the maximum coding unit 810 can be divided differently from the maximum coding unit 810. In other words, a prediction unit for each of the sub-coding units can be smaller than the corresponding sub-coding unit. For example, a prediction unit for a sub-coding unit 854 located on a lower right side among the sub-coding units whose depths are 1 may be smaller than the sub-coding unit 854. In addition, the prediction units for some sub-coding units 814, 816, 850, and 852 among the sub-coding units 814, 816, 818, 828, 850, and 852 whose depths are 2 may be smaller than the sub-coding units 814, 816, 850, and 852, respectively.
In addition, the prediction units for the sub-coding units 822, 832, and 848 whose depths are 3 may be smaller than the sub-coding units 822, 832, and 848, respectively. The prediction units may have a shape whereby the respective sub-coding units are equally divided by two in a height or width direction, or a shape whereby the respective sub-coding units are equally divided by four in the height and width directions.
FIGS. 8C and 8D illustrate a prediction unit and a transform unit, according to an exemplary embodiment.
FIG. 8c shows a division form of a prediction unit for the maximum coding unit 810 shown in FIG. 8b, and FIG. 8d shows a division form of a transform unit of the maximum coding unit 810.
With reference to FIG. 8d, a form of division of a transform unit 870 can be set differently from the prediction unit 860.
For example, even though a prediction unit for the coding unit 854 whose depth is 1 is selected with a shape whereby the height of the coding unit 854 is equally divided by two, a transform unit can be selected with the same size as the coding unit 854. Likewise, even though the prediction units for the coding units 814 and 850 whose depths are 2 are selected with a shape whereby the height of each of the coding units 814 and 850 is equally divided by two, a transform unit can be selected with the same size as the original size of each of the coding units 814 and 850.
A transform unit can be selected with a smaller size than a prediction unit. For example, when a prediction unit for the coding unit 852 whose depth is 2 is selected with a shape whereby the width of the coding unit 852 is equally divided by two, a transform unit may be selected with a shape whereby the coding unit 852 is equally divided by four in the height and width directions, which has a smaller size than the shape of the prediction unit.
FIG. 9 is a block diagram of an image interpolation apparatus 900 according to an exemplary embodiment. Image interpolation can be used to convert an image having a low resolution into an image having a high resolution. In addition, image interpolation can be used to convert an interlaced image into a progressive image, or to up-sample an image having a low resolution to a higher resolution. When the image encoder 400 of FIG. 4 encodes an image, the motion estimator 420 and the motion compensator 425 can perform inter prediction using an interpolated reference frame. That is, with reference to FIG. 4, an image having a high resolution can be generated by interpolating the reference frame 495, and motion estimation and compensation can be performed based on the image having the high resolution, increasing the accuracy of inter prediction. Likewise, when the image decoder 500 of FIG. 5 decodes an image, the motion compensator 560 can perform motion compensation using an interpolated reference frame, increasing the accuracy of inter prediction.
With reference to FIG. 9, the image interpolation apparatus 900 includes a transformer 910 and a reverse transformer 920.
The transformer 910 transforms pixel values using a plurality of base functions having different frequencies. The transform may be one of various processes for transforming pixel values in a spatial field into frequency-field coefficients, and may be, for example, the DCT described above. Pixel values of whole pixel units are transformed using the plurality of base functions. The pixel values may be pixel values of luminance components or of chrominance components. The type of the plurality of base functions is not limited, and may be one of various types of functions for transforming pixel values in a spatial field into values in a frequency field. For example, the plurality of base functions can be cosine functions for performing the DCT or the inverse DCT. In addition, various types of base functions, such as sine base functions or polynomial base functions, can be used. Examples of the DCT may include a modified DCT and a modified DCT that uses windowing.
The inverse transformer 920 shifts the phases of the plurality of base functions used to perform the transform by the transformer 910, and inverse transforms a plurality of coefficients, that is, the frequency-field values generated by the transformer 910, using the plurality of base functions, the phases of which are shifted. The transform performed by the transformer 910 and the inverse transform performed by the inverse transformer 920 will now be described using two-dimensional (2D) DCT and one-dimensional (1D) DCT.
2D DCT and 2D Inverse DCT
FIG. 10 is a diagram illustrating a 2D interpolation method performed by the image interpolation apparatus 900 of FIG. 9, according to an exemplary embodiment. With reference to FIG. 10, the image interpolation apparatus 900 generates pixel values at the X locations, i.e., interpolation locations, by interpolating between the pixel values of whole pixel units in the spatial field, e.g., the pixel values at the O locations in a block 1000. The pixel values at the X locations are pixel values of fractional pixel units, the interpolation locations of which are determined by 'αx' and 'αy'. Although FIG. 10 illustrates a case where the block 1000 has a size of 4x4, the size of the block 1000 is not limited to 4x4, and it would be obvious to those of ordinary skill in the art that pixel values of fractional pixel units can be generated by performing 2D DCT and 2D inverse DCT on a block that is smaller or larger than the block 1000.
First, the transformer 910 performs 2D DCT on the pixel values of the whole pixel units. The 2D DCT can be performed according to the following equation:
C = D(x) x REF x D(y) ... (1),
where 'C' denotes a block that includes frequency-field coefficients obtained by performing the 2D DCT, 'REF' denotes the block 1000 on which the DCT is performed, 'D(x)' denotes a matrix for performing DCT in the X-axis direction, that is, the horizontal direction, and 'D(y)' denotes a matrix for performing DCT in the Y-axis direction, that is, the vertical direction. Here, 'D(x)' and 'D(y)' can be defined by the following equations (2) and (3):
Dkl(x) = (2/Sx) cos((2l+1)kπ / (2Sx)), 0 ≤ k ≤ Sx−1, 0 ≤ l ≤ Sx−1 ... (2),
where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (2), 'Dkl(x)' denotes a kth row and an lth column of a square matrix D(x), and Sx denotes the horizontal and vertical size of the square matrix D(x).
Dkl(y) = (2/Sy) cos((2l+1)kπ / (2Sy)), 0 ≤ k ≤ Sy−1, 0 ≤ l ≤ Sy−1 ... (3),
where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (3), 'Dkl(y)' denotes a kth row and an lth column of a square matrix D(y), and Sy denotes the horizontal and vertical size of the square matrix D(y).
The transformer 910 performs 2D DCT on the block 1000 by calculating Equation (1), and the inverse transformer 920 performs 2D inverse DCT on the frequency-field coefficients generated by the transformer 910 by calculating the following equation:
P = W(x) x D(x) x REF x D(y) x W(y) ... (4),
where 'P' denotes a block that includes the pixel values at an interpolation location, that is, an X location, which are obtained by performing the inverse DCT. Compared to Equation (1), Equation (4) is obtained by multiplying both sides of the block C by 'W(x)' and 'W(y)', respectively, to perform the inverse DCT on the block C. Here, 'W(x)' denotes a matrix for performing the inverse DCT in the horizontal direction, and 'W(y)' denotes a matrix for performing the inverse DCT in the vertical direction.
As described above, the inverse transformer 920 uses the plurality of base functions, the phases of which are shifted, to perform the 2D inverse DCT. 'W(x)' and 'W(y)' can be defined by the following equations (5) and (6):
Wl0(x) = 1/2, Wlk(x) = cos((2l+1+2αx)kπ / (2Sx)), 0 ≤ k ≤ Sx−1, 0 ≤ l ≤ Sx−1 ... (5),
where 'l' and 'k' denote integers each satisfying the condition expressed in Equation (5), 'Wlk(x)' denotes an lth row and a kth column of a square matrix W(x), and Sx denotes the horizontal and vertical size of the square matrix W(x). αx denotes a horizontal interpolation location as illustrated in FIG. 10, and may be a fractional number, for example, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, or 1/16. However, the fractional number is not limited to these, and αx may be a real number.
Wl0(y) = 1/2, Wlk(y) = cos((2l+1+2αy)kπ / (2Sy)), 0 ≤ k ≤ Sy−1, 0 ≤ l ≤ Sy−1 ... (6),
where 'l' and 'k' denote integers each satisfying the condition expressed in Equation (6), 'Wlk(y)' denotes an lth row and a kth column of a square matrix W(y), and Sy denotes the horizontal and vertical size of the square matrix W(y). αy denotes a vertical interpolation location as illustrated in FIG. 10, and may be a fractional number, for example, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, or 1/16. However, the fractional number is not limited to these, and αy may be a real number.
Compared with Equations (2) and (3), the phases of the plurality of base functions used by the inverse transformer 920, that is, a plurality of cosine functions, are shifted by 2αx and 2αy, respectively, in Equations (5) and (6). If the inverse transformer 920 performs the 2D inverse DCT based on the plurality of cosine functions, the phases of which are shifted, as expressed in Equations (5) and (6), then the pixel values at the X locations are generated.
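Equations (1) through (6) can be sketched numerically as follows. This is an illustrative reading of the equations, not the patent's implementation: the block is assumed square, D(y) and W(y) are applied transposed so that the vertical transform runs along columns, and the normalization of the forward DCT matrix is chosen so that αx = αy = 0 reproduces the original block exactly.

```python
import numpy as np

def dct2_interp(ref, ax, ay):
    # ref: square block of whole-pixel values (REF in Equation (1)).
    # ax, ay: horizontal and vertical interpolation locations.
    s = ref.shape[0]
    k = np.arange(s)
    # Forward DCT matrix, D_kl = (2/S) cos((2l+1)k*pi / (2S))
    D = (2.0 / s) * np.cos(np.outer(k, 2 * k + 1) * np.pi / (2 * s))

    def W(a):
        # Inverse-DCT basis with phases shifted by 2a:
        # W_lk = cos((2l+1+2a)k*pi / (2S)), with W_l0 = 1/2.
        w = np.cos(np.outer(2 * k + 1 + 2 * a, k) * np.pi / (2 * s))
        w[:, 0] = 0.5
        return w

    C = D @ ref @ D.T            # 2D DCT of the block, as in Equation (1)
    return W(ay) @ C @ W(ax).T   # phase-shifted 2D inverse DCT, as in Equation (4)
```

At integer interpolation locations the phase shift degenerates to the ordinary inverse DCT; for example, dct2_interp(ref, 0, 0) returns the original block.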
FIG. 11 is a diagram illustrating an interpolation region 1110 according to an exemplary embodiment. When the transformer 910 and the inverse transformer 920 of FIG. 9 generate pixel values at interpolation locations by performing 2D DCT and 2D inverse DCT, respectively, a region 1120 that is larger than a block to be interpolated, i.e., the interpolation region 1110, can be used. In general, the interpolation accuracy may decrease near the boundaries of the interpolation region 1110, and accordingly, the correlation between pixel values adjacent to an interpolation location can be considered for interpolation. The image interpolation apparatus 900 of FIG. 9 performs 2D DCT on the pixel values included in the interpolation region 1110 and then performs 2D inverse DCT on the result of the 2D DCT, wherein the correlation between the pixel values included in the interpolation region 1110 and the pixel values outside the interpolation region 1110 is not considered.
Accordingly, the image interpolation apparatus 900 performs interpolation on the region 1120, which is larger than the interpolation region 1110 and includes the interpolation region 1110 and a region adjacent to the interpolation region 1110, and uses the pixel values in the interpolation region 1110 for motion compensation.
1D DCT and 1D Inverse DCT
FIG. 12 is a diagram illustrating a 1D interpolation method according to an exemplary embodiment. With reference to FIG. 12, the image interpolation apparatus 900 of FIG. 9 generates a pixel value 1200 at an interpolation location by interpolating between a pixel value 1210 and a pixel value 1220 of whole pixel units in a spatial field. The pixel value 1200 is a pixel value of a fractional pixel unit, the interpolation location of which is determined by 'α'. The 1D interpolation method according to the current exemplary embodiment will now be described in detail with reference to FIG. 13.
FIG. 13 is a diagram specifically illustrating a 1D interpolation method performed by the image interpolation apparatus 900 of FIG. 9, according to an exemplary embodiment. With reference to FIG. 13, a plurality of adjacent pixel values 1310 and 1320, including the pixel values 1210 and 1220 of whole pixel units, respectively, are used to generate a pixel value 1200 of a fractional pixel unit by interpolating between the two pixel values 1210 and 1220. In other words, 1D DCT is performed on the −(M−1)th to Mth pixel values, that is, 2M pixel values, and 1D inverse DCT is performed on the result of the 1D DCT, based on a plurality of base functions, the phases of which are shifted, thereby interpolating between a 0th pixel and a 1st pixel. FIG. 13 illustrates a case where M = 6, but 'M' is not limited to 6 and may be any positive integer greater than 0.
In addition, FIGS. 12 and 13 illustrate cases where interpolation is performed between pixel values adjacent in the horizontal direction, but it would be obvious to those of ordinary skill in the art that the 1D interpolation methods of FIGS. 12 and 13 can be used to interpolate between pixel values adjacent in the vertical direction or in a diagonal direction (see FIGS. 18A and 18B for more details).
The transformer 910 performs 1D DCT on the pixel values of the whole pixel units. The 1D DCT can be performed by calculating the following equation:
Ck = (1/M) Σ p(l) cos((2l−1+2M)kπ / (4M)), l = −(M−1), ..., M, 0 ≤ k ≤ 2M−1 ... (7),
where 'p(l)' denotes the −(M−1)th to Mth pixel values, for example, the −5th to 6th pixel values 1310 and 1320 illustrated in FIG. 13, and 'Ck' denotes a plurality of coefficients obtained by performing the 1D DCT on the pixel values. Here, 'k' denotes an integer satisfying the condition expressed in Equation (7).
When the transformer 910 performs the 1D DCT on the pixel values 1310 and 1320 by calculating Equation (7), the inverse transformer 920 performs 1D inverse DCT on the frequency-field coefficients generated by the transformer 910 by calculating the following Equation (8):
P(α) = C0/2 + Σ Ck cos((2α−1+2M)kπ / (4M)), k = 1, ..., 2M−1 ... (8),
where 'α' denotes an interpolation location between two pixel values as described above with reference to FIG. 13, and may be one of various fractional numbers, for example, 1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, or 1/16. The fractional numbers are not limited to these, and 'α' may be a real number. 'P(α)' denotes the pixel value 1200 at the interpolation location generated by performing the 1D inverse DCT. Compared to Equation (7), the phase of the cosine function expressed in Equation (8), which is a base function used to perform the 1D inverse DCT, is determined by the fractional number 'α' instead of an integer 'l', and is therefore different from the phase of the base function used to perform the 1D DCT.
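Equations (7) and (8) can be sketched directly. This is an illustrative reading, not the patent's implementation; the array p stores the −(M−1)th to Mth pixel values, so the 0th pixel sits at index M−1.

```python
import numpy as np

def dct_interp_1d(p, alpha):
    # p: 2M whole-pixel values p(-(M-1)) .. p(M); alpha: interpolation
    # location between the 0th and 1st pixels (indices M-1 and M in p).
    M = len(p) // 2
    k = np.arange(2 * M)
    l = np.arange(-(M - 1), M + 1)
    # Equation (7): C_k = (1/M) sum_l p(l) cos((2l-1+2M)k*pi / (4M))
    C = (np.cos(np.outer(k, 2 * l - 1 + 2 * M) * np.pi / (4 * M)) @ p) / M
    # Equation (8): P(alpha) = C_0/2 + sum_{k>=1} C_k cos((2*alpha-1+2M)k*pi / (4M))
    basis = np.cos((2 * alpha - 1 + 2 * M) * k * np.pi / (4 * M))
    return C[0] / 2 + C[1:] @ basis[1:]
```

At alpha = 0 or alpha = 1 the shifted basis coincides with the ordinary inverse-DCT basis for the 0th or 1st pixel, so those whole-pixel values are reproduced exactly; fractional alpha yields the interpolated value 1200.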
FIG. 14 is a block diagram of an image interpolation apparatus 1400 in accordance with an exemplary embodiment. With reference to FIG. 14, the image interpolation apparatus 1400 includes a filter selector 1410 and an interpolator 1420. The image interpolation apparatus 900 of FIG. 9 transforms an image and inverse transforms the result of the transform based on a plurality of base functions, the phases of which are shifted. However, if the transform and inverse transform are performed whenever pixel values are input to the image interpolation apparatus 900, the amount of calculation required is large, decreasing the operating speed of an image processing system.
Therefore, image interpolation can be performed quickly in the spatial field, without transforming the spatial field to a frequency field, by calculating in advance the filter coefficients for performing the transform and inverse transform described above, and then filtering the pixel values in the spatial field that are input to the image interpolation apparatus 1400 using the calculated filter coefficients.
The filter selector 1410 receives information regarding an interpolation location and selects a filter to be used for interpolation. As described above, the filter is used to transform pixel values based on a plurality of base functions having different frequencies, and to inverse transform a plurality of coefficients, which are obtained through the transform, based on the plurality of base functions, the phases of which are shifted. The filter coefficients may vary according to the interpolation location, and the filter is selected according to the interpolation location.
As described above with reference to FIG. 9, the pixel values are transformed using the plurality of base functions having different frequencies, and the phases of the plurality of base functions are shifted according to the interpolation location to perform the inverse transform. Then, the pixel value at the interpolation location can be interpolated by inverse transforming the plurality of coefficients using the plurality of base functions, the phases of which are shifted. In other words, if the transform is performed based on the pixel values of whole pixel units and the inverse transform is performed based on the plurality of base functions, the phases of which are shifted according to the interpolation location, then the pixel values of at least one fractional pixel unit can be generated for various interpolation locations. Accordingly, the filter selector 1410 of FIG. 14 presets a plurality of filters for performing the transform and the inverse transform based on different base functions, and selects one of the preset filters based on the information regarding an interpolation location.
The interpolator 1420 performs interpolation using the filter selected by the filter selector 1410. Specifically, the interpolation is performed by filtering a plurality of pixel values of integer pixel units based on the selected filter. As a result of the interpolation, a pixel value at a predetermined interpolation location, i.e., a pixel value of a fractional pixel unit, is obtained. With reference to FIG. 10, if a block including a plurality of pixel values of integer pixel units is filtered with a 2D filter, then a plurality of pixel values are generated at interpolation locations, each of which is determined by 'αx' and 'αy'. With reference to FIG. 13, if a row or column including a plurality of pixel values of integer pixel units is filtered with a 1D filter, then a plurality of pixel values are generated at interpolation locations α. The interpolation methods performed using the 2D filter and the 1D filter, respectively, will now be described with reference to the accompanying figures.

2D filter

P = W(x) × D(x) × REF × D(y) × W(y), as described above in relation to Equation (4). This equation can also be expressed as follows:

P = F(x) × REF × F(y) ... (9),

where 'F(x)' denotes a filter for transforming the block REF in the horizontal direction and inversely transforming the result of the transform in the horizontal direction using the plurality of base functions, the phases of which are shifted, and 'F(y)' denotes a filter for transforming the block REF in the vertical direction and inversely transforming the result of the transform in the vertical direction using the plurality of base functions, the phases of which are shifted. For example, 'F(x)' may denote a filter for performing DCT on the block REF in the horizontal direction and performing inverse DCT on the result of the DCT in the horizontal direction using a plurality of cosine functions, the phases of which are shifted, and 'F(y)' may denote a filter for performing DCT on the block REF in the vertical direction and performing inverse DCT on the result of the DCT in the vertical direction using a plurality of cosine functions, the phases of which are shifted.
According to Equations (2), (3), (5), and (6), the filters F(x) and F(y) can be defined by the following Equations (10) and (11):

Fkl(x) = Σn Wkn(x) × Dnl(x), 0 ≤ k ≤ Sx−1, 0 ≤ l ≤ Sx−1 ... (10),

where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (10), 'Fkl(x)' denotes a kth row and an lth column of the matrix F(x), and Sx denotes the horizontal and vertical sizes of the square matrices W(x) and D(x). Since the matrices W(x) and D(x) are square, their horizontal and vertical sizes are the same. 'Wkn(x)' denotes a kth row and an nth column of the square matrix W(x) described above in relation to Equation (5), and 'Dnl(x)' denotes an nth row and an lth column of the square matrix D(x) described above in relation to Equation (2).

Fkl(y) = Σn Dkn(y) × Wnl(y), 0 ≤ k ≤ Sy−1, 0 ≤ l ≤ Sy−1 ... (11),

where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (11), 'Fkl(y)' denotes a kth row and an lth column of the matrix F(y), and Sy denotes the horizontal and vertical sizes of the square matrices W(y) and D(y). Since the matrices W(y) and D(y) are square, their horizontal and vertical sizes are the same. 'Wnl(y)' denotes an nth row and an lth column of the square matrix W(y) described above in relation to Equation (5), and 'Dkn(y)' denotes a kth row and an nth column of the square matrix D(y) described above in relation to Equation (2).
If the interpolation is performed with increased bit depths of the filters F(x) and F(y), the accuracy of the filtering can be improved. Accordingly, according to an exemplary embodiment, the coefficients of the filters F(x) and F(y) are increased by multiplying them by a predetermined value, and an image can be interpolated using the filters including the increased coefficients. In this case, Equation (9) can be changed as follows:

P = ( F′(x) × REF × F′(y) ) / S² ... (12),

where 'F′(x)' denotes a filter scaled by multiplying the coefficients of the filter F(x) by a scaling factor 'S' and rounding the result of the multiplication to integers, and 'F′(y)' denotes a filter obtained by multiplying the coefficients of the filter F(y) by 'S' and rounding the result of the multiplication to integers. Since the interpolation is performed using the scaled filters, the pixel values at the interpolation locations are calculated and then divided by 'S²' to compensate for the scaling.
FIG. 15 illustrates 2D interpolation filters according to an exemplary embodiment. Specifically, FIG. 15 illustrates filter coefficients scaled according to Equation (12). That is, FIG. 15 illustrates the 2D interpolation filters F′(x) when 'αx' is 1/4, 1/2, and 3/4, where the 2D interpolation filters F′(x) are generated by multiplying the coefficients of the 2D interpolation filter F(x) by a scaling factor of 2^13. A 2D interpolation filter F′(y) for when 'αy' is 1/4, 1/2, and 3/4 can be obtained by transposing the filter F′(x).
With reference to FIG. 14, if the filter selector 1410 selects one of the 2D interpolation filters of FIG. 15 based on an interpolation location, then the interpolator 1420 generates pixel values at the interpolation location by calculating Equation (9) or (12).
1D filter

The 1D DCT according to Equation (7) can be expressed as the following determinant:

C = D × REF ... (13),

where 'C' denotes a (2M×1) matrix for the 2M coefficients described above in relation to Equation (7), and 'REF' denotes a (2M×1) matrix for the pixel values of integer pixel units described above in relation to Equation (7), i.e., P−(M−1) to PM. The total number of pixel values used for the interpolation, i.e., 2M, denotes the total number of taps of a 1D interpolation filter. 'D' denotes a square matrix for the 1D DCT, which can be defined as follows:

Dkl = (1/M) × cos( ((2l − 1 + 2M)kπ) / (4M) ), 0 ≤ k ≤ 2M−1, −(M−1) ≤ l ≤ M ... (14),

where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (14), 'Dkl' denotes a kth row and an lth column of the square matrix D for the 1D DCT expressed in Equation (13), and 'M' has been described above in relation to Equation (13).
The 1D inverse DCT using a plurality of base functions, the phases of which are shifted, according to Equation (8) can be expressed as the following determinant:

P(α) = W(α) × C ... (15),

where 'P(α)' is the same as 'P(α)' expressed in Equation (8), and 'W(α)' denotes a (1×2M) matrix for the 1D inverse DCT using the plurality of base functions, the phases of which are shifted. 'W(α)' can be defined as follows:

W0(α) = 1/2, Wk(α) = cos( ((2α − 1 + 2M)kπ) / (4M) ), 1 ≤ k ≤ 2M−1 ... (16),

where 'k' denotes an integer satisfying the condition expressed in Equation (16), and 'Wk(α)' denotes a kth column of the matrix W(α) described above in relation to Equation (15). A 1D interpolation filter F(α) for performing the 1D DCT and the 1D inverse DCT using the plurality of base functions, the phases of which are shifted, based on Equations (13) and (15), can be defined as follows:

P(α) = F(α) × REF, Fl(α) = Σk Wk(α) × Dkl, 0 ≤ k ≤ 2M−1, −(M−1) ≤ l ≤ M ... (17),

where 'k' and 'l' denote integers each satisfying the condition expressed in Equation (17), 'Fl(α)' denotes an lth column of the filter F(α), and 'W(α)' and 'D' are the same as 'W(α)' and 'D' expressed in Equations (15) and (13).
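The derivation of F(α) in Equations (13) to (17) can be reproduced numerically. A minimal Python sketch, assuming the tap layout l = −(M−1), ..., M described above (the function name is ours):

```python
import math

def dct_if_coeffs(alpha, num_taps):
    """Coefficients F_l(alpha) of the 1D DCT-based interpolation filter:
    a forward 2M-point DCT (matrix D) followed by an inverse DCT whose
    basis phases are shifted to the fractional location alpha (vector W).
    The tap index l runs over -(M-1)..M."""
    M = num_taps // 2
    coeffs = []
    for l in range(-(M - 1), M + 1):
        f = 0.0
        for k in range(2 * M):
            # D_kl per Equation (14); W_k(alpha) per Equation (16)
            d = math.cos((2 * l - 1 + 2 * M) * k * math.pi / (4 * M)) / M
            w = 0.5 if k == 0 else math.cos(
                (2 * alpha - 1 + 2 * M) * k * math.pi / (4 * M))
            f += w * d
        coeffs.append(f)
    return coeffs
```

For example, with 8 taps the filter at α = 0 reduces to a unit impulse at l = 0 (the DCT is simply inverted at an integer location), while the α = 1/2 filter is symmetric and its coefficients sum to 1.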
The filtering accuracy can be improved by increasing the bit depth of the 1D interpolation filter F(α), similarly to the 2D interpolation filter. An image can be interpolated by increasing the coefficients of the 1D interpolation filter F(α), i.e., by multiplying them by a predetermined value, and using a 1D interpolation filter F(α) including the increased coefficients.
For example, the interpolation can be performed by multiplying the 1D interpolation filter F(α) by a scaling factor '2^ScalingBits'. In this case, P(α) = F(α) × REF expressed in Equation (17) can be changed as follows:

P(α) = ( Σl F′l(α) × REFl + 2^(ScalingBits−1) ) >> ScalingBits ... (18),

where 'F′l(α)' denotes a filter scaled by multiplying the coefficients of the 1D interpolation filter F(α) by the scaling factor '2^ScalingBits' and rounding the result of the multiplication to integers, 'REFl' denotes an lth column of the matrix REF expressed in Equation (17), and '2^(ScalingBits−1)' denotes a value added to round the filtered pixel value. A pixel value at an interpolation location is calculated by multiplying the scaled filter F′l(α) by the matrix of pixel values; the result of the calculation is rounded by adding the value 2^(ScalingBits−1) to it, and the resulting value is shifted right by 'ScalingBits' bits to compensate for the scaling.
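The integer arithmetic of Equation (18) can be sketched in Python (names are ours; ScalingBits = 8 by default):

```python
def interpolate_scaled(pixels, coeffs, scaling_bits=8):
    """Apply a scaled integer interpolation filter per Equation (18):
    scale the real-valued coefficients, round them to integers, filter,
    add the rounding offset 2^(ScalingBits-1), then shift back down."""
    scale = 1 << scaling_bits
    int_coeffs = [round(c * scale) for c in coeffs]
    acc = sum(f * p for f, p in zip(int_coeffs, pixels))
    return (acc + (1 << (scaling_bits - 1))) >> scaling_bits
```

For instance, with the bilinear coefficients [0.5, 0.5] and pixels [100, 201], the exact result 150.5 is rounded up to 151 by the added offset.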
The rounding used in the equations described above is only one example of a method of quantizing filter coefficients. To generalize a method of quantizing filter coefficients, for ease of understanding, the filter coefficients can be modified and optimized as expressed in the following Equations (19) and (20):

(Fl(α) − ε) ≤ F′l(α) ≤ (Fl(α) + ε) ... (19),

where 'Fl(α)' denotes an lth coefficient of a filter that is not quantized, 'F′l(α)' denotes the lth coefficient of the filter that is quantized, and 'ε' denotes any real number that can be selected according to a degree of quantization and can be, for example, 0.2 × Fl(α). According to Equation (19), when the lth coefficient Fl(α), which is a real number, is calculated according to Equations (13) to (17), the lth coefficient Fl(α) is changed to an lth coefficient F′l(α) satisfying Equation (19) by quantizing the lth coefficient Fl(α).
When the filter coefficients are scaled by a predetermined scaling factor, the quantization according to Equation (19) can be changed as follows:

(p × Fl(α) − p × ε) ≤ F′l(α) ≤ (p × Fl(α) + p × ε) ... (20),

where 'p' denotes a scaling factor (which can be '2^ScalingBits'), and 'p × Fl(α)' denotes a scaled filter coefficient. According to Equation (20), 'p × Fl(α)' is converted to 'F′l(α)'.
FIGS. 16A to 16F illustrate 1D interpolation filters according to exemplary embodiments. In FIGS. 16A to 16F, the scaled filters described above in relation to Equation (18) are illustrated according to the number of taps and an interpolation location. Specifically, FIGS. 16A to 16F illustrate a 4-tap filter, a 6-tap filter, an 8-tap filter, a 10-tap filter, a 12-tap filter, and a 14-tap filter, respectively. In FIGS. 16A to 16F, the scaling factor for the filter coefficients is set to '256', that is, 'ScalingBits' is set to '8'.
In FIGS. 16A to 16F, the filter coefficients include coefficients for high-frequency components, whereby the interpolation and prediction accuracy can be increased but the image compression efficiency can be degraded due to the high-frequency components. However, interpolation is performed to increase the image compression efficiency, as described above with reference to FIG. 9. To solve this problem, the filter coefficients illustrated in FIGS. 16A to 16F can be adjusted to increase the image compression efficiency.
For example, the absolute value of each of the filter coefficients can be reduced, and the filter coefficients at the midpoint of each filter can be multiplied by a larger weight than the weights assigned to the other filter coefficients. For example, with reference to FIG. 16B, in the 6-tap filter for generating pixel values at the interpolation location 1/2, the filter coefficients {11, −43, 160, 160, −43, 11} are adjusted in such a way that the absolute values of '11', '−43', and '160' are reduced and only the '160' at the midpoint of the 6-tap filter is multiplied by a larger weight.
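The adjustment described above is only loosely specified in the text; the following Python sketch uses illustrative shrink/boost factors of our own choosing to show the shape of such an adjustment, not the patent's actual tuning, and assumes an even tap count:

```python
def adjust_for_compression(coeffs, shrink=0.9, boost=1.1):
    """Hedged sketch of the adjustment described above: reduce every
    tap's magnitude, then weight the two midpoint taps more heavily
    to damp the high-frequency response. The factors shrink and boost
    are illustrative, not values from the patent."""
    out = [c * shrink for c in coeffs]
    mid = len(coeffs) // 2
    out[mid - 1] *= boost      # the two central taps of an
    out[mid] *= boost          # even-length filter
    return out
```

Applied to the 6-tap half-pel filter {11, −43, 160, 160, −43, 11}, every tap shrinks while the central pair retains most of its weight.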
FIGS. 17A to 17Y illustrate optimized 1D interpolation filters according to exemplary embodiments. The filters illustrated in FIGS. 16A to 16F can also be adjusted to facilitate implementing the filters in hardware. When Equation (17) or (18) is calculated using a computer, the filter coefficients can be optimized to minimize arithmetic operations, for example, binary shifts and additions.
In FIGS. 17A and 17B, the amount of calculation necessary to perform interpolation filtering with each filter is indicated in units of both 'addition' and 'shift'. Each of the filters of FIGS. 17A to 17M includes coefficients optimized to minimize the 'addition' and 'shift' units at a corresponding interpolation location.
FIGS. 17A and 17B illustrate a 6-tap filter and a 12-tap filter, respectively, optimized to interpolate an image with 1/4-pixel precision, scaled by an 8-bit shift. FIGS. 17C, 17D, and 17E illustrate 8-tap filters optimized to interpolate an image with 1/4-pixel precision, scaled by an 8-bit shift. The 8-tap filters of FIGS. 17C to 17E are classified according to whether at least one of the filter coefficients is optimized and according to the method of optimizing the filter coefficients. FIGS. 17F and 17G illustrate 8-tap filters optimized to interpolate an image with 1/4-pixel precision, scaled by a 6-bit shift. The filters of FIGS. 17F and 17G can be classified according to the method of optimizing the filter coefficients.
FIG. 17H illustrates a 6-tap filter optimized to interpolate an image with 1/8-pixel precision, scaled by a 6-bit shift. FIG. 17I illustrates a 6-tap filter optimized to interpolate an image with 1/8-pixel precision, scaled by an 8-bit shift.
FIGS. 17J and 17K illustrate 4-tap filters optimized to interpolate an image with 1/8-pixel precision, scaled by a 5-bit shift. The filters of FIGS. 17J and 17K can be classified according to the method of optimizing the filter coefficients. FIGS. 17L and 17M illustrate 4-tap filters optimized to interpolate an image with 1/8-pixel precision, scaled by an 8-bit shift. The filters of FIGS. 17L and 17M can also be classified according to the method of optimizing the filter coefficients.
FIGS. 17N to 17Y illustrate a 4-tap filter, a 6-tap filter, an 8-tap filter, a 10-tap filter, and a 12-tap filter optimized to interpolate an image with 1/8-pixel precision, scaled by an 8-bit shift, respectively. The filters of FIGS. 17N to 17Y are different from the filters of FIGS. 17A to 17M in that some of the filter coefficients are different, but are the same as the filters of FIGS. 17A to 17M in that a filter coefficient for interpolating at an interpolation location of 1/8 is symmetric with a filter coefficient for interpolating at an interpolation location of 7/8, a filter coefficient for interpolating at an interpolation location of 2/8 is symmetric with a filter coefficient for interpolating at an interpolation location of 6/8, and a filter coefficient for interpolating at an interpolation location of 3/8 is symmetric with a filter coefficient for interpolating at an interpolation location of 5/8.
FIGS. 23A to 23E illustrate methods of scaling and rounding in relation to a 1D interpolation filter, in accordance with exemplary embodiments.
As described above, the interpolation filtering uses DCT and inverse DCT, and a 1D interpolation filter therefore includes filter coefficients, the absolute values of which are less than '1'. Therefore, as described above in relation to Equation (18), the filter coefficients are scaled by multiplying them by '2^ScalingBits', rounded to integers, respectively, and then used for interpolation.
FIG. 23A illustrates filter coefficients scaled by '2^ScalingBits' when the 1D interpolation filters are 12-tap filters. With reference to FIG. 23A, the filter coefficients have been scaled but not yet rounded to integers.
FIG. 23B illustrates the result of rounding the scaled filter coefficients of FIG. 23A to integers. With reference to FIG. 23B, among the 1D interpolation filters there are some filters for which the sum of the rounded scaled filter coefficients is less than '256'. Specifically, the sum of all the filter coefficients of each of a filter for interpolating pixel values at an interpolation location of 1/8, a filter for interpolating pixel values at an interpolation location of 3/8, a filter for interpolating pixel values at an interpolation location of 5/8, and a filter for interpolating pixel values at an interpolation location of 7/8 is less than '256'. That is, the sum of the filter coefficients of a filter scaled by an 8-bit shift should be '256', but an error occurs during the rounding of the filter coefficients.
That the sums of the filter coefficients are not the same means that the interpolated pixel values may vary according to an interpolation location. To solve this problem, a normalized filter can be generated by adjusting the filter coefficients. FIG. 23C illustrates normalized filters generated by adjusting the filter coefficients of the filters illustrated in FIG. 23B.
A comparison of FIGS. 23B and 23C reveals that the sums of all the filter coefficients are normalized to '256' by adjusting some of the filter coefficients of the filter for interpolating pixel values at the interpolation location of 1/8, the filter for interpolating pixel values at the interpolation location of 3/8, the filter for interpolating pixel values at the interpolation location of 5/8, and the filter for interpolating pixel values at the interpolation location of 7/8.
FIGS. 23D and 23E illustrate 8-tap filters that are scaled, and the result of normalizing the 8-tap filters, respectively. If the scaled 8-tap filters are as illustrated in FIG. 23D, then the result of rounding the filter coefficients of the 8-tap filters of FIG. 23D to integers and normalizing the rounded result in such a way that the sums of the filter coefficients are '256' can be as illustrated in FIG. 23E. With reference to FIG. 23E, some of the filter coefficients are different from the result of rounding the filter coefficients of the 8-tap filters illustrated in FIG. 23D. This means that some of the filter coefficients are adjusted in such a way that the sums of all the filter coefficients are '256'.
As illustrated in FIGS. 23B and 23C, at least one of the filter coefficients obtained by scaling and rounding may be different from the result of normalizing those filter coefficients. Accordingly, it would be obvious to those of ordinary skill in the art that a 1D interpolation filter, at least one of the filter coefficients of which is changed within a predetermined error interval, for example, ±1 or ±2, from among the filters illustrated in FIGS. 16A to 16F or the filters illustrated in FIGS. 17A to 17M, should be understood to fall within the scope of the exemplary embodiments.
If the filter selector 1410 selects one of the filters illustrated in FIGS. 16A to 16F, FIGS. 17A to 17Y, or FIGS. 23A to 23E based on an interpolation location, then the interpolator 1420 generates pixel values at the interpolation location by calculating Equation (17) or (18). Various other factors (such as an inter prediction direction, a type of loop filter, or a pixel position in a block) can further be considered for the filter selector 1410 to select one of the filters. The size, that is, the number of taps, of a filter to be selected can be determined either by the size of a block to be interpolated or by the filtering direction for interpolation. For example, a large filter can be selected when a block to be interpolated is large, and a small filter can be selected to minimize memory access when interpolation is to be performed in the vertical direction.
In accordance with an exemplary embodiment, the information regarding the filter selection can additionally be encoded. For example, if an image was interpolated during encoding of the image, a decoding side must know the type of filter used to interpolate the image in order to interpolate and decode the image using the same filter used during image encoding. To this end, information specifying the filter used to interpolate the image can be encoded together with the image. However, when the filter selection is made based on the result of previous encoding of another block, i.e., context, the information regarding the filter selection does not need to be additionally encoded.
If a pixel value generated by performing the interpolation is less than a minimum pixel value or greater than a maximum pixel value, then the pixel value is changed to the minimum or maximum pixel value. For example, if the generated pixel value is less than a minimum pixel value of 0, it is changed to '0', and if the generated pixel value is greater than a maximum pixel value of 255, it is changed to '255'.
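This clamping step can be sketched directly (the generalization to an arbitrary bit depth is our own addition; the text describes the 8-bit case):

```python
def clip_pixel(value, bit_depth=8):
    """Clamp an interpolated value to the valid pixel range
    [0, 2^bit_depth - 1], e.g. [0, 255] for 8-bit images."""
    max_val = (1 << bit_depth) - 1
    return max(0, min(max_val, value))
```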
When interpolation is performed to precisely perform inter prediction during encoding of an image, the information specifying an interpolation filter can be encoded together with the image. In other words, the information regarding the type of filter selected by the filter selector 1410 can be encoded as an image parameter together with the image. Since a different type of interpolation filter can be selected in coding units, or in frame or slice units, the information regarding the filter selection can also be encoded in the coding units or the frame or slice units, together with the image. However, if the filter selection is made according to an implicit rule, the information regarding the filter selection need not be encoded together with the image.
The methods for performing the interpolation by the interpolator 1420 in accordance with exemplary embodiments will now be described in detail with reference to FIGS. 18A, 18B, and 19.
FIGS. 18A and 18B illustrate methods of interpolating pixel values in various directions using a 1D interpolation filter, in accordance with exemplary embodiments. With reference to FIGS. 18A and 18B, pixel values at interpolation locations in various directions can be generated using a 1D interpolation filter that can perform 1D DCT on 1D pixel values and perform 1D inverse DCT on the result of the 1D DCT using a plurality of base functions, the phases of which are shifted.
With reference to FIG. 18A, a pixel value P(α) 1800 at an interpolation location α in the vertical direction can be generated by interpolating between a pixel value P0 1802 and a pixel value P1 1804 that are adjacent in the vertical direction. In comparison with the 1D interpolation method of FIG. 13, the interpolation is performed using pixel values 1810 and 1820 arranged in the vertical direction instead of pixel values 1310 and 1320 arranged in the horizontal direction, but the interpolation method described above in relation to Equations (13) to (18) can also be applied to the method of FIG. 18A.
Similarly, compared to the 1D interpolation method of FIG. 13, in the method of FIG. 18B the interpolation is performed using pixel values 1840 and 1850 arranged in a diagonal direction instead of the pixel values 1310 and 1320 arranged in the horizontal direction, but a pixel value P(α) 1830 at an interpolation location α can be generated by interpolating between two adjacent pixel values 1832 and 1834, as described above in relation to Equations (13) to (18).
FIG. 19A illustrates a 2D interpolation method according to an exemplary embodiment. With reference to FIG. 19A, pixel values 1910 to 1950 of fractional pixel units can be generated based on pixel values 1900 to 1906 of integer pixel units.
Specifically, first, the filter selector 1410 of the image interpolation apparatus 1400 illustrated in FIG. 14 selects a 1D interpolation filter for generating the pixel values 1910, 1920, 1930, and 1940 of fractional pixel units that are present between the pixel values 1900 to 1906 of integer pixel units. As described above with reference to FIG. 14, a different filter can be selected according to an interpolation location. For example, different filters may be selected for the pixel values 1912, 1914, and 1916 of fractional pixel units, respectively, to interpolate the pixel value 1910 between the two upper pixel values 1900 and 1902. For example, a filter for generating the pixel value 1914 of a 1/2 pixel unit can be different from a filter for generating the pixel values 1912 and 1916 of a 1/4 pixel unit. Furthermore, the pixel values 1912 and 1916 of the same 1/4 pixel unit can be generated using different filters, respectively. As described above with reference to FIG. 14, the degree by which the phases of the base functions used to perform the inverse DCT are shifted varies according to an interpolation location, and accordingly, a filter for performing the interpolation is selected according to an interpolation location.
Similarly, the pixel values 1920, 1930, and 1940 of different fractional pixel units present between the pixel values 1900 to 1906 of integer pixel units can be generated based on a 1D interpolation filter selected according to an interpolation location.
If the filter selector 1410 selects a filter for generating the pixel values 1910, 1920, 1930, and 1940 of the fractional pixel units present between the pixel values 1900 to 1906 of integer pixel units, then the interpolator 1420 generates the pixel values 1910, 1920, 1930, and 1940 of the fractional pixel units at the respective interpolation locations, based on the selected filter. According to an exemplary embodiment, since a filter for generating a pixel value at each of the interpolation locations has been previously calculated, the pixel values at all the interpolation locations can be generated based on the pixel values of integer pixel units.
In other words, since the pixel values 1912 and 1916 of the 1/4 pixel unit can be generated directly from the pixel values 1900 and 1902 of integer pixel units, it is not necessary to first calculate the pixel value 1914 of the 1/2 pixel unit and then generate the pixel values 1912 and 1916 of the 1/4 pixel unit based on the pixel values 1900 and 1902 of the integer pixel units and the pixel value 1914 of the 1/2 pixel unit. Since the image interpolation does not need to be performed consecutively according to the size of a pixel unit, the image interpolation can be performed at high speed.
According to another exemplary embodiment, an interpolation method based on an interpolation location according to an exemplary embodiment can be combined with a related-art interpolation method. For example, a pixel value of a 1/2 pixel unit and a pixel value of a 1/4 pixel unit can be generated directly from the pixel values 1900 and 1902 of integer pixel units using an interpolation filter according to an exemplary embodiment, and a pixel value of a 1/8 pixel unit can be generated from the pixel value of the 1/4 pixel unit using a related-art linear interpolation filter. Alternatively, only the pixel value of the 1/2 pixel unit can be generated directly from the pixel values 1900 and 1902 of the integer pixel units using the interpolation filter according to an exemplary embodiment, the pixel value of the 1/4 pixel unit can be generated from the pixel value of the 1/2 pixel unit using the related-art linear interpolation filter, and the pixel value of the 1/8 pixel unit can be generated from the pixel value of the 1/4 pixel unit using the related-art linear interpolation filter.
If all the pixel values 1910, 1920, 1930, and 1940 of the fractional pixel units present between the pixel values 1900 to 1906 of the integer pixel units are generated by performing the interpolation, then the filter selector 1410 selects a new 1D interpolation filter for the interpolation between the pixel values 1910, 1920, 1930, and 1940 of the fractional pixel units. In this case, a different filter is selected according to an interpolation location, similarly to the way in which a filter is selected to interpolate between the pixel values 1900 to 1906 of the integer pixel units.
The interpolator 1420 generates the pixel value 1950 of a fractional pixel unit corresponding to each interpolation location using the filter selected by the filter selector 1410. That is, the pixel value 1950 of a fractional pixel unit is generated between the pixel values 1910, 1920, 1930, and 1940 of the fractional pixel units.
FIG. 19B illustrates a 2D interpolation method using a 1D interpolation filter, according to another exemplary embodiment. With reference to FIG. 19B, a pixel value at a 2D interpolation location can be generated by repeatedly performing interpolation in the horizontal and vertical directions using the 1D interpolation filter.
Specifically, a pixel value Temp(i,j) is generated by interpolating between a pixel value REF(i,j) 1960 and a pixel value REF(i+1,j) 1964 of integer pixel units in the horizontal direction. In addition, a pixel value Temp(i,j+1) is generated by interpolating between a pixel value REF(i,j+1) 1962 and a pixel value REF(i+1,j+1) 1966 in the horizontal direction. Then, a pixel value P(i,j) at a 2D interpolation location is generated by interpolating between the pixel value Temp(i,j) and the pixel value Temp(i,j+1) in the vertical direction.
The 1D interpolation filter can be a filter for performing the 1D DCT and performing the 1D inverse DCT based on a plurality of base functions, the phases of which are shifted. Furthermore, the 1D interpolation filter may be a scaled filter as described above in relation to Equation (18). When the interpolation is performed in the horizontal and vertical directions based on the scaled filter, the interpolation can be performed by calculating the following Equation (21):

Temp(i,j) = ( Σl F′l(αx) × REF(i+l,j) + 2^(StageBits1−1) ) >> StageBits1,
P(i,j) = ( Σl F′l(αy) × Temp(i,j+l) + 2^(StageBits2−1) ) >> StageBits2 ... (21),

where F′l(αx) and F′l(αy) correspond to F′l(α) expressed in Equation (18). However, since a vertical interpolation location may be different from a horizontal interpolation location, a different 1D interpolation filter may be selected according to the interpolation location.
When the horizontal interpolation and the vertical interpolation are performed, a first bit shift is performed according to StageBits1 after the horizontal interpolation, and a second bit shift is performed according to StageBits2 after the vertical interpolation (TotalBits = StageBits1 + StageBits2). If StageBits1 is set to zero, the first bit shift is not performed.
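The two-stage filtering of Equation (21) can be sketched in Python (names are ours; the rounding offsets follow the pattern of Equation (18)):

```python
def interpolate_2d(ref, fx, fy, stage_bits1, stage_bits2):
    """Separable 2D interpolation in the spirit of Equation (21):
    a horizontal pass with the scaled integer filter fx, shifted down
    by stage_bits1, then a vertical pass with fy, shifted down by
    stage_bits2 (TotalBits = stage_bits1 + stage_bits2).
    ref is a list of rows; the filters are assumed to span ref."""
    temp = []
    for row in ref:                     # horizontal pass: one Temp per row
        acc = sum(f * p for f, p in zip(fx, row))
        if stage_bits1:                 # skipped when StageBits1 == 0
            acc = (acc + (1 << (stage_bits1 - 1))) >> stage_bits1
        temp.append(acc)
    acc = sum(f * t for f, t in zip(fy, temp))   # vertical pass
    return (acc + (1 << (stage_bits2 - 1))) >> stage_bits2
```

For a bilinear example with 8-bit scaled filters [128, 128] in both directions, deferring the entire shift to the second stage (StageBits1 = 0, StageBits2 = 16) produces the same result as shifting 8 bits per stage for this input.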
Therefore, if the scaling factor for F′l(αy) is '2^bit1' and the scaling factor for F′l(αx) is '2^bit2' in Equation (21), then TotalBits = bit1 + bit2. FIG. 19C illustrates a 2D interpolation method using a 1D interpolation filter, in accordance with another exemplary embodiment. With reference to FIG. 19C, a pixel value at a 2D interpolation location can be generated by repeatedly performing interpolation in the vertical and horizontal directions using the 1D interpolation filter.
Specifically, a pixel value Temp(i,j) is generated by interpolating between a pixel value REF(i,j) 1960 and a pixel value REF(i,j+1) 1962 of integer pixel units in the vertical direction. Then, a pixel value Temp(i+1,j) is generated by interpolating between a pixel value REF(i+1,j) 1964 and a pixel value REF(i+1,j+1) 1966 in the vertical direction. Then, a pixel value P(i,j) at a 2D interpolation location is generated by interpolating between the pixel value Temp(i,j) and the pixel value Temp(i+1,j) in the horizontal direction. When the interpolation is performed in the horizontal and vertical directions based on a scaled filter, the interpolation can be performed by calculating the following Equation (22):

Temp(i,j) = ( Σl F′l(αy) × REF(i,j+l) + 2^(StageBits1−1) ) >> StageBits1,
P(i,j) = ( Σl F′l(αx) × Temp(i+l,j) + 2^(StageBits2−1) ) >> StageBits2 ... (22).

FIG. 20 is a flow diagram illustrating an image interpolation method according to an exemplary embodiment. With reference to FIG. 20, in operation 2010, the image interpolation apparatus 900 of FIG. 9 transforms pixel values in a spatial domain using a plurality of base functions having different frequencies. The pixel values may be a plurality of pixel values included in a predetermined block, or may be rows or columns of pixel values arranged in the horizontal or vertical direction.
Here, the transform can be the 2D DCT or the 1D DCT described above in relation to the transformer 910 and Equations (1), (2), (3), and (7).
In operation 2020, the image interpolation apparatus 900 shifts the phases of the plurality of base functions used in operation 2010. The phases of the plurality of base functions can be shifted according to a 2D interpolation location determined by 'αx' and 'αy', or according to a 1D interpolation location determined by 'α'.
In operation 2030, the image interpolation apparatus 900 inversely transforms the DCT coefficients, which were obtained by transforming the pixel values in the spatial domain in operation 2010, using the plurality of base functions whose phases were shifted in operation 2020. That is, pixel values at the interpolation locations are generated by inversely transforming the DCT coefficients obtained in operation 2010.
If the transform performed in operation 2010 is the 2D DCT, then in operation 2030 the image interpolation apparatus 900 generates pixel values at 2D interpolation locations by performing a 2D inverse DCT on the DCT coefficients using a plurality of cosine functions whose phases are shifted.
If the transform performed in operation 2010 is the 1D DCT performed on rows or columns of pixel values, then in operation 2030 the image interpolation apparatus 900 generates pixel values at 1D interpolation locations by performing a 1D inverse DCT on the DCT coefficients using a plurality of cosine functions whose phases are shifted.
The plurality of base functions whose phases are shifted, and the inverse transform based on them, have been described above in relation to the inverse transformer 920 and Equations (4), (5), (6), and (8).
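Operations 2010 to 2030 can be sketched numerically as follows. This is a hedged illustration under the assumption of an orthonormal 1D DCT-II basis; the function name dct_interpolate and the choice of evaluating the shifted basis between the two centre samples are assumptions for the example, not the patent's exact formulation.

```python
import math

def dct_interpolate(p, alpha):
    """Value at fractional offset alpha (0 <= alpha < 1) past sample N//2 - 1:
    forward 1D DCT of p, then inverse DCT with the cosine basis phases shifted
    to the fractional location."""
    N = len(p)
    # Operation 2010: forward DCT-II coefficients (orthonormal scaling).
    C = []
    for u in range(N):
        w = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
        C.append(w * sum(p[k] * math.cos(math.pi * (2 * k + 1) * u / (2 * N))
                         for k in range(N)))
    # Operations 2020/2030: inverse DCT with basis phases shifted to x.
    x = N // 2 - 1 + alpha
    val = 0.0
    for u in range(N):
        w = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
        val += w * C[u] * math.cos(math.pi * (2 * x + 1) * u / (2 * N))
    return val
```

With alpha = 0 the phase shift vanishes and the round trip reproduces the input sample exactly; a constant signal interpolates to the same constant at any alpha.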
FIG. 21 is a flow chart illustrating an image interpolation method according to another exemplary embodiment. With reference to FIG. 21, in operation 2110, the image interpolation apparatus 1400 of FIG. 14 selects a filter for performing the transform and the inverse transform based on a plurality of base functions whose phases are shifted, according to an interpolation location. For example, a filter for performing the DCT and the inverse DCT based on a plurality of cosine functions whose phases are shifted is selected according to an interpolation location. If the pixel values to be interpolated are included in a predetermined block, then a filter for performing the 2D DCT and the 2D inverse DCT is selected based on 'αx' and 'αy'. If the pixel values to be interpolated are rows or columns of pixel values, then a filter for performing the 1D DCT and the 1D inverse DCT is selected based on 'α'. One of the filters described above with reference to FIG. 15, FIGS. 16A to 16F, and FIG. 17 can be selected according to an interpolation location. However, the size of a filter can be determined by various factors other than an interpolation location, as described above in relation to the filter selector 1410 and with reference to FIG. 17.
In operation 2120, the image interpolation apparatus 1400 performs interpolation based on the filter selected in operation 2110. Pixel values at a 2D interpolation location, or a pixel value at a 1D interpolation location, can be generated by filtering pixel values in the spatial domain using the filter selected in operation 2110. Interpolation performed using filtering has been described above in relation to Equations (9) to (19).
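The location-dependent selection of operations 2110 and 2120 can be sketched as follows. The filter table is illustrative: the half-pel taps follow a classic 6-tap design and the quarter-pel taps are invented for the example; none of these are the patent's DCT-derived coefficients.

```python
# Hypothetical sketch of selecting a different precomputed filter per
# fractional interpolation location, then filtering in the spatial domain.
FILTERS = {
    # location (in quarter-pel units) -> (taps, shift); taps sum to 2**shift
    1: ([2, -9, 57, 19, -7, 2], 6),   # 1/4-pel (illustrative taps)
    2: ([1, -5, 20, 20, -5, 1], 5),   # 1/2-pel (classic 6-tap design)
    3: ([2, -7, 19, 57, -9, 2], 6),   # 3/4-pel (illustrative taps)
}

def select_filter(location):
    """Pick the filter matched to the fractional interpolation location."""
    return FILTERS[location]

def interpolate(samples, location):
    """Filter 6 spatial-domain samples with the selected filter, with rounding."""
    taps, shift = select_filter(location)
    acc = sum(t * s for t, s in zip(taps, samples))
    return (acc + (1 << (shift - 1))) >> shift
```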
FIG. 22 is a flow diagram illustrating an image interpolation method according to another exemplary embodiment. With reference to FIG. 22, in operation 2210, the image interpolation apparatus 1400 of FIG. 14 selects a different filter for interpolation between the pixel values 1900 to 1906 of the whole pixel units, according to an interpolation location. In the current exemplary embodiment, the pixel values 1910, 1920, 1930, and 1940 of at least one fractional pixel unit can be generated directly from the pixel values 1900 to 1906 of the whole pixel units. Accordingly, the image interpolation apparatus 1400 selects the interpolation filters corresponding to the interpolation locations, respectively, in operation 2210.
In operation 2220, the image interpolation apparatus 1400 generates the pixel values 1910, 1920, 1930, and 1940 of at least one fractional pixel unit by interpolating between the pixel values 1900 to 1906 of the whole pixel units, based on the different filters selected according to each of the interpolation locations in operation 2210.
In operation 2230, the image interpolation apparatus 1400 selects a different filter for interpolation between the pixel values 1910, 1920, 1930, and 1940 of at least one fractional pixel unit generated in operation 2220, according to an interpolation location. A different filter for generating the pixel values 1950 of another fractional pixel unit illustrated in FIG. 19, which are present between the pixel values 1910, 1920, 1930, and 1940 of at least one fractional pixel unit, is selected according to an interpolation location.
In operation 2240, the image interpolation apparatus 1400 generates the pixel values 1950 of another fractional pixel unit by interpolating between the pixel values 1910, 1920, 1930, and 1940 of at least one fractional pixel unit, based on the filter selected in operation 2230.
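The two-stage scheme of FIG. 22 can be sketched minimally as follows. Simple rounding averages stand in for the patent's location-dependent filters; the names stage_one/stage_two are hypothetical.

```python
# Sketch of FIG. 22: stage one generates fractional samples directly from
# whole-pel values; stage two applies another filter between the stage-one
# results to produce further fractional samples.

def stage_one(ref_row):
    """Generate half-pel samples between consecutive whole-pel values."""
    return [(ref_row[k] + ref_row[k + 1] + 1) >> 1 for k in range(len(ref_row) - 1)]

def stage_two(half_pels):
    """Interpolate between stage-one fractional samples (e.g. quarter-pel)."""
    return [(half_pels[k] + half_pels[k + 1] + 1) >> 1 for k in range(len(half_pels) - 1)]
```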
While the exemplary embodiments have been particularly shown and described above, it will be understood by one of ordinary skill in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the inventive concept as defined by the following claims and their equivalents. In addition, a system according to an exemplary embodiment can be implemented as computer-readable code on a computer-readable recording medium.
For example, each of an apparatus for encoding an image, an apparatus for decoding an image, an image encoder, and an image decoder according to exemplary embodiments, as illustrated in FIGS. 1, 2, 4, 5, 9, and 14, may include a bus coupled to the units thereof, at least one processor connected to the bus, and a memory that is connected to the bus to store a received or generated command or message and is coupled to the at least one processor to execute the command.
The computer-readable recording medium can be any data storage device that can store data to be read by a computer system. Examples of computer-readable recording media include read-only memory (ROM), random access memory (RAM), CD-ROMs, magnetic tapes, floppy discs, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed manner.
It is noted that, as of this date, the best method known to the applicant for carrying out the aforementioned invention is that which is clear from the present description of the invention.

Claims (11)

CLAIMS Having described the invention as above, the content of the following claims is claimed as property:
1. A method for interpolating an image, characterized in that it comprises: selecting a first filter, from among a plurality of different filters, for interpolation between pixel values of whole pixel units, according to an interpolation location; and generating at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the whole pixel units using the selected first filter.
2. The method in accordance with the claim 1, characterized in that it additionally comprises: selecting a second filter, from a plurality of different filters, for interpolation between at least one generated pixel value of at least one fractional pixel unit, according to an interpolation location; and interpolating between at least one generated pixel value of at least one fractional pixel unit using the second selected filter.
3. The method according to claim 2, characterized in that the first filter for the interpolation between the pixel values of the whole pixel units is a spatial-domain filter for transforming the pixel values of the whole pixel units using a plurality of base functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the pixel values of the whole pixel units, using the plurality of base functions, the phases of which are shifted.
4. The method according to claim 3, characterized in that the second filter for the interpolation between the at least one generated pixel value of the at least one fractional pixel unit is a spatial-domain filter for transforming the at least one generated pixel value of the at least one fractional pixel unit using a plurality of base functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the at least one generated pixel value of the at least one fractional pixel unit, using the plurality of base functions, the phases of which are shifted.
5. An apparatus for interpolating an image, characterized in that it comprises: a filter selector which selects a first filter, from among a plurality of different filters, for interpolation between pixel values of whole pixel units, according to an interpolation location; and an interpolator which generates at least one pixel value of at least one fractional pixel unit by interpolating between the pixel values of the whole pixel units using the selected first filter.
6. The apparatus according to claim 5, characterized in that: the filter selector selects a second filter, from among a plurality of different filters, for interpolation between the at least one generated pixel value of the at least one fractional pixel unit, according to an interpolation location; and the interpolator interpolates between the at least one generated pixel value of the at least one fractional pixel unit using the selected second filter.
7. The apparatus according to claim 6, characterized in that the first filter for the interpolation between the pixel values of the whole pixel units is a spatial-domain filter for transforming the pixel values of the whole pixel units using a plurality of base functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the pixel values of the whole pixel units, using the plurality of base functions, the phases of which are shifted.
8. The apparatus according to claim 7, characterized in that the second filter for the interpolation between the at least one generated pixel value of the at least one fractional pixel unit is a spatial-domain filter for transforming the at least one generated pixel value of the at least one fractional pixel unit using a plurality of base functions having different frequencies, and inversely transforming a plurality of coefficients, which are obtained by transforming the at least one generated pixel value of the at least one fractional pixel unit, using the plurality of base functions, the phases of which are shifted.
9. A computer-readable recording medium, characterized in that it has recorded thereon a computer program for executing the method according to any one of claims 1 to 4.
10. The method according to claim 1, characterized in that: the selected filter is scaled by multiplying coefficients of the selected filter by a scaling factor, and the generating of the at least one pixel value of the at least one fractional pixel unit comprises interpolating between the pixel values of the whole pixel units using the scaled selected filter.
11. The method according to claim 1, characterized in that the selection of the first filter comprises selecting the first filter according to the interpolation location and at least one of a size of a block comprising the whole pixel units and a filtering direction for interpolation.
MX2012011646A 2010-04-05 2011-04-05 Method and apparatus for performing interpolation based on transform and inverse transform. MX2012011646A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US32084710P 2010-04-05 2010-04-05
US36749810P 2010-07-26 2010-07-26
KR1020100095956A KR101682147B1 (en) 2010-04-05 2010-10-01 Method and apparatus for interpolation based on transform and inverse transform
PCT/KR2011/002388 WO2011126287A2 (en) 2010-04-05 2011-04-05 Method and apparatus for performing interpolation based on transform and inverse transform

Publications (1)

Publication Number Publication Date
MX2012011646A true MX2012011646A (en) 2012-11-29

Family

ID=45028064

Family Applications (1)

Application Number Title Priority Date Filing Date
MX2012011646A MX2012011646A (en) 2010-04-05 2011-04-05 Method and apparatus for performing interpolation based on transform and inverse transform.

Country Status (13)

Country Link
US (6) US8676000B2 (en)
EP (1) EP2556675A4 (en)
JP (2) JP2013524678A (en)
KR (6) KR101682147B1 (en)
CN (5) CN105959698B (en)
AU (1) AU2011239142B2 (en)
BR (1) BR112012025307B1 (en)
CA (5) CA2887942C (en)
MX (1) MX2012011646A (en)
MY (1) MY153844A (en)
RU (5) RU2612611C2 (en)
WO (1) WO2011126287A2 (en)
ZA (5) ZA201208292B (en)

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101682147B1 (en) 2010-04-05 2016-12-05 삼성전자주식회사 Method and apparatus for interpolation based on transform and inverse transform
MY182191A (en) * 2010-07-09 2021-01-18 Samsung Electronics Co Ltd Image interpolation method and apparatus
US8848779B2 (en) * 2010-07-15 2014-09-30 Sharp Laboratories Of America, Inc. Method of parallel video coding based on block size
US8873617B2 (en) * 2010-07-15 2014-10-28 Sharp Laboratories Of America, Inc. Method of parallel video coding based on same sized blocks
US8855188B2 (en) * 2010-07-15 2014-10-07 Sharp Laboratories Of America, Inc. Method of parallel video coding based on mapping
US9172972B2 (en) 2011-01-05 2015-10-27 Qualcomm Incorporated Low complexity interpolation filtering with adaptive tap size
KR20130099242A (en) 2011-01-07 2013-09-05 노키아 코포레이션 Motion prediction in video coding
US9049454B2 (en) 2011-01-19 2015-06-02 Google Technology Holdings Llc. High efficiency low complexity interpolation filters
US20120224639A1 (en) * 2011-03-03 2012-09-06 General Instrument Corporation Method for interpolating half pixels and quarter pixels
US9313519B2 (en) 2011-03-11 2016-04-12 Google Technology Holdings LLC Interpolation filter selection using prediction unit (PU) size
EP2724534A2 (en) 2011-06-24 2014-04-30 Motorola Mobility LLC Selection of phase offsets for interpolation filters for motion compensation
EP2727358A1 (en) * 2011-07-01 2014-05-07 Motorola Mobility LLC Joint sub-pixel interpolation filter for temporal prediction
US8948248B2 (en) * 2011-07-21 2015-02-03 Luca Rossato Tiered signal decoding and signal reconstruction
FR2980068A1 (en) * 2011-09-13 2013-03-15 Thomson Licensing METHOD FOR ENCODING AND RECONSTRUCTING A BLOCK OF PIXELS AND CORRESPONDING DEVICES
AU2012200345B2 (en) * 2012-01-20 2014-05-01 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding the significance map residual coefficients of a transform unit
CN102833550A (en) * 2012-09-03 2012-12-19 北京大学深圳研究生院 Low-complexity sub-pixel interpolation filter
US20140078394A1 (en) * 2012-09-17 2014-03-20 General Instrument Corporation Selective use of chroma interpolation filters in luma interpolation process
CN104769950B (en) 2012-09-28 2018-11-13 Vid拓展公司 Crossing plane filtering for the carrier chrominance signal enhancing in Video coding
US10291827B2 (en) * 2013-11-22 2019-05-14 Futurewei Technologies, Inc. Advanced screen content coding solution
US9463057B2 (en) * 2014-01-16 2016-10-11 Amendia, Inc. Orthopedic fastener
EP3055830A4 (en) 2014-03-21 2017-02-22 Huawei Technologies Co., Ltd. Advanced screen content coding with improved color table and index map coding methods
US10091512B2 (en) 2014-05-23 2018-10-02 Futurewei Technologies, Inc. Advanced screen content coding with improved palette table and index map coding methods
WO2016072722A1 (en) 2014-11-04 2016-05-12 삼성전자 주식회사 Video encoding method and apparatus, and video decoding method and apparatus using interpolation filter on which image characteristic is reflected
US10200713B2 (en) * 2015-05-11 2019-02-05 Qualcomm Incorporated Search region determination for inter coding within a particular picture of video data
EP3320684A1 (en) * 2015-07-08 2018-05-16 VID SCALE, Inc. Enhanced chroma coding using cross plane filtering
CN105427258B (en) * 2015-11-25 2018-09-14 惠州Tcl移动通信有限公司 Circular pattern shows smooth optimized treatment method, system and smart machine
US10009622B1 (en) 2015-12-15 2018-06-26 Google Llc Video coding with degradation of residuals
WO2017122997A1 (en) * 2016-01-11 2017-07-20 삼성전자 주식회사 Image encoding method and apparatus, and image decoding method and apparatus
US11494547B2 (en) * 2016-04-13 2022-11-08 Microsoft Technology Licensing, Llc Inputting images to electronic devices
US10341659B2 (en) * 2016-10-05 2019-07-02 Qualcomm Incorporated Systems and methods of switching interpolation filters
US10728548B2 (en) * 2017-04-04 2020-07-28 Futurewei Technologies, Inc. Processing reference samples used for intra-prediction of a picture block
JP7026450B2 (en) * 2017-04-24 2022-02-28 ソニーグループ株式会社 Transmitter, transmitter, receiver and receiver
JP6982990B2 (en) 2017-06-19 2021-12-17 ソニーグループ株式会社 Transmitter, transmitter, receiver and receiver
EP3471418A1 (en) * 2017-10-12 2019-04-17 Thomson Licensing Method and apparatus for adaptive transform in video encoding and decoding
US10841610B2 (en) * 2017-10-23 2020-11-17 Avago Technologies International Sales Pte. Limited Block size dependent interpolation filter selection and mapping
SG11202003260WA (en) 2017-11-07 2020-05-28 Huawei Tech Co Ltd Interpolation filter for an inter prediction apparatus and method for video coding
GB2574380A (en) * 2018-05-30 2019-12-11 Realvnc Ltd Processing image data
CN108848380B (en) * 2018-06-20 2021-11-30 腾讯科技(深圳)有限公司 Video encoding and decoding method, device, computer device and storage medium
US10674151B2 (en) * 2018-07-30 2020-06-02 Intel Corporation Adaptive in-loop filtering for video coding
CN113411575B (en) 2018-09-24 2022-07-22 华为技术有限公司 Image processing apparatus, method and storage medium for performing quality optimized deblocking
GB2577339A (en) * 2018-09-24 2020-03-25 Sony Corp Image data encoding and decoding
CN111083491A (en) 2018-10-22 2020-04-28 北京字节跳动网络技术有限公司 Use of refined motion vectors
WO2020098655A1 (en) 2018-11-12 2020-05-22 Beijing Bytedance Network Technology Co., Ltd. Motion vector storage for inter prediction
KR20210089149A (en) * 2018-11-16 2021-07-15 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 Inter- and intra-integrated prediction mode weights
CN117319644A (en) 2018-11-20 2023-12-29 北京字节跳动网络技术有限公司 Partial position based difference calculation
JP2022521554A (en) 2019-03-06 2022-04-08 北京字節跳動網絡技術有限公司 Use of converted one-sided prediction candidates
CN113767623B (en) 2019-04-16 2024-04-02 北京字节跳动网络技术有限公司 Adaptive loop filtering for video coding and decoding
CN113841404A (en) * 2019-06-18 2021-12-24 韩国电子通信研究院 Video encoding/decoding method and apparatus, and recording medium storing bitstream

Family Cites Families (75)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9012326D0 (en) * 1990-06-01 1990-07-18 Thomson Consumer Electronics Wide screen television
DE4327177A1 (en) 1993-08-13 1995-02-16 Putzmeister Maschf Arrangement for opening and closing a cap
US5774598A (en) * 1993-11-30 1998-06-30 Polaroid Corporation System and method for sample rate conversion of an image using discrete cosine transforms
WO1995015538A1 (en) * 1993-11-30 1995-06-08 Polaroid Corporation Coding methods and apparatus for scaling and filtering images using discrete cosine transforms
US5845015A (en) * 1995-10-12 1998-12-01 Sarnoff Corporation Method and apparatus for resizing images using the discrete cosine transform
JP3596194B2 (en) * 1996-10-29 2004-12-02 ソニー株式会社 Image processing apparatus and method
US6539120B1 (en) * 1997-03-12 2003-03-25 Matsushita Electric Industrial Co., Ltd. MPEG decoder providing multiple standard output signals
JPH10322705A (en) * 1997-05-21 1998-12-04 Sony Corp Motion detection and motion compensation prediction circuit
DE19730305A1 (en) * 1997-07-15 1999-01-21 Bosch Gmbh Robert Method for generating an improved image signal in the motion estimation of image sequences, in particular a prediction signal for moving images with motion-compensating prediction
JP3042459B2 (en) * 1997-08-25 2000-05-15 日本電気株式会社 Video display device
JPH11238121A (en) * 1998-02-20 1999-08-31 Dainippon Screen Mfg Co Ltd Picture interpolation method and picture interpolation device
US6819333B1 (en) * 2000-05-12 2004-11-16 Silicon Graphics, Inc. System and method for displaying an image using display distortion correction
GB2365646B (en) * 2000-07-31 2004-10-13 Sony Uk Ltd Image processor and method of processing images
JP2002152745A (en) * 2000-11-13 2002-05-24 Sony Corp Image information conversion apparatus and method
JP2002197454A (en) * 2000-12-27 2002-07-12 Sony Corp Device and method for transforming image
FR2820255A1 (en) 2001-01-26 2002-08-02 France Telecom METHODS FOR ENCODING AND DECODING IMAGES, DEVICES, SYSTEMS, SIGNALS AND APPLICATIONS THEREOF
FR2829345B1 (en) 2001-09-06 2003-12-12 Nextream Sa DEVICE AND METHOD FOR ENCODING VIDEO IMAGES BY MOTION COMPENSATION
US6950469B2 (en) * 2001-09-17 2005-09-27 Nokia Corporation Method for sub-pixel value interpolation
JP2003122338A (en) * 2001-10-18 2003-04-25 Sony Corp Image converting device, image display device and image converting method
JP3861698B2 (en) * 2002-01-23 2006-12-20 ソニー株式会社 Image information encoding apparatus and method, image information decoding apparatus and method, and program
US7110459B2 (en) 2002-04-10 2006-09-19 Microsoft Corporation Approximate bicubic filter
JP4120301B2 (en) * 2002-04-25 2008-07-16 ソニー株式会社 Image processing apparatus and method
AU2003246987A1 (en) * 2002-07-09 2004-01-23 Nokia Corporation Method and system for selecting interpolation filter type in video coding
AU2003251964A1 (en) * 2002-07-16 2004-02-02 Nokia Corporation A method for random access and gradual picture refresh in video coding
JP3791922B2 (en) * 2002-09-06 2006-06-28 富士通株式会社 Moving picture decoding apparatus and method
US20040081238A1 (en) 2002-10-25 2004-04-29 Manindra Parhy Asymmetric block shape modes for motion estimation
AU2003248913A1 (en) * 2003-01-10 2004-08-10 Thomson Licensing S.A. Defining interpolation filters for error concealment in a coded image
JP3997171B2 (en) * 2003-03-27 2007-10-24 株式会社エヌ・ティ・ティ・ドコモ Moving picture encoding apparatus, moving picture encoding method, moving picture encoding program, moving picture decoding apparatus, moving picture decoding method, and moving picture decoding program
KR100468786B1 (en) * 2003-04-02 2005-01-29 삼성전자주식회사 Interpolator providing for high resolution scaling by interpolation with direction estimation and polynomial filtering, scaler comprising it, and method thereof
RU2305377C2 (en) * 2003-05-20 2007-08-27 Корпорация "САМСУНГ ЭЛЕКТРОНИКС Ко., Лтд." Method for decreasing distortion of compressed video image and device for realization of the method
HUP0301368A3 (en) 2003-05-20 2005-09-28 Amt Advanced Multimedia Techno Method and equipment for compressing motion picture data
CN100394792C (en) * 2003-07-08 2008-06-11 皇家飞利浦电子股份有限公司 Motion-compensated image signal interpolation
US7515072B2 (en) * 2003-09-25 2009-04-07 International Rectifier Corporation Method and apparatus for converting PCM to PWM
CN1216495C (en) * 2003-09-27 2005-08-24 浙江大学 Video image sub-picture-element interpolation method and device
CN1256849C (en) * 2003-11-04 2006-05-17 浙江大学 Method and apparatus for 1/4 pixel precision interpolation
KR20050045746A (en) 2003-11-12 2005-05-17 삼성전자주식회사 Method and device for motion estimation using tree-structured variable block size
US6999105B2 (en) * 2003-12-04 2006-02-14 International Business Machines Corporation Image scaling employing horizontal partitioning
NO320114B1 (en) * 2003-12-05 2005-10-24 Tandberg Telecom As Improved calculation of interpolated pixel values
JP2005217532A (en) * 2004-01-27 2005-08-11 Canon Inc Resolution conversion method and resolution conversion apparatus
US20050281339A1 (en) 2004-06-22 2005-12-22 Samsung Electronics Co., Ltd. Filtering method of audio-visual codec and filtering apparatus
KR20050121627A (en) 2004-06-22 2005-12-27 삼성전자주식회사 Filtering method of audio-visual codec and filtering apparatus thereof
CN1973534A (en) * 2004-06-23 2007-05-30 皇家飞利浦电子股份有限公司 Pixel interpolation
US7430238B2 (en) * 2004-12-10 2008-09-30 Micronas Usa, Inc. Shared pipeline architecture for motion vector prediction and residual decoding
US7623575B2 (en) * 2005-01-05 2009-11-24 Lsi Corporation Method and apparatus for sub-pixel motion compensation
US20070009050A1 (en) * 2005-04-11 2007-01-11 Nokia Corporation Method and apparatus for update step in video coding based on motion compensated temporal filtering
US20060285590A1 (en) 2005-06-21 2006-12-21 Docomo Communications Laboratories Usa, Inc. Nonlinear, prediction filter for hybrid video compression
DE602005012504D1 (en) * 2005-08-22 2009-03-12 Panasonic Corp Combined OFDM and wavelet multicarrier transceiver
JP4643454B2 (en) 2006-01-10 2011-03-02 株式会社東芝 Moving picture decoding apparatus and moving picture decoding method
CN1794821A (en) * 2006-01-11 2006-06-28 浙江大学 Method and device of interpolation in grading video compression
KR100728031B1 (en) 2006-01-23 2007-06-14 삼성전자주식회사 Method and apparatus for deciding encoding mode for variable block size motion estimation
JP4178480B2 (en) * 2006-06-14 2008-11-12 ソニー株式会社 Image processing apparatus, image processing method, imaging apparatus, and imaging method
US8644643B2 (en) * 2006-06-14 2014-02-04 Qualcomm Incorporated Convolution filtering in a graphics processor
CN100551073C (en) * 2006-12-05 2009-10-14 华为技术有限公司 Decoding method and device, image element interpolation processing method and device
CN101212672B (en) * 2006-12-30 2011-01-05 安凯(广州)微电子技术有限公司 Video content adaptive sub-pixel interpolation method and device
KR101369746B1 (en) * 2007-01-22 2014-03-07 삼성전자주식회사 Method and apparatus for Video encoding and decoding using adaptive interpolation filter
KR100842558B1 (en) 2007-01-26 2008-07-01 삼성전자주식회사 Determining method of block mode, and the apparatus therefor video encoding
EP1983759A1 (en) * 2007-04-19 2008-10-22 Matsushita Electric Industrial Co., Ltd. Estimation of separable adaptive interpolation filters for hybrid video coding
US8023562B2 (en) 2007-09-07 2011-09-20 Vanguard Software Solutions, Inc. Real-time video coding/decoding
AU2008306503A1 (en) * 2007-10-05 2009-04-09 Nokia Corporation Video coding with pixel-aligned directional adaptive interpolation filters
US8090031B2 (en) * 2007-10-05 2012-01-03 Hong Kong Applied Science and Technology Research Institute Company Limited Method for motion compensation
EP2048886A1 (en) * 2007-10-11 2009-04-15 Panasonic Corporation Coding of adaptive interpolation filter coefficients
KR100952340B1 (en) 2008-01-24 2010-04-09 에스케이 텔레콤주식회사 Method and Apparatus for Determing Encoding Mode by Using Temporal and Spartial Complexity
JP2009182776A (en) * 2008-01-31 2009-08-13 Hitachi Ltd Coder, decoder, moving image coding method, and moving image decoding method
US8731062B2 (en) * 2008-02-05 2014-05-20 Ntt Docomo, Inc. Noise and/or flicker reduction in video sequences using spatial and temporal processing
MX2010009194A (en) 2008-03-07 2010-09-10 Toshiba Kk Dynamic image encoding/decoding method and device.
US8971412B2 (en) 2008-04-10 2015-03-03 Qualcomm Incorporated Advanced interpolation techniques for motion compensation in video coding
US8705622B2 (en) * 2008-04-10 2014-04-22 Qualcomm Incorporated Interpolation filter support for sub-pixel resolution in video coding
US8831086B2 (en) * 2008-04-10 2014-09-09 Qualcomm Incorporated Prediction techniques for interpolation in video coding
KR101517768B1 (en) * 2008-07-02 2015-05-06 삼성전자주식회사 Method and apparatus for encoding video and method and apparatus for decoding video
KR101530549B1 (en) 2008-10-10 2015-06-29 삼성전자주식회사 Method and apparatus for image encoding and decoding using serial 1-dimensional adaptive filters
KR20110017719A (en) * 2009-08-14 2011-02-22 삼성전자주식회사 Method and apparatus for video encoding, and method and apparatus for video decoding
KR101682147B1 (en) * 2010-04-05 2016-12-05 삼성전자주식회사 Method and apparatus for interpolation based on transform and inverse transform
KR101750046B1 (en) 2010-04-05 2017-06-22 삼성전자주식회사 Method and apparatus for video encoding with in-loop filtering based on tree-structured data unit, method and apparatus for video decoding with the same
CN103238320B (en) * 2010-09-30 2016-06-01 三星电子株式会社 By the method and apparatus using smooth interpolation wave filter that image is interpolated
CN102833550A (en) * 2012-09-03 2012-12-19 北京大学深圳研究生院 Low-complexity sub-pixel interpolation filter

Also Published As

Publication number Publication date
RU2015116169A (en) 2015-09-20
KR20150035936A (en) 2015-04-07
US9424625B2 (en) 2016-08-23
ZA201208292B (en) 2016-09-28
KR20110112176A (en) 2011-10-12
CA2795626A1 (en) 2011-10-13
CN102939760B (en) 2016-08-17
RU2012146739A (en) 2014-07-10
US20150178893A1 (en) 2015-06-25
CA2887940C (en) 2017-09-05
RU2612613C2 (en) 2017-03-09
JP2016187191A (en) 2016-10-27
WO2011126287A2 (en) 2011-10-13
RU2612614C2 (en) 2017-03-09
KR101682152B1 (en) 2016-12-02
CN102939760A (en) 2013-02-20
RU2612611C2 (en) 2017-03-09
ZA201600679B (en) 2016-11-30
RU2580057C2 (en) 2016-04-10
BR112012025307A2 (en) 2017-09-12
ZA201600680B (en) 2016-11-30
KR101682151B1 (en) 2016-12-02
CA2887942A1 (en) 2011-10-13
ZA201600681B (en) 2016-11-30
US9547886B2 (en) 2017-01-17
CN105955933A (en) 2016-09-21
CN106231310B (en) 2019-08-13
CA2887940A1 (en) 2011-10-13
CN106131566B (en) 2019-06-14
CA2887941A1 (en) 2011-10-13
CN105959698B (en) 2019-03-12
CA2887944A1 (en) 2011-10-13
BR112012025307B1 (en) 2022-01-11
CN106131566A (en) 2016-11-16
KR20150035940A (en) 2015-04-07
CA2795626C (en) 2017-02-14
US9436975B2 (en) 2016-09-06
AU2011239142A1 (en) 2012-11-01
KR20150035939A (en) 2015-04-07
KR101682149B1 (en) 2016-12-02
EP2556675A2 (en) 2013-02-13
CN106231310A (en) 2016-12-14
EP2556675A4 (en) 2015-08-26
US20150178891A1 (en) 2015-06-25
US9262804B2 (en) 2016-02-16
US20150178890A1 (en) 2015-06-25
WO2011126287A3 (en) 2012-01-26
US9390470B2 (en) 2016-07-12
CA2887941C (en) 2017-03-21
CA2887944C (en) 2017-03-21
KR101682150B1 (en) 2016-12-02
KR20150035938A (en) 2015-04-07
JP2013524678A (en) 2013-06-17
RU2612612C2 (en) 2017-03-09
CA2887942C (en) 2017-03-21
MY153844A (en) 2015-03-31
RU2015116277A (en) 2015-09-10
US20110243471A1 (en) 2011-10-06
CN105955933B (en) 2018-12-07
US20140198996A1 (en) 2014-07-17
US20150178892A1 (en) 2015-06-25
KR101682147B1 (en) 2016-12-05
AU2011239142B2 (en) 2015-07-02
US8676000B2 (en) 2014-03-18
RU2015116279A (en) 2015-09-10
ZA201600703B (en) 2016-11-30
KR101682148B1 (en) 2016-12-02
KR20150035937A (en) 2015-04-07
RU2015116285A (en) 2015-09-20
CN105959698A (en) 2016-09-21

Similar Documents

Publication Publication Date Title
MX2012011646A (en) Method and apparatus for performing interpolation based on transform and inverse transform.
US9317896B2 (en) Image interpolation method and apparatus
AU2015230828B2 (en) Method and apparatus for performing interpolation based on transform and inverse transform

Legal Events

Date Code Title Description
FG Grant or registration