USRE43060E1 - Method and apparatus for encoding interlaced macroblock texture information - Google Patents
- Publication number
- USRE43060E1 (U.S. application Ser. No. 12/819,207)
- Authority
- US
- United States
- Prior art keywords
- block
- undefined
- texture
- field
- field block
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/649—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding, the transform being applied to non rectangular image segments
- H04N19/16—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter for a given display mode, e.g. for interlaced or progressive display mode
- H04N19/51—Motion estimation or motion compensation
- H04N19/563—Motion estimation with padding, i.e. with filling of non-object values in an arbitrarily shaped picture block or region for estimation purposes
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/20—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
Definitions
- FIG. 1 shows a schematic block diagram of an apparatus for encoding interlaced texture information of an object in a video signal in accordance with the present invention;
- FIG. 2 presents a flow chart illustrating the operation of the reference frame processing circuit shown in FIG. 1;
- FIGS. 3A and 3B describe an exemplary boundary macroblock and the top and bottom boundary field blocks for the boundary macroblock, respectively;
- FIGS. 3C to 3E represent a padding procedure performed sequentially on the top and the bottom boundary field blocks in accordance with the present invention; and
- FIG. 4 depicts a plurality of undefined adjacent blocks for an exemplary VOP and the padding directions for each undefined adjacent block.
- In the texture encoding circuit 106, the error field block is subjected to an orthogonal transform for removing spatial redundancy thereof and then the transform coefficients are quantized, to thereby provide the quantized transform coefficients to the statistical coding circuit 108 and a texture reconstruction circuit 110.
- Since a conventional orthogonal transform such as a discrete cosine transform (DCT) is performed on a DCT block-by-DCT block basis, each DCT block typically having 8×8 texture pixels, the error field block having 8×16 error texture pixels may preferably be divided into two DCT blocks in the texture encoding circuit 106.
- each error field block may be DCT-padded based on the shape information or the reconstructed shape information of each VOP in order to reduce higher frequency components which may be generated in the DCT processing. For example, a predetermined value, e.g., ‘0’, may be assigned to the error texture pixels at the exterior of the contour in each VOP.
- the statistical coding circuit 108 performs a statistical encoding on the quantized transform coefficients fed from the texture encoding circuit 106 and the field indication flag and the motion vector, for each current top or bottom field block, fed from the motion estimator 116 by using, e.g., a conventional variable length coding technique, to thereby provide statistically encoded data to a transmitter (not shown) for the transmission thereof.
- the texture reconstruction circuit 110 performs an inverse quantization and inverse transform on the quantized transform coefficients to provide a reconstructed error field block, which corresponds to the error field block, to the adder 112 .
- the adder 112 combines the reconstructed error field block from the texture reconstruction circuit 110 and the predicted top or bottom field block from the motion compensator 118 on a pixel-by-pixel basis, to thereby provide a combined result as a reconstructed top or bottom field block for each current top or bottom field block to the reference frame processing circuit 114 .
- The reference frame processing circuit 114 sequentially pads the reconstructed top or bottom field block based on the shape information or the reconstructed shape information for the current VOP, and stores the padded top and bottom field blocks as reference interlaced texture information for a subsequent current VOP, to be provided to the motion estimator 116 and the motion compensator 118.
- Referring to FIG. 2, there is shown a flow chart illustrating the operation of the reference frame processing circuit 114 shown in FIG. 1.
- The reconstructed top or bottom field block is sequentially received and, at step S203, exterior pixels in the reconstructed top or bottom field block are eliminated based on the shape information, wherein the exterior pixels are located outside the contour of the object.
- The reconstructed shape information may be used in place of the shape information. While the exterior pixels are eliminated and set as transparent pixels, i.e., undefined texture pixels, the remaining interior pixels in the reconstructed top or bottom field block are provided as defined texture pixels on a field block-by-field block basis.
- It is then determined whether or not each reconstructed block, having a reconstructed top field block and its corresponding reconstructed bottom field block, is traversed by the contour of the object.
- In other words, each reconstructed block is determined to be an interior block, a boundary block, or an exterior block, wherein the interior block has only defined texture pixels, the exterior block has only undefined texture pixels and the boundary block has both defined and undefined texture pixels. If the reconstructed block is determined to be an interior block, at step S210 no padding is performed and the process goes to step S208.
- The undefined texture pixels of the boundary block are extrapolated from the defined texture pixels thereof to generate an extrapolated boundary block, wherein each square represents a texture pixel, each shaded square being a defined texture pixel and each white square being an undefined texture pixel.
- First, the boundary block is divided into a top and a bottom boundary field block, T and B, as shown in FIG. 3B, wherein each boundary field block has M/2×N texture pixels, i.e., 8×16 texture pixels, so that the top and the bottom field blocks T and B have M/2, i.e., 8, rows T1 to T8 and B1 to B8, respectively.
- The undefined texture pixels are padded on a row-by-row basis by using a horizontal repetitive padding technique, as shown in FIG. 3C, to generate a padded row for each of the rows B1, B2 and B4 to B8.
- In other words, the undefined texture pixels are filled by repeating boundary pixels in the directions of the arrows shown in FIG. 3C, wherein each boundary pixel among the defined texture pixels is located on the contour, i.e., the border, of the object. If there exist undefined texture pixels which may be padded by the repetition of more than one boundary pixel, the average of the repeated values is used.
- Thereafter, each transparent row is padded by using one or more nearest defined or padded rows within the corresponding top or bottom field block, wherein a defined row has all of its texture pixels defined.
- For example, each undefined texture pixel of the transparent row B3 in the bottom field block is padded with an average of two defined or padded texture pixels taken from the nearest upward and the nearest downward padded rows, i.e., the 2nd and the 4th padded rows B2 and B4 in the bottom field block B.
- If the transparent row is located at the highest or the lowest position, i.e., corresponds to the 1st or the 8th row, each texture pixel thereof is padded with a defined or padded texture pixel of the nearest padded or defined row.
- the transparent boundary field block is padded based on the other boundary field block of the boundary block, wherein the transparent boundary field block, i.e., an undefined field block has no defined texture pixel therein.
- all the undefined texture pixels thereof may be padded with a constant value P as shown in FIG. 3E , e.g., a mean value of the defined texture pixels within the bottom field block.
- the mean value of both the defined and the padded pixels within the bottom field block can also be used to fill the transparent field block.
- Alternatively, a middle value 2^(L−1) of all the possible values for any texture pixel may be used based on the channel characteristics, wherein L is the number of bits assigned to each pixel. For example, if L is equal to 8, there are 256 possible texture pixel values, 0 to 255, and the middle value is determined to be 128.
- the padding must be further extended to undefined adjacent blocks, i.e., exterior blocks which are adjacent to one or more interior or boundary blocks.
- the adjacent blocks can stretch outside the VOP, if necessary.
- The undefined texture pixels in the undefined adjacent block are padded based on one of the extrapolated boundary blocks and the interior blocks to generate an extrapolated adjacent block for the undefined adjacent block, wherein each extrapolated boundary block has a part of the contour A of an object and each undefined adjacent block is shown as a shaded region in FIG. 4.
- One of the left, the upper, the right and the below extrapolated boundary blocks of the undefined adjacent block is selected in this priority order and, then, a vertical or a horizontal border of the selected extrapolated boundary block is repeated rightwards, downwards, leftwards or upwards, wherein the vertical or the horizontal border adjoins the undefined adjacent block.
- As shown in FIG. 4, the undefined adjacent blocks JB4, JB10, JB15, JB21 and JB28 select their respective left extrapolated boundary blocks a2, a5, a9, a13 and a14; the undefined adjacent blocks JB20, JB27 and JB22 select their respective upper extrapolated boundary blocks a10, a14 and a13; the undefined adjacent blocks JB1, JB9, JB14 and JB19 select their respective right extrapolated boundary blocks a1, a3, a6 and a10; and the undefined adjacent blocks JB2 and JB3 select their respective below extrapolated boundary blocks a1 and a2.
- a rightmost vertical border of the extrapolated boundary block a 2 is expanded rightward to fill the undefined adjacent block JB 4
- a lowermost horizontal border of the extrapolated boundary block a 10 is expanded downward to fill the undefined adjacent block JB 20 and so on.
- Undefined diagonal blocks such as M1, M2, M5 and M7 to M11 may be padded with a constant value, e.g., ‘128’, to serve as the extrapolated adjacent blocks for the undefined diagonal blocks, wherein each undefined diagonal block is diagonally adjacent to an extrapolated boundary block and has all undefined texture pixels.
- At step S211, the extrapolated boundary and the extrapolated adjacent blocks, as well as the interior blocks, are stored.
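The transparent-row step described above (e.g., row B3 filled from the padded rows B2 and B4) can be sketched as follows. This is an illustrative NumPy sketch, not the normative procedure; the function name and the per-row "defined" mask representation are assumptions:

```python
import numpy as np

def pad_transparent_rows(field, row_defined):
    """Pad each transparent row of a field block from the nearest
    defined/padded rows above and below: averaged when both exist,
    copied when the transparent row is the first or last row.
    `row_defined[r]` is True when row r had at least one defined pixel
    (and is therefore already horizontally padded)."""
    field = field.astype(np.float64).copy()
    defined_idx = np.nonzero(row_defined)[0]
    for r in np.nonzero(~row_defined)[0]:
        above = defined_idx[defined_idx < r]
        below = defined_idx[defined_idx > r]
        if above.size and below.size:
            field[r] = (field[above[-1]] + field[below[0]]) / 2.0
        elif above.size:
            field[r] = field[above[-1]]   # transparent row at the bottom edge
        else:
            field[r] = field[below[0]]    # transparent row at the top edge
    return field
```

With a three-row field whose middle row is transparent, the middle row becomes the average of its neighbours, matching the B3 example.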
Abstract
A method for padding interlaced texture information on a reference VOP to perform a motion estimation detects whether each texture macroblock of the reference VOP is a boundary block or not. After the undefined texture pixels of the boundary block are extrapolated from the defined texture pixels thereof by sequentially using a horizontal repetitive padding, a transparent row padding and a transparent field padding, an undefined adjacent block is expanded based on the extrapolated boundary block.
Description
More than one reissue application has been filed: reissue application Ser. Nos. 12/819,208, 12/819,209 and 12/819,210, all filed Jun. 20, 2010, are continuation cases of reissue application Ser. No. 12/131,723, filed Jun. 2, 2008. The present reissue application Ser. No. 12/819,207, filed Jun. 20, 2010, is a continuation of reissue application Ser. No. 12/131,723, filed Jun. 2, 2008, which issued on Nov. 23, 2010 as U.S. Pat. No. Re. 41,951 E, wherein reissue application Ser. No. 12/131,723 is a continuation case of reissue application Ser. No. 10/611,938, filed Jul. 3, 2003, now abandoned, which is a reissue application of U.S. Pat. No. 6,259,732 B1, which issued on Jul. 10, 2001 from U.S. application Ser. No. 09/088,375, and which claims priority under 35 U.S.C. 119 from Korean Patent Application KR 98-8637, filed Mar. 14, 1998.
The present invention relates to a method and apparatus for encoding interlaced macroblock texture information; and, more particularly, to a method and apparatus for padding interlaced texture information on a reference VOP on a texture macroblock basis in order to perform a motion estimation while using the interlaced coding technique.
In digitally televised systems such as video-telephone, teleconference and high definition television systems, a large amount of digital data is needed to define each video frame signal since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of data through the use of various data compression techniques, especially in the case of such low bit-rate video signal encoders as video-telephone and teleconference systems.
One of such techniques for encoding video signals for a low bit-rate encoding system is the so-called object-oriented analysis-synthesis coding technique, wherein an input video image is divided into objects, and three sets of parameters for defining the motion, contour and pixel data of each object are processed through different encoding channels.
One example of an object-oriented coding scheme is the so-called MPEG (Moving Picture Experts Group) phase 4 (MPEG-4), which is designed to provide an audio-visual coding standard for allowing content-based interactivity, improved coding efficiency and/or universal accessibility in such applications as low bit-rate communication, interactive multimedia (e.g., games, interactive TV, etc.) and area surveillance (see, for instance, MPEG-4 Video Verification Model Version 7.0, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11 MPEG97/N1642, Apr. 1997).
According to the MPEG-4, an input video image is divided into a plurality of video object planes(VOP's), which correspond to entities in a bitstream that a user can access and manipulate. A VOP can be referred to as an object and represented by a bounding rectangle whose width and height may be the smallest multiples of 16 pixels(a macroblock size) surrounding each object so that the encoder may process the input video image on a VOP-by-VOP basis, i.e., an object-by-object basis.
A VOP disclosed in the MPEG-4 includes shape information and texture information for an object therein which are represented by a plurality of macroblocks on the VOP, each of macroblocks having, e.g., 16×16 pixels, wherein the shape information is represented in binary shape signals and the texture information includes luminance and chrominances data.
Since the texture information for two input video images sequentially received has temporal redundancies, it is desirable to reduce the temporal redundancies therein by using a motion estimation and compensation technique in order to efficiently encode the texture information.
In order to perform the motion estimation and compensation, a reference VOP, e.g., a previous VOP, should be padded by a progressive image padding technique, i.e., a conventional repetitive padding technique. In principle, the repetitive padding technique fills the transparent area outside the object of the VOP by repeating boundary pixels of the object, wherein the boundary pixels are located on the contour of the object. It is preferable to perform the repetitive padding technique with respect to the reconstructed shape information. If transparent pixels in a transparent area outside the object can be filled by the repetition of more than one boundary pixel, the average of the repeated values is taken as a padded value. This progressive padding process is generally divided into 3 steps: a horizontal repetitive padding, a vertical repetitive padding and an exterior padding (see, MPEG-4 Video Verification Model Version 7.0).
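The horizontal repetitive padding step above can be sketched for a single pixel row as follows. This is an illustrative NumPy sketch, not the normative MPEG-4 procedure; the function name and the binary alpha-mask representation of the shape information are assumptions:

```python
import numpy as np

def horizontal_repetitive_pad(row, alpha):
    """Fill undefined pixels in one row by repeating the nearest
    boundary (defined) pixel to the left and/or right; where both
    directions reach the same undefined pixel, the average of the two
    repeated values is taken as the padded value."""
    row = row.astype(np.float64).copy()
    defined = np.nonzero(alpha)[0]          # indices of defined pixels
    if defined.size == 0:
        return row                          # fully transparent row: padded later
    for i in np.nonzero(~alpha)[0]:
        left = defined[defined < i]
        right = defined[defined > i]
        if left.size and right.size:
            row[i] = (row[left[-1]] + row[right[0]]) / 2.0
        elif left.size:
            row[i] = row[left[-1]]
        else:
            row[i] = row[right[0]]
    return row
```

For a row whose defined segment is 10, 20, …, 30, the pixels outside the segment copy the nearest boundary pixel, and a gap between two segments receives the average of the two adjacent boundary pixels.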
While the progressive padding process as described above may be used to encode progressive texture information, which has a larger spatial correlation between rows on a macroblock basis, its coding efficiency may be low if the motion of an object within a VOP or a frame is considerably large. Therefore, prior to performing the motion estimation and compensation on a field-by-field basis for interlaced texture information with fast movement, such as a sporting event, horse racing or car racing, an interlaced padding process may be preferable to the progressive padding process, wherein, in the interlaced padding process, a macroblock is divided into two field blocks and padding is carried out on a field block basis.
However, if all field blocks are padded without considering their correlation between fields, certain field blocks may not be properly padded.
It is, therefore, an object of the invention to provide a method and apparatus capable of padding the interlaced texture information considering its correlation between fields.
In accordance with the invention, there is provided a method for encoding interlaced texture information on a texture macroblock basis through a motion estimation between a current VOP and its one or more reference VOP's, wherein each texture macroblock of the current and the reference VOP's has M×N defined or undefined texture pixels, M and N being positive even integers, respectively, the method comprising the steps of:
-
- (a) detecting whether said each texture macroblock of each reference VOP is a boundary block or not, wherein the boundary block has at least one defined texture pixel and at least one undefined texture pixel;
- (b) dividing the boundary block into two field blocks, each field block having M/2×N texture pixels;
- (c) extrapolating the undefined texture pixels of each field block based on the defined texture pixels thereof to generate an extrapolated boundary block for said two field blocks; and
- (d) if the boundary block has an undefined field block and a defined field block, padding the undefined field block based on the defined field block, wherein the undefined field block and the defined field block represent one field block having the undefined texture pixels only and the other field block having at least one defined texture pixel, respectively.
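Steps (a) and (d) above can be sketched as follows. The string labels, function names and NumPy mask representation are illustrative assumptions; for step (d), the mean of the defined pixels of the other field is used as the padding constant, which is one of the options contemplated for an undefined field block:

```python
import numpy as np

def classify_block(alpha):
    """Step (a): classify one texture block from its binary shape mask."""
    if alpha.all():
        return "interior"   # only defined texture pixels: no padding needed
    if not alpha.any():
        return "exterior"   # only undefined pixels: padded later from neighbours
    return "boundary"       # mixed: extrapolated on a field-block basis

def fill_undefined_field(undef_field, other_field, other_alpha):
    """Step (d): an all-undefined field block of a boundary block is
    filled with a constant derived from the other field block of the
    same boundary block, here the mean of that field's defined pixels."""
    fill = other_field[other_alpha].mean()
    return np.full_like(undef_field, fill, dtype=np.float64)
```

A boundary block thus takes the per-field extrapolation path of step (c), while a boundary block with one fully transparent field takes step (d) for that field.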
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Referring to FIG. 1, there is shown a schematic block diagram of an apparatus for encoding texture information on a current VOP. The texture information, partitioned into a plurality of texture macroblocks, is applied to a division circuit 102 on a texture macroblock basis, wherein each texture macroblock has M×N texture pixels, M and N being positive even integers typically ranging between 4 and 16.
The division circuit 102 divides each texture macroblock into a top and a bottom field blocks which may be referred to as interlaced texture information, wherein the top field block having M/2×N texture pixels contains every odd row of each texture macroblock and the bottom field block having the other M/2×N texture pixels contains every even row of each texture macroblock. The top and the bottom field blocks for each texture macroblock are sequentially provided as a current top and a current bottom field blocks, respectively, to a subtractor 104 and a motion estimator 116.
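The field split performed by the division circuit 102 can be sketched as follows, assuming the macroblock is a NumPy array whose rows are numbered from one (the function name is illustrative):

```python
import numpy as np

def divide_into_fields(macroblock):
    """Split an M x N texture macroblock into a top field block (odd
    rows: 1st, 3rd, ...) and a bottom field block (even rows: 2nd,
    4th, ...), each having M/2 x N texture pixels."""
    top = macroblock[0::2]     # rows 1, 3, 5, ... (0-based indices 0, 2, 4, ...)
    bottom = macroblock[1::2]  # rows 2, 4, 6, ...
    return top, bottom
```

For a 16×16 macroblock this yields the two 8×16 field blocks that are fed to the subtractor 104 and the motion estimator 116.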
Reference, e.g., previous, interlaced texture information, i.e., interlaced texture information of a reference VOP, is read out from a reference frame processing circuit 114 and provided to the motion estimator 116 and a motion compensator 118. The reference VOP is also partitioned into a plurality of search regions and each search region is divided into a top and a bottom search region, wherein the top search region, having a predetermined number, e.g., P×(M/2×N), of reference pixels, contains every odd row of each search region and the bottom search region, having the same predetermined number of reference pixels, contains every even row of each search region, P being a positive integer, typically 2.
The motion estimator 116 determines a motion vector for each current top or bottom field block on a field-by-field basis. First, the motion estimator 116 detects two reference field blocks, i.e., a reference top and a reference bottom field blocks, for each current top or bottom field block, wherein the two reference field blocks, within the top and the bottom search regions, respectively, are located at a same position as each current top or bottom field block. Since the top and the bottom search regions have a plurality of candidate top and candidate bottom field blocks including the reference top and the reference bottom field blocks, respectively, each current top or bottom field block is displaced on a pixel-by-pixel basis within the top and the bottom search regions to correspond with a candidate top and a candidate bottom field block for each displacement, respectively; at all possible displacements, errors between each current top or bottom field block and all the candidate top and bottom field blocks are calculated and compared with one another; and the motion estimator 116 selects, as an optimum candidate field block or a most similar field block, a candidate top or bottom field block which yields a minimum error. Outputs from the motion estimator 116 are a motion vector and a field indication flag, which are provided to the motion compensator 118 and a statistical coding circuit 108 employing, e.g., a variable length coding (VLC) discipline, wherein the motion vector denotes a displacement between each current top or bottom field block and the optimum candidate field block and the field indication flag represents whether or not the optimum candidate field block belongs to the top search region.
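The per-field search described above amounts to full-search block matching over one search region; a minimal sketch follows, using the sum of absolute differences as the error measure (the patent does not fix a particular error metric, so SAD here is an assumption):

```python
import numpy as np

def field_motion_estimate(cur_field, search_region):
    """Displace the current field block pixel by pixel over one search
    region, computing an error at every displacement, and return the
    displacement (motion vector) yielding the minimum error."""
    h, w = cur_field.shape
    H, W = search_region.shape
    best_err, best_mv = None, None
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            cand = search_region[dy:dy + h, dx:dx + w]
            err = np.abs(cur_field.astype(int) - cand.astype(int)).sum()
            if best_err is None or err < best_err:
                best_err, best_mv = err, (dy, dx)
    return best_mv, best_err

cur = np.array([[1, 2], [3, 4]])
region = np.zeros((4, 4), dtype=int)
region[1:3, 1:3] = cur                     # plant the block at offset (1, 1)
mv, err = field_motion_estimate(cur, region)
```

The same search would be run over both the top and the bottom search regions; comparing the two best errors then determines the optimum candidate field block and hence the field indication flag.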
The motion compensator 118 provides the optimum candidate field block as a predicted top or bottom field block for each current top or bottom field block based on the motion vector and the field indication flag to the subtractor 104 and an adder 112.
The subtractor 104 obtains an error field block by subtracting the predicted top or bottom field block from each current top or bottom field block on a corresponding pixel-by-pixel basis, to provide the error field block to a texture encoding circuit 106.
In the texture encoding circuit 106, the error field block is subjected to an orthogonal transform for removing spatial redundancy thereof and then transform coefficients are quantized, to thereby provide the quantized transform coefficients to the statistical coding circuit 108 and a texture reconstruction circuit 110. Since a conventional orthogonal transform such as a discrete cosine transform (DCT) is performed on a DCT block-by-DCT block basis, each DCT block having typically 8×8 texture pixels, the error field block having 8×16 error texture pixels may preferably be divided into two DCT blocks in the texture encoding circuit 106. If necessary, before performing the DCT, each error field block may be DCT-padded based on the shape information or the reconstructed shape information of each VOP in order to reduce higher frequency components which may be generated in the DCT processing. For example, a predetermined value, e.g., ‘0’, may be assigned to the error texture pixels at the exterior of the contour in each VOP.
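The DCT padding of exterior error pixels can be sketched as below; the mask convention (True marks pixels interior to the contour) is an illustrative assumption:

```python
import numpy as np

def dct_pad_error_block(err_block, interior_mask, pad_value=0):
    """Assign a predetermined value, e.g., '0', to error texture pixels at
    the exterior of the contour before the DCT, reducing the higher
    frequency components such pixels would otherwise introduce."""
    out = err_block.copy()
    out[~interior_mask] = pad_value
    return out

err = np.ones((2, 2), dtype=int)
mask = np.array([[True, False], [False, True]])
padded = dct_pad_error_block(err, mask)
```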
The statistical coding circuit 108 performs a statistical encoding on the quantized transform coefficients fed from the texture encoding circuit 106 and the field indication flag and the motion vector, for each current top or bottom field block, fed from the motion estimator 116 by using, e.g., a conventional variable length coding technique, to thereby provide statistically encoded data to a transmitter (not shown) for the transmission thereof.
In the meantime, the texture reconstruction circuit 110 performs an inverse quantization and inverse transform on the quantized transform coefficients to provide a reconstructed error field block, which corresponds to the error field block, to the adder 112. The adder 112 combines the reconstructed error field block from the texture reconstruction circuit 110 and the predicted top or bottom field block from the motion compensator 118 on a pixel-by-pixel basis, to thereby provide a combined result as a reconstructed top or bottom field block for each current top or bottom field block to the reference frame processing circuit 114.
The reference frame processing circuit 114 sequentially pads the reconstructed top or bottom field block based on the shape information or the reconstructed shape information for the current VOP, stores the padded top and bottom field blocks as reference interlaced texture information for a subsequent current VOP, and provides them to the motion estimator 116 and the motion compensator 118.
Referring to FIG. 2 , there is a flow chart for illustrating the operation of the reference frame processing circuit 114 shown in FIG. 1 .
At step S201, the reconstructed top or bottom field block is sequentially received and, at step S203, exterior pixels in the reconstructed top or bottom field block are eliminated based on the shape information, wherein the exterior pixels are located outside the contour of the object. The reconstructed shape information may be used in place of the shape information. While the exterior pixels are eliminated to be set as transparent pixels, i.e., undefined texture pixels, the remaining interior pixels in the reconstructed top or bottom field block are provided as defined texture pixels on a field block-by-field block basis.
At step S204, it is determined whether or not each reconstructed block, having a reconstructed top and its corresponding reconstructed bottom field blocks, is traversed by the contour of the object. In other words, each reconstructed block is determined to be an interior block, a boundary block or an exterior block, wherein the interior block has only the defined texture pixels, the exterior block has only the undefined texture pixels and the boundary block has both the defined and the undefined texture pixels. If the reconstructed block is determined to be an interior block, at step S210, no padding is performed and the process goes to step S208.
If the reconstructed block is a boundary block BB as shown in FIG. 3A , at steps S221 to S224, the undefined texture pixels of the boundary block are extrapolated from the defined texture pixels thereof to generate an extrapolated boundary block, wherein each square represents a texture pixel, each shaded square being a defined texture pixel and each white one being an undefined texture pixel.
First, at step S221, the boundary block is divided into a top and a bottom boundary field blocks T and B as shown in FIG. 3B , wherein each boundary field block has M/2×N texture pixels, i.e., 8×16 texture pixels so that the top and the bottom field blocks T and B have M/2, i.e., 8 rows T1 to T8 and B1 to B8, respectively.
At step S222, the undefined texture pixels are padded on a row-by-row basis by using a horizontal repetitive padding technique as shown in FIG. 3C to generate a padded row for each of rows B1, B2 and B4 to B8. In other words, the undefined texture pixels are filled by repeating boundary pixels in the directions of the arrows shown in FIG. 3C , wherein each boundary pixel among the defined texture pixels is located on the contour, i.e., the border, of the object. If there exist undefined texture pixels which may be padded by the repetition of more than one boundary pixel, the average of the repeated values is used.
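A minimal sketch of the horizontal repetitive padding for one row; True entries of `mask` mark defined texture pixels, and the helper name is illustrative:

```python
import numpy as np

def horizontal_repetitive_pad(row, mask):
    """Fill undefined pixels by repeating the nearest boundary pixel on
    each side; where repetitions from two boundary pixels would overlap,
    the average of the two repeated values is used (step S222)."""
    row = row.astype(float).copy()
    defined = np.flatnonzero(mask)
    if defined.size == 0:          # transparent row: deferred to step S223
        return row, False
    for i in np.flatnonzero(~mask):
        left = defined[defined < i]
        right = defined[defined > i]
        if left.size and right.size:          # repeated from both sides
            row[i] = (row[left[-1]] + row[right[0]]) / 2
        elif left.size:                       # only a left boundary pixel
            row[i] = row[left[-1]]
        else:                                 # only a right boundary pixel
            row[i] = row[right[0]]
    return row, True

padded, has_defined = horizontal_repetitive_pad(
    np.array([0, 0, 10, 0, 20, 0]),
    np.array([False, False, True, False, True, False]))
```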
If there exist one or more transparent rows, having the undefined texture pixels only, on each top or bottom field block, at step S223, each transparent row is padded by using one or more nearest defined or padded rows among the corresponding top or bottom field block, wherein the defined row has all the defined texture pixels therein. For example, as shown in FIG. 3D , each undefined texture pixel of the transparent row B3 shown in the bottom field block is padded with an average of two defined or padded texture pixels based on a nearest upward and a nearest downward padded rows, i.e., the 2nd and the 4th padded rows B2 and B4 in the bottom field block B. If the transparent row is located at the highest or the lowest row, i.e., corresponds to the 1st or the 8th row, each texture pixel is padded with a defined or padded texture pixel of the nearest padded or defined row.
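The transparent-row padding of step S223 can be sketched as follows, where rows flagged False in `row_defined` had no defined pixel; the nearest rows above and below are averaged, or the single nearest one copied:

```python
import numpy as np

def pad_transparent_rows(field, row_defined):
    """Pad each transparent row from the nearest defined or padded rows:
    the average of the nearest upward and downward rows when both exist,
    otherwise a copy of the single nearest one (step S223)."""
    field = field.astype(float).copy()
    good = np.flatnonzero(row_defined)        # defined or already-padded rows
    for i in np.flatnonzero(~np.asarray(row_defined)):
        up = good[good < i]
        down = good[good > i]
        if up.size and down.size:
            field[i] = (field[up[-1]] + field[down[0]]) / 2
        elif up.size:                         # transparent row at the bottom
            field[i] = field[up[-1]]
        else:                                 # transparent row at the top
            field[i] = field[down[0]]
    return field

padded = pad_transparent_rows(
    np.array([[2, 2], [0, 0], [4, 4]]),       # middle row is transparent
    np.array([True, False, True]))
```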
If there exists one transparent boundary field block in the boundary block as shown in FIG. 3B , at step S224, the transparent boundary field block is padded based on the other boundary field block of the boundary block, wherein the transparent boundary field block, i.e., an undefined field block, has no defined texture pixel therein. In other words, if a top field block is transparent, all the undefined texture pixels thereof may be padded with a constant value P as shown in FIG. 3E , e.g., a mean value of the defined texture pixels within the bottom field block. The mean value of both the defined and the padded pixels within the bottom field block can also be used to fill the transparent field block. If necessary, a middle value 2^(L−1) of all the possible values for any texture pixel may be used based on the channel characteristics, wherein L is the number of bits assigned for each pixel. For example, if L is equal to 8, there are 256 possible texture pixel values, 0 to 255, and the middle value is determined to be 128.
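A sketch of the transparent field block padding of step S224; which admissible constant to use (mean of defined pixels, mean of defined and padded pixels, or the middle value 2^(L−1)) is a choice the text leaves open, and the plain mean is used here:

```python
import numpy as np

def pad_transparent_field(other_field, L=8, use_middle=False):
    """Fill an undefined field block with a constant: a mean value of the
    other field block of the boundary block, or the middle value
    2**(L - 1) of the pixel range (128 for L = 8)."""
    value = 2 ** (L - 1) if use_middle else int(round(other_field.mean()))
    return np.full_like(other_field, value)

bottom = np.array([[10, 20], [30, 40]])
top = pad_transparent_field(bottom)        # mean of 10, 20, 30, 40 is 25
```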
After all the interior and boundary blocks are padded as described above, in order to cope with a VOP of fast motion, the padding must be further extended to undefined adjacent blocks, i.e., exterior blocks which are adjacent to one or more interior or boundary blocks. The adjacent blocks can stretch outside the VOP, if necessary. At step S208, the undefined texture pixels in the undefined adjacent block are padded based on one of the extrapolated boundary blocks and the interior blocks to generate an extrapolated adjacent block for the undefined adjacent block, wherein each extrapolated boundary block has a part of the contour A of an object and each undefined adjacent block is shown as a shaded region as shown in FIG. 4 . If more than one extrapolated boundary block surrounds the undefined adjacent block, one of the left, the upper, the right and the below extrapolated boundary blocks of the undefined adjacent block is selected in this priority and, then, a vertical or a horizontal border of the selected extrapolated boundary block is repeated rightwards, downwards, leftwards or upwards, wherein the vertical or the horizontal border adjoins the undefined adjacent block. As shown in FIG. 4 , the undefined adjacent blocks JB4, JB10, JB15, JB21 and JB28 select their respective left extrapolated boundary blocks a2, a5, a9, a13 and a14; the undefined adjacent blocks JB20, JB27 and JB22 select their respective upper extrapolated boundary blocks a10, a14 and a13; the undefined adjacent blocks JB1, JB9, JB14 and JB19 select their respective right extrapolated boundary blocks a1, a3, a6 and a10; and the undefined adjacent blocks JB2 and JB3 select their respective below extrapolated boundary blocks a1 and a2.
A rightmost vertical border of the extrapolated boundary block a2 is expanded rightward to fill the undefined adjacent block JB4, a lowermost horizontal border of the extrapolated boundary block a10 is expanded downward to fill the undefined adjacent block JB20 and so on. Also, undefined diagonal blocks such as M1, M2, M5 and M7 to M11 may be padded with a constant value, e.g., ‘128’ to be the extrapolated adjacent block for the undefined diagonal block, wherein each undefined diagonal block is diagonally adjacent to the extrapolated boundary block and has all undefined texture pixels.
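The priority selection and border replication of step S208 can be sketched as follows; `neighbors` maps the directions in which extrapolated boundary or interior blocks exist, and the names are illustrative:

```python
import numpy as np

def expand_adjacent_block(neighbors, shape, constant=128):
    """Pad an undefined adjacent block from one neighboring extrapolated
    block, chosen in the priority left > upper > right > below, by
    repeating the adjoining vertical or horizontal border; blocks with
    only diagonal neighbors fall back to a constant value."""
    rows, cols = shape
    if 'left' in neighbors:    # repeat the left neighbor's rightmost column
        return np.tile(neighbors['left'][:, -1:], (1, cols))
    if 'upper' in neighbors:   # repeat the upper neighbor's lowermost row
        return np.tile(neighbors['upper'][-1:, :], (rows, 1))
    if 'right' in neighbors:   # repeat the right neighbor's leftmost column
        return np.tile(neighbors['right'][:, :1], (1, cols))
    if 'below' in neighbors:   # repeat the below neighbor's uppermost row
        return np.tile(neighbors['below'][:1, :], (rows, 1))
    return np.full(shape, constant)  # undefined diagonal block

left_block = np.array([[1, 2], [3, 4]])
jb = expand_adjacent_block({'left': left_block}, (2, 3))
```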
As described above, at step S211, the extrapolated boundary and the extrapolated adjacent blocks as well as the interior blocks are stored.
While the present invention has been described with respect to the particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
Claims (18)
1. A method for encoding interlaced texture information on a texture macroblock basis through a motion estimation between a current VOP and its one or more reference VOP's, wherein each texture macroblock of the current and the reference VOP's has M×N defined or undefined texture pixels, M and N being positive even integers, respectively, the method comprising the steps of:
(a) detecting whether said each texture macroblock of each reference VOP is a boundary block or not, wherein the boundary block has at least one defined texture pixel and at least one undefined texture pixel;
(b) dividing the boundary block into two field blocks, each field block having M/2×N texture pixels;
(c) extrapolating the undefined texture pixels of each field block based on the defined texture pixels thereof to generate an extrapolated boundary block for said two field blocks;
(d) if the boundary block has an undefined field block and a defined field block, padding the undefined field block based on the defined field block, wherein the undefined field block and the defined field block represent one field block having the undefined texture pixels only and the other field block having at least one defined texture pixel, respectively; and
(f) expanding an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has only undefined texture pixels,
wherein the step (c) further includes the step of (c1) field-padding said at least one undefined texture pixel in a field block from said at least one defined texture pixel therein, to thereby generate a padded field block for the field block,
wherein the step (c1) has the steps of:
(c11) row-padding said at least one undefined texture pixel on a row-by-row basis to generate a padded row; and
(c12) padding, if there exists a transparent row, the transparent row from at least one nearest padded row, wherein the transparent row represents a row having the undefined texture pixels only.
2. The method as recited in claim 1 , wherein said step (f) includes the steps of:
(f1) selecting, if said undefined adjacent block is surrounded by a plurality of extrapolated boundary blocks, one of the left, the upper, the right and the below extrapolated boundary blocks of said undefined adjacent block in this priority; and
(f2) replicating a vertical or a horizontal border of the selected extrapolated boundary block rightwards, downwards, leftwards or upwards, to thereby expand the undefined adjacent block, wherein the vertical or the horizontal border adjoins said undefined adjacent block.
3. The method as recited in claim 1 , wherein all the undefined texture pixels of said undefined field block are padded with a constant value.
4. The method as recited in claim 3 , wherein all the undefined texture pixels of said undefined field block are padded with a mean value of both the defined texture pixels and padded texture pixels within the padded field block for the other field block, wherein the padded texture pixels are field-padded through the step (c1).
5. The method as recited in claim 3 , wherein all the undefined texture pixels of said undefined field block are padded with a mean value of the defined texture pixels within the padded field block for the other field block.
6. The method as recited in claim 3 , wherein the constant value is 2^(L−1), wherein L is the number of bits assigned for each pixel.
7. The method as recited in claim 6 , wherein L is 8.
8. An apparatus for encoding interlaced texture information on a texture macroblock basis through a motion estimation between a current VOP and its one or more reference VOP's, wherein each texture macroblock of the current and reference VOP's has M×N texture pixels, M and N being positive even integers, respectively, the apparatus comprising:
a boundary block detector for detecting whether said each texture macroblock of each reference VOP is a boundary block or not, wherein the boundary block has at least one defined texture pixel and at least one undefined texture pixel;
a field divider for dividing the boundary block into two field blocks, each field block having M/2×N texture pixels;
a texture pixel padding circuit for extrapolating the undefined texture pixels of each field block based on the defined texture pixels thereof to generate an extrapolated boundary block for said two field blocks;
a transparent field padding circuit for padding an undefined field block of the boundary block based on the other field block thereof, wherein the undefined field block represents a field block having the undefined texture pixels only;
an adjacent block padding circuit for expanding an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has the undefined texture pixels only; and
a field-padding circuit for field-padding the undefined texture pixels in a field block from the defined texture pixels therein, to thereby generate a padded field block for the field block, wherein the field-padding circuit includes:
a horizontal padding circuit for padding the undefined texture pixels on a row-by-row basis to generate a padded row; and
a transparent row padding circuit for padding the transparent row from at least one nearest padded row, wherein the transparent row represents a row having the undefined texture pixels only.
9. The apparatus as recited in claim 8 , wherein said adjacent block padding circuit includes:
a selector for selecting one of the left, the upper, the right and the below extrapolated boundary blocks of said undefined adjacent block in this priority; and
means for replicating a vertical or a horizontal border of the selected extrapolated boundary block rightwards, downwards, leftwards or upwards, to thereby expand the undefined adjacent block, wherein the vertical or the horizontal border adjoins said undefined adjacent block.
10. The apparatus as recited in claim 8 , wherein all the undefined texture pixels of said undefined field block are padded with a constant value.
11. The apparatus as recited in claim 10 , wherein all the undefined texture pixels of said undefined field block are padded with a mean value of both the defined texture pixels and padded texture pixels within the padded field block for the other field block, wherein the padded texture pixels are field-padded through the field-padding circuit.
12. The apparatus as recited in claim 10 , wherein all the undefined texture pixels of said undefined field block are padded with a mean value of the defined texture pixels within the padded field block for the other field block.
13. The apparatus as recited in claim 10 , wherein the constant value is 2^(L−1), L being the number of bits assigned for each pixel.
14. The apparatus as recited in claim 13 , wherein L is 8.
15. An apparatus for encoding interlaced texture information on a texture macroblock basis using a field prediction between a current VOP and one or more reference VOP's, the apparatus comprising:
a motion estimator configured to determine a field motion vector for each current top or bottom field block on a field-by-field basis, the each current top or bottom field block comprising undefined texture pixels and defined texture pixels;
a motion compensator configured to provide a predicted top or bottom field block for each current top or bottom field block;
a subtractor configured to subtract the predicted top or bottom field block from each current top or bottom field block on a corresponding pixel-by-pixel basis to obtain the error field block;
a texture encoding circuit configured to discrete-cosine transform the error field block on a DCT block-by-DCT block basis, and to quantize the discrete-cosine-transformed coefficients;
a statistical encoding circuit configured to perform a statistical encoding on the quantized coefficient fed from the texture encoding circuit and the field motion vector for each current top or bottom field block fed from the motion estimator;
a texture reconstruction circuit configured to perform an inverse quantization and inverse transform on the quantized transform coefficients to obtain a reconstructed error field block;
an adder configured to combine the reconstructed error field block from the texture reconstruction circuit and the predicted top or bottom field block from the motion compensator on a pixel-by-pixel basis; and
a reference frame processing circuit configured to pad a reconstructed top or bottom field block based on shape information for the current VOP, to thereby store the padded top or bottom field blocks as reference interlaced texture information, the reference frame processing circuit comprising:
a texture pixel padding circuit configured to extrapolate the undefined texture pixels based on the defined texture pixels to generate an extrapolated boundary block for the top or the bottom field block;
a transparent field padding circuit configured to pad an undefined field block of the extrapolated boundary block based on the other field block thereof, wherein the undefined field block represents a field block comprising the undefined texture pixels only;
an adjacent block padding circuit configured to expand an undefined adjacent block based on the extrapolated boundary block, wherein the undefined adjacent block is adjacent to the extrapolated boundary block and has the undefined texture pixels only;
a field-padding circuit configured to pad the undefined texture pixels in a field block from the defined texture pixels therein, to thereby generate a padded field block for the field block;
a transparent row padding circuit configured to pad the transparent row from at least one nearest padded row, wherein the transparent row represents a row having the undefined texture pixels only;
a first padding device configured to pad the undefined texture pixel in a row of the field block having at least one defined texture pixel, based on one or more of the defined texture pixels in said row,
a second padding device configured to pad the undefined texture pixel in a row of the field block having at least one defined texture pixel, based on one or more of the defined texture pixels in another one or more rows in said field block, and
a third padding device configured to pad the field block of a boundary VOP having only the undefined texture pixels with a constant value.
16. The apparatus of claim 15, wherein the reference frame processing circuit further comprises:
an adjacent macroblock padding circuit for expanding an undefined adjacent macroblock based on the padded boundary macroblock, wherein the undefined adjacent macroblock is adjacent to the padded boundary macroblock and has only undefined texture pixels, and
a remaining macroblock padding circuit for padding the exterior macroblock not adjacent to the padded boundary macroblock with a constant value.
17. The apparatus of claim 15, wherein the constant value is 2^(L−1), and L is a number of bits assigned for each pixel.
18. The apparatus of claim 15, wherein said padding of the field block with a constant value includes padding said field block with a constant value of 128.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/819,207 USRE43060E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR98-8637 | 1998-03-14 | ||
KR1019980008637A KR100285599B1 (en) | 1998-03-14 | 1998-03-14 | Device and method for texture padding for motion estimation in alternate line encoding |
US09/088,375 US6259732B1 (en) | 1998-03-14 | 1998-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US61193803A | 2003-07-03 | 2003-07-03 | |
US12/131,723 USRE41951E1 (en) | 1998-03-14 | 2008-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,207 USRE43060E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/088,375 Reissue US6259732B1 (en) | 1998-03-14 | 1998-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
Publications (1)
Publication Number | Publication Date |
---|---|
USRE43060E1 true USRE43060E1 (en) | 2012-01-03 |
Family
ID=19534803
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/088,375 Ceased US6259732B1 (en) | 1998-03-14 | 1998-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US12/131,712 Expired - Lifetime USRE41383E1 (en) | 1998-03-14 | 2008-06-02 | Method and apparatus for encoding interplaced macroblock texture information |
US12/131,723 Expired - Lifetime USRE41951E1 (en) | 1998-03-14 | 2008-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,210 Expired - Lifetime USRE43130E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,207 Expired - Lifetime USRE43060E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,208 Expired - Lifetime USRE43061E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,209 Expired - Lifetime USRE43129E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/088,375 Ceased US6259732B1 (en) | 1998-03-14 | 1998-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US12/131,712 Expired - Lifetime USRE41383E1 (en) | 1998-03-14 | 2008-06-02 | Method and apparatus for encoding interplaced macroblock texture information |
US12/131,723 Expired - Lifetime USRE41951E1 (en) | 1998-03-14 | 2008-06-02 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,210 Expired - Lifetime USRE43130E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/819,208 Expired - Lifetime USRE43061E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
US12/819,209 Expired - Lifetime USRE43129E1 (en) | 1998-03-14 | 2010-06-20 | Method and apparatus for encoding interlaced macroblock texture information |
Country Status (8)
Country | Link |
---|---|
US (7) | US6259732B1 (en) |
EP (3) | EP1940177B1 (en) |
JP (1) | JPH11298901A (en) |
KR (1) | KR100285599B1 (en) |
CN (1) | CN1159915C (en) |
AU (2) | AU762187B2 (en) |
ES (3) | ES2380590T3 (en) |
WO (1) | WO1999048298A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160227215A1 (en) * | 2001-12-17 | 2016-08-04 | Microsoft Technology Licensing, Llc | Video coding / decoding with re-oriented transforms and sub-block transform sizes |
US10958917B2 (en) | 2003-07-18 | 2021-03-23 | Microsoft Technology Licensing, Llc | Decoding jointly coded transform type and subblock pattern information |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
ES2162411T3 (en) * | 1997-01-30 | 2001-12-16 | Matsushita Electric Ind Co Ltd | DIGITAL IMAGE FILLING PROCEDURE, IMAGE PROCESSING DEVICE AND DATA RECORDING MEDIA. |
US6952501B2 (en) * | 2000-02-24 | 2005-10-04 | Canon Kabushiki Kaisha | Image processing apparatus, image encoding apparatus, and image decoding apparatus |
US6718066B1 (en) * | 2000-08-14 | 2004-04-06 | The Hong Kong University Of Science And Technology | Method and apparatus for coding an image object of arbitrary shape |
US7197070B1 (en) * | 2001-06-04 | 2007-03-27 | Cisco Technology, Inc. | Efficient systems and methods for transmitting compressed video data having different resolutions |
US8265163B2 (en) * | 2001-12-21 | 2012-09-11 | Motorola Mobility Llc | Video shape padding method |
US7362374B2 (en) * | 2002-08-30 | 2008-04-22 | Altera Corporation | Video interlacing using object motion estimation |
US7860326B2 (en) * | 2003-09-22 | 2010-12-28 | Kddi Corporation | Adaptable shape image encoding apparatus and decoding apparatus |
US7480335B2 (en) * | 2004-05-21 | 2009-01-20 | Broadcom Corporation | Video decoder for decoding macroblock adaptive field/frame coded video data with spatial prediction |
US7623682B2 (en) * | 2004-08-13 | 2009-11-24 | Samsung Electronics Co., Ltd. | Method and device for motion estimation and compensation for panorama image |
KR100688383B1 (en) * | 2004-08-13 | 2007-03-02 | 경희대학교 산학협력단 | Motion estimation and compensation for panorama image |
JP2008543209A (en) * | 2005-06-03 | 2008-11-27 | エヌエックスピー ビー ヴィ | Video decoder with hybrid reference texture |
CN100466746C (en) * | 2005-07-21 | 2009-03-04 | 海信集团有限公司 | Method for information selecting and dividing based on micro block inner edge |
KR101536732B1 (en) | 2008-11-05 | 2015-07-23 | 삼성전자주식회사 | Device and method of reading texture data for texture mapping |
US9787966B2 (en) * | 2012-09-04 | 2017-10-10 | Industrial Technology Research Institute | Methods and devices for coding interlaced depth data for three-dimensional video content |
US9544612B2 (en) * | 2012-10-04 | 2017-01-10 | Intel Corporation | Prediction parameter inheritance for 3D video coding |
JP6095955B2 (en) * | 2012-11-16 | 2017-03-15 | ルネサスエレクトロニクス株式会社 | Measuring method, measuring apparatus and measuring program |
US10249029B2 (en) | 2013-07-30 | 2019-04-02 | Apple Inc. | Reconstruction of missing regions of images |
US10104397B2 (en) * | 2014-05-28 | 2018-10-16 | Mediatek Inc. | Video processing apparatus for storing partial reconstructed pixel data in storage device for use in intra prediction and related video processing method |
US10142613B2 (en) * | 2015-09-03 | 2018-11-27 | Kabushiki Kaisha Toshiba | Image processing apparatus, image processing system, and image processing method |
FI20165256A (en) * | 2016-03-24 | 2017-09-25 | Nokia Technologies Oy | Hardware, method and computer program for video encoding and decoding |
EP4447452A2 (en) * | 2016-10-04 | 2024-10-16 | B1 Institute of Image Technology, Inc. | Image data encoding/decoding method and apparatus |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0577365A2 (en) | 1992-06-29 | 1994-01-05 | Sony Corporation | Encoding and decoding of picture signals |
US5623310A (en) * | 1995-03-31 | 1997-04-22 | Daewoo Electronics Co., Ltd. | Apparatus for encoding a video signal employing a hierarchical image segmentation technique |
US5929915A (en) * | 1997-12-02 | 1999-07-27 | Daewoo Electronics Co., Ltd. | Interlaced binary shape coding method and apparatus |
US5991453A (en) * | 1996-09-30 | 1999-11-23 | Kweon; Ji-Heon | Method of coding/decoding image information |
US6026195A (en) * | 1997-03-07 | 2000-02-15 | General Instrument Corporation | Motion estimation and compensation of video object planes for interlaced digital video |
US6035070A (en) * | 1996-09-24 | 2000-03-07 | Moon; Joo-Hee | Encoder/decoder for coding/decoding gray scale shape data and method thereof |
US6055330A (en) * | 1996-10-09 | 2000-04-25 | The Trustees Of Columbia University In The City Of New York | Methods and apparatus for performing digital image and video segmentation and compression using 3-D depth information |
US6069976A (en) * | 1998-04-02 | 2000-05-30 | Daewoo Electronics Co., Ltd. | Apparatus and method for adaptively coding an image signal |
- 1998
- 1998-03-14 KR KR1019980008637A patent/KR100285599B1/en active IP Right Grant
- 1998-05-15 AU AU74562/98A patent/AU762187B2/en not_active Expired
- 1998-05-15 EP EP20080004286 patent/EP1940177B1/en not_active Expired - Lifetime
- 1998-05-15 EP EP19980921904 patent/EP1076998B1/en not_active Expired - Lifetime
- 1998-05-15 EP EP20080004285 patent/EP1933567B1/en not_active Expired - Lifetime
- 1998-05-15 ES ES98921904T patent/ES2380590T3/en not_active Expired - Lifetime
- 1998-05-15 ES ES08004286T patent/ES2380502T3/en not_active Expired - Lifetime
- 1998-05-15 WO PCT/KR1998/000122 patent/WO1999048298A1/en active IP Right Grant
- 1998-05-15 ES ES08004285T patent/ES2380501T3/en not_active Expired - Lifetime
- 1998-06-02 US US09/088,375 patent/US6259732B1/en not_active Ceased
- 1998-06-08 CN CNB981023681A patent/CN1159915C/en not_active Expired - Lifetime
- 1998-06-10 JP JP16235598A patent/JPH11298901A/en active Pending
- 2003
- 2003-09-10 AU AU2003244627A patent/AU2003244627B2/en not_active Expired
- 2008
- 2008-06-02 US US12/131,712 patent/USRE41383E1/en not_active Expired - Lifetime
- 2008-06-02 US US12/131,723 patent/USRE41951E1/en not_active Expired - Lifetime
- 2010
- 2010-06-20 US US12/819,210 patent/USRE43130E1/en not_active Expired - Lifetime
- 2010-06-20 US US12/819,207 patent/USRE43060E1/en not_active Expired - Lifetime
- 2010-06-20 US US12/819,208 patent/USRE43061E1/en not_active Expired - Lifetime
- 2010-06-20 US US12/819,209 patent/USRE43129E1/en not_active Expired - Lifetime
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160227215A1 (en) * | 2001-12-17 | 2016-08-04 | Microsoft Technology Licensing, Llc | Video coding / decoding with re-oriented transforms and sub-block transform sizes |
US10075731B2 (en) * | 2001-12-17 | 2018-09-11 | Microsoft Technology Licensing, Llc | Video coding / decoding with re-oriented transforms and sub-block transform sizes |
US10158879B2 (en) | 2001-12-17 | 2018-12-18 | Microsoft Technology Licensing, Llc | Sub-block transform coding of prediction residuals |
US10390037B2 (en) | 2001-12-17 | 2019-08-20 | Microsoft Technology Licensing, Llc | Video coding/decoding with sub-block transform sizes and adaptive deblock filtering |
US10958917B2 (en) | 2003-07-18 | 2021-03-23 | Microsoft Technology Licensing, Llc | Decoding jointly coded transform type and subblock pattern information |
Also Published As
Publication number | Publication date |
---|---|
CN1159915C (en) | 2004-07-28 |
EP1940177A2 (en) | 2008-07-02 |
KR100285599B1 (en) | 2001-04-02 |
ES2380590T3 (en) | 2012-05-16 |
AU2003244627A1 (en) | 2003-10-09 |
AU7456298A (en) | 1999-10-11 |
AU2003244627B2 (en) | 2005-07-14 |
EP1940177B1 (en) | 2012-01-11 |
USRE41951E1 (en) | 2010-11-23 |
USRE41383E1 (en) | 2010-06-22 |
KR19990074806A (en) | 1999-10-05 |
EP1940177A3 (en) | 2010-08-11 |
EP1933567B1 (en) | 2012-01-11 |
US6259732B1 (en) | 2001-07-10 |
EP1933567A2 (en) | 2008-06-18 |
CN1229323A (en) | 1999-09-22 |
ES2380501T3 (en) | 2012-05-14 |
EP1933567A3 (en) | 2010-08-11 |
ES2380502T3 (en) | 2012-05-14 |
USRE43129E1 (en) | 2012-01-24 |
EP1076998A1 (en) | 2001-02-21 |
USRE43130E1 (en) | 2012-01-24 |
USRE43061E1 (en) | 2012-01-03 |
AU762187B2 (en) | 2003-06-19 |
EP1076998B1 (en) | 2012-01-11 |
JPH11298901A (en) | 1999-10-29 |
WO1999048298A1 (en) | 1999-09-23 |
Similar Documents
Publication | Title |
---|---|
USRE43060E1 (en) | Method and apparatus for encoding interlaced macroblock texture information |
US5973743A (en) | Mode coding method and apparatus for use in an interlaced shape coder |
US6094225A (en) | Method and apparatus for encoding mode signals for use in a binary shape coder |
EP1135934B1 (en) | Efficient macroblock header coding for video compression |
JP4357506B2 (en) | Chrominance shape information generator |
KR100246168B1 (en) | Hierarchical motion estimation for interlaced video |
US20040028129A1 (en) | Picture encoding method and apparatus, picture decoding method and apparatus and furnishing medium |
US5978048A (en) | Method and apparatus for encoding a motion vector based on the number of valid reference motion vectors |
KR100238622B1 (en) | A motion video compression system with novel adaptive quantisation |
US6069976A (en) | Apparatus and method for adaptively coding an image signal |
US6133955A (en) | Method for encoding a binary shape signal |
US6049567A (en) | Mode coding method in a binary shape encoding |
EP0809405B1 (en) | Method and apparatus for determining an optimum grid for use in a block-based video signal coding system |
US7426311B1 (en) | Object-based coding and decoding apparatuses and methods for image signals |
EP0923250A1 (en) | Method and apparatus for adaptively encoding a binary shape signal |
GB2341030A (en) | Video motion estimation |
KR100283579B1 (en) | Method and apparatus for coding mode signals in interlaced shape coding technique |
MXPA00008676A (en) | Method and apparatus for padding interlaced macroblock texture information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
2012-12-14 | AS | Assignment | Owner name: ZTE CORPORATION, CHINA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: DAEWOO ELECTRONICS CORPORATION; REEL/FRAME: 029594/0117 |