AU758254B2 - Padding of video object planes for interlaced digital video - Google Patents


Info

Publication number
AU758254B2
AU758254B2 AU18393/01A AU1839301A
Authority
AU
Australia
Prior art keywords
vop
block
pixels
motion vector
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU18393/01A
Other versions
AU1839301A (en)
Inventor
Xuemin Chen
Robert O. Eifrig
Ajay Luthra
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arris Technology Inc
Original Assignee
Arris Technology Inc
General Instrument Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU57401/98A external-priority patent/AU728756B2/en
Application filed by Arris Technology Inc, General Instrument Corp filed Critical Arris Technology Inc
Priority to AU18393/01A priority Critical patent/AU758254B2/en
Publication of AU1839301A publication Critical patent/AU1839301A/en
Application granted granted Critical
Publication of AU758254B2 publication Critical patent/AU758254B2/en
Assigned to GENERAL INSTRUMENT CORPORATION reassignment GENERAL INSTRUMENT CORPORATION Alteration of Name(s) of Applicant(s) under S113 Assignors: GENERAL INSTRUMENT CORPORATION
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Description

S&F Ref. 410905D1I
AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: General Instrument Corporation, 8770 West Bryn Mawr Avenue, 13th Floor, Chicago, Illinois 60631, United States of America

Actual Inventor(s): Robert O. Eifrig, Xuemin Chen, Ajay Luthra

Address for Service: Spruson & Ferguson, St Martins Tower, 31 Market Street, Sydney NSW 2000

Invention Title: Padding of Video Object Planes for Interlaced Digital Video

The following statement is a full description of this invention, including the best method of performing it known to me/us:-

PADDING OF VIDEO OBJECT PLANES FOR INTERLACED DIGITAL VIDEO

BACKGROUND OF THE INVENTION

The present invention relates to a padding technique for extending the area of an interlaced coded reference video object plane (VOP).
The invention is particularly suitable for use with various multimedia applications, and is compatible with the MPEG-4 Verification Model (VM) standard described in document ISO/IEC JTC1/SC29/WG11 N1642, entitled "MPEG-4 Video Verification Model Version 7.0," April 1997, incorporated herein by reference. The MPEG-2 standard is a precursor to the MPEG-4 standard, and is described in document ISO/IEC 13818-2, entitled "Information Technology - Generic Coding of Moving Pictures and Associated Audio, Recommendation H.262," March 25, 1994, incorporated herein by reference.
MPEG-4 is a new coding standard which provides a flexible framework and an open set of coding tools for communication, access, and manipulation of digital audio-visual data. These tools support a wide range of features. The flexible framework of MPEG-4 supports various combinations of coding tools and their corresponding functionalities for applications required by the computer, telecommunication, and entertainment (e.g., TV and film) industries, such as database browsing, information retrieval, and interactive communications.
MPEG-4 provides standardized core technologies allowing efficient storage, transmission and manipulation of video data in multimedia environments. MPEG-4 achieves efficient compression, object scalability, spatial and temporal scalability, and error resilience.
The MPEG-4 video VM coder/decoder (codec) is a block- and object-based hybrid coder with motion compensation. Texture is encoded with an 8x8 Discrete Cosine Transformation (DCT) utilizing overlapped block-motion compensation. Object shapes are represented as alpha maps and encoded using a Content-based Arithmetic Encoding (CAE) algorithm or a modified DCT coder, both using temporal prediction. The coder can handle sprites as they are known from computer graphics. Other coding methods, such as wavelet and sprite coding, may also be used for special applications.
Motion compensated texture coding is a well known approach for video coding, and can be modeled as a three-stage process. The first stage is signal processing which includes motion estimation and compensation (ME/MC) and a two-dimensional spatial transformation. The objective of ME/MC and the spatial transformation is to take advantage of temporal and spatial correlations in a video sequence to optimize the rate-distortion performance of quantization and entropy coding under a complexity constraint. The most common technique for ME/MC has been block matching, and the most common spatial transformation has been the DCT.
However, special concerns arise for ME/MC of VOPs, particularly when the VOP is itself interlaced coded, and/or uses reference images which are interlaced coded.
Moreover, for arbitrarily shaped VOPs which are interlaced coded, special attention must be paid to the area of the reference image used for motion prediction.
Accordingly, it would be desirable to have an efficient technique for padding the area of a reference image for coding of interlaced VOPs.
SUMMARY OF THE INVENTION

According to a first aspect of the invention there is provided a method for padding a digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said method comprising the steps of: reordering said top and bottom field pixel lines from said interleaved order to provide a top field block comprising said top field pixel lines, and a bottom field block comprising said bottom field pixel lines; and padding said exterior pixels separately within said respective top and bottom field blocks.
According to a second aspect of the invention there is provided an apparatus for padding a digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said apparatus comprising: means for reordering said top and bottom field pixel lines from said interleaved order to provide a top field block comprising said top field pixel lines, and a bottom field block comprising said bottom field pixel lines; and means for padding said exterior pixels separately within said respective top and bottom field blocks.
After the exterior pixels have been padded, the top and bottom field pixel lines comprising the padded exterior pixels are reordered back to the interleaved order to provide the padded reference image.
According to a third aspect of the invention there is provided a decoder for recovering a padded digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said decoder comprising: a detector for detecting padding in the exterior pixels separately within respective top and bottom field blocks; said top and bottom field blocks being representative of said top and bottom field pixel lines reordered from said interleaved order; said top field block comprising reordered data from said top field pixel lines; and said bottom field block comprising reordered data from said bottom field pixel lines.
According to a fourth aspect of the invention there is provided a signal carrying a padded digital video image which includes a field coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said signal including: a top field block comprising top field pixel lines reordered from said interleaved order; a bottom field block comprising bottom field pixel lines reordered from said interleaved order; and separately padded exterior pixels within said respective top and bottom field blocks.
According to a fifth aspect of the invention there is provided a communication signal for use in a system in which horizontal and vertical motion vector components are
used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, wherein: candidate first, second and third blocks have associated horizontal and vertical motion vector components; said first block being at least a portion of a first macroblock which immediately precedes said current block in a current row; said second block being at least a portion of a second macroblock which is immediately above said current block in a preceding row; said third block being at least a portion of a third macroblock which immediately follows said second macroblock in said preceding row; and at least one of said first, second and third candidate blocks and said current block is field-coded; said communications signal including at least one of: a selected horizontal motion vector component used to differentially encode the horizontal motion vector component of said current block according to a value derived from the horizontal motion vector components of said first, second and third candidate blocks; and a selected vertical motion vector component used to differentially encode the vertical motion vector component of said current block according to a value derived from the vertical motion vector components of said first, second and third candidate blocks.
According to a sixth aspect of the invention there is provided a communications channel carrying a padded digital video image signal which includes a field coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said signal including: a top field block comprising top field pixel lines reordered from said interleaved order; a bottom field block comprising bottom field pixel lines reordered from said interleaved order; and separately padded exterior pixels within said respective top and bottom field blocks.
According to a seventh aspect of the invention there is provided a communications channel carrying a signal for use in a system in which horizontal and vertical motion vector components are used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, wherein: candidate first, second and third blocks have associated horizontal and vertical motion vector components; said first block being at least a portion of a first macroblock which immediately precedes said current block in a current row; said second block being at least a portion of a second macroblock which is immediately above said current block in a preceding row; said third block being at least a portion of a third macroblock which immediately follows said second macroblock in said preceding row; and at least one of said first, second and third candidate blocks and said current block is field-coded; said communications signal including at least one of: a selected horizontal motion vector component used to differentially encode the horizontal motion vector component of said current block according to a value derived from the horizontal motion vector components of said first, second and third candidate blocks; and a selected vertical motion vector component used to differentially encode the vertical motion vector component of said current block according to a value derived from the vertical motion vector components of said first, second and third candidate blocks.
During padding, when a particular one of the exterior pixels is located between two of the boundary pixels of the VOP in the corresponding top or bottom field block, the exterior pixel is assigned a value according to an average of the two boundary pixels. When a particular one of the exterior pixels is located between one of the boundary pixels of said VOP and an edge of the region in the corresponding field block, but not between two VOP boundary pixels in the corresponding field block, the exterior pixel is assigned a value according to one of the boundary pixels. The term "between" means bounded by interior pixels along a horizontal or vertical pixel grid line. For example, the region may be a 16x16 macroblock.
When a particular exterior pixel is located between two edges of the region in the corresponding field block, but not between a VOP boundary pixel and an edge of the region, and not between two of the VOP boundary pixels, the particular exterior pixel is assigned a value according to at least one of: a padded exterior pixel which is closest to the particular exterior pixel moving horizontally in the region; and a padded exterior pixel which is closest to the particular exterior pixel moving vertically in the region. For example, when padded exterior pixels are available moving both horizontally and vertically from the particular exterior pixel in the region, the average may be used.
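A minimal sketch of these padding rules applied to one field block follows. The function name, the NumPy-based layout, and the two-pass (horizontal, then vertical) organization are illustrative assumptions rather than details taken from the specification:

```python
import numpy as np

def pad_field_block(pixels, interior):
    """Repetitively pad the exterior pixels of one field block.

    pixels: 2-D array of pixel values; interior: boolean mask, True for
    VOP pixels. Hypothetical helper illustrating the padding rules in
    the text, not the normative MPEG-4 procedure.
    """
    h, w = pixels.shape
    out = np.where(interior, pixels.astype(float), np.nan)

    # Horizontal pass: an exterior pixel between two boundary pixels gets
    # their average; one reachable from only one side gets that value.
    for i in range(h):
        cols = np.flatnonzero(interior[i])
        if cols.size == 0:
            continue  # no VOP pixels in this row; left to the vertical pass
        for j in np.flatnonzero(~interior[i]):
            left = cols[cols < j]
            right = cols[cols > j]
            if left.size and right.size:
                out[i, j] = (pixels[i, left[-1]] + pixels[i, right[0]]) / 2.0
            elif left.size:
                out[i, j] = pixels[i, left[-1]]
            else:
                out[i, j] = pixels[i, right[0]]

    # Vertical pass: remaining exterior pixels take the nearest padded
    # value above/below, averaging when both directions are available.
    for j in range(w):
        filled = np.flatnonzero(~np.isnan(out[:, j]))
        for i in np.flatnonzero(np.isnan(out[:, j])):
            above = filled[filled < i]
            below = filled[filled > i]
            if above.size and below.size:
                out[i, j] = (out[above[-1], j] + out[below[0], j]) / 2.0
            elif above.size:
                out[i, j] = out[above[-1], j]
            elif below.size:
                out[i, j] = out[below[0], j]
    return out
```

For a field coded VOP this would be applied to the top and bottom field blocks separately, with the padded lines re-interleaved afterwards, as described above.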
BRIEF DESCRIPTION OF THE DRAWINGS

FIGURE 1 is an illustration of a video object plane (VOP) coding and decoding process.
FIGURE 2 is a block diagram of an encoder.
FIGURE 3 illustrates an interpolation scheme for a half-pixel search.
FIGURE 4 illustrates a scheme for motion estimation with a restricted motion vector.
FIGURE 5 illustrates a scheme for motion estimation with an unrestricted motion vector.
FIGURE 6 illustrates reordering of pixel lines in an adaptive frame/field prediction scheme.
FIGURE 7 illustrates a current field mode macroblock with neighboring frame mode blocks having associated candidate motion vector predictors.
FIGURE 8 illustrates a current field mode macroblock with neighboring frame mode blocks and field mode macroblock having associated candidate motion vector predictors.
FIGURE 9 illustrates a current advanced prediction mode block with neighboring frame mode blocks and field mode macroblock having associated candidate motion vector predictors.
FIGURE 10 illustrates macroblock-based VOP padding for motion prediction.
FIGURE 11 illustrates repetitive padding within a macroblock for motion prediction.
FIGURE 12 illustrates repetitive padding within a macroblock for motion prediction after grouping same-field pixel lines.
FIGURE 13 is a block diagram of a decoder.
FIGURE 14 illustrates a macroblock layer structure.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

FIGURE 1 is an illustration of a video object plane (VOP) coding and decoding process. Frame 105 includes three pictorial elements, including a square foreground element 107, an oblong foreground element 108, and a landscape backdrop element 109.
In frame 115, the elements are designated VOPs using a segmentation mask such that VOP 117 represents the square foreground element 107, VOP 118 represents the oblong foreground element 108, and VOP 119 represents the landscape backdrop element 109. A VOP can have an arbitrary shape, and a succession of VOPs is known as a video object.
A full rectangular video frame may also be considered to be a VOP. Thus, the term "VOP" will be used herein to indicate both arbitrary and non-arbitrary (e.g., rectangular) image area shapes. A segmentation mask is obtained using known techniques, and has a format similar to that of ITU-R 601 luminance data. Each pixel is identified as belonging to a certain region in the video frame.
The frame 105 and VOP data from frame 115 are supplied to separate encoding functions. In particular, VOPs 117, 118 and 119 undergo shape, motion and texture encoding at encoders 137, 138 and 139, respectively. With shape coding, binary and gray scale shape information is encoded. With motion coding, the shape information is coded using motion estimation within a frame. With texture coding, a spatial transformation such as the DCT is performed to obtain transform coefficients which can be variable-length coded for compression.
The coded VOP data is then combined at a multiplexer (MUX) 140 for transmission over a channel 145. Alternatively, the data may be stored on a recording medium. The received coded VOP data is separated by a demultiplexer (DEMUX) 150 so that the separate VOPs 117-119 are decoded and recovered.
Frames 155, 165 and 175 show that VOPs 117, 118 and 119, respectively, have been decoded and recovered and can therefore be individually manipulated using a compositor 160 which interfaces with a video library 170, for example.
The compositor may be a device such as a personal computer which is located at a user's home to allow the user to edit the received data to provide a customized image. For example, the user's personal video library 170 may include a previously stored VOP 178 (e.g., a circle) which is different than the received VOPs. The user may compose a frame 185 where the circular VOP 178 replaces the square VOP 117. The frame 185 thus includes the received VOPs 118 and 119 and the locally stored VOP 178.
In another example, the background VOP 109 may be replaced by a background of the user's choosing. For example, when viewing a television news broadcast, the announcer may be coded as a VOP which is separate from the background, such as a news studio. The user may select a background from the library 170 or from another television program, such as a channel with stock price or weather information. The user can therefore act as a video editor.
The video library 170 may also store VOPs which are received via the channel 145, and may access VOPs and other image elements via a network such as the Internet.
Generally, a video session comprises a single VOP, or a sequence of VOPs.
The video object coding and decoding process of FIGURE 1 enables many entertainment, business and educational applications, including personal computer games, virtual environments, graphical user interfaces, videoconferencing, Internet applications and the like. In particular, the capability for ME/MC with interlaced coded (i.e., field mode) VOPs in accordance with the present invention provides even greater capabilities.
FIGURE 2 is a block diagram of an encoder. The encoder is suitable for use with both predictive-coded VOPs (P-VOPs) and bi-directionally coded VOPs (B-VOPs).
P-VOPs may include a number of macroblocks which may be coded individually using an intra-frame mode or an inter-frame mode. With intra-frame (INTRA) coding, the macroblock is coded without reference to another macroblock. With inter-frame (INTER) coding, the macroblock is differentially coded with respect to a temporally subsequent frame in a mode known as forward prediction. The temporally subsequent frame is known as an anchor frame or reference frame. The anchor frame (e.g., VOP) must be a P-VOP, not a B-VOP.
With forward prediction, the current macroblock is compared to a search area of macroblocks in the anchor frame to determine the best match. A corresponding motion vector describes the relative displacement of the current macroblock relative to the best match macroblock. Additionally, an advanced prediction mode for P-VOPs may be used, where motion compensation is performed on 8x8 blocks rather than 16x16 macroblocks. Moreover, both intra-frame and inter-frame coded P-VOP macroblocks can be coded in a frame mode or a field mode.
B-VOPs can use the forward prediction mode as described above in connection with P-VOPs as well as backward prediction, bi-directional prediction, and direct mode, which are all inter-frame techniques.
B-VOPs do not currently use intra-frame coded macroblocks under the MPEG-4 Video Verification Model Version 7.0 referred to previously, although this is subject to change. The anchor frame (e.g., VOP) must be a P-VOP, not a B-VOP.
With backward prediction of B-VOPs, the current macroblock is compared to a search area of macroblocks in a temporally previous anchor frame to determine the best match. A corresponding motion vector describes the relative displacement of the current macroblock relative to the best match macroblock. With bi-directional prediction of B-VOPs, the current macroblock is compared to a search area of macroblocks in both a temporally previous anchor frame and a temporally subsequent anchor frame to determine the best match. Forward and backward motion vectors describe the relative displacement of the current macroblock relative to the best match macroblocks.
With direct mode prediction of B-VOPs, a motion vector is derived for an 8x8 block when the collocated macroblock in the following P-VOP uses the 8x8 advanced prediction mode. The motion vector of the 8x8 block in the P-VOP is linearly scaled to derive a motion vector for the block in the B-VOP without the need for searching to find a best match block.
The encoder, shown generally at 200, includes a shape coder 210, a motion estimation function 220, a motion compensation function 230, and a texture coder 240, which each receive video pixel data input at terminal 205. The motion estimation function 220, motion compensation function 230, texture coder 240, and shape coder 210 also receive VOP shape information input at terminal 207, such as the MPEG-4 parameter VOP_of_arbitrary_shape. When this parameter is zero, the VOP has a rectangular shape, and the shape coder 210 therefore is not used.
A reconstructed anchor VOP function 250 provides a reconstructed anchor VOP for use by the motion estimation function 220 and motion compensation function 230. For P-VOPs, the anchor VOP occurs after the current VOP in presentation order, and may be separated from the current VOP by one or more intermediate images. The current VOP is subtracted from a motion compensated anchor VOP at subtractor 260 to provide a residue which is encoded at the texture coder 240. The texture coder 240 performs the DCT to provide texture information (e.g., transform coefficients) to a multiplexer (MUX) 280. The texture coder 240 also provides information which is summed with the output from the motion compensator 230 at a summer 270 for input to the reconstructed anchor VOP function 250.
Motion information (e.g., motion vectors) is provided from the motion estimation function 220 to the MUX 280, while shape information which indicates the shape of the VOP is provided from the shape coding function 210 to the MUX 280. The MUX 280 provides a corresponding multiplexed data stream to a buffer 290 for subsequent communication over a data channel.
The pixel data which is input to the encoder may have a YUV 4:2:0 format. The VOP is represented by means of a bounding rectangle. The top left coordinate of the bounding rectangle is rounded to the nearest even number not greater than the top left coordinates of the tightest rectangle.
Accordingly, the top left coordinate of the bounding rectangle in the chrominance component is one-half that of the luminance component.
FIGURE 3 illustrates an interpolation scheme for a half-pixel search. Motion estimation and motion compensation (ME/MC) generally involve matching a block of a current video frame (e.g., a current block) with a block in a search area of a reference frame (e.g., a predicted block or reference block). The reference frame(s) may be separated from the current frame by one or more intermediate images. The displacement of the reference block relative to the current block is the motion vector (MV), which has horizontal and vertical components. Positive values of the MV components indicate that the predicted block is to the right of, and below, the current block.
A motion compensated difference block is formed by subtracting the pixel values of the predicted block from those of the current block point by point. Texture coding is then performed on the difference block. The coded MV and the coded texture information of the difference block are transmitted to the decoder. The decoder can then reconstruct an approximated current block by adding the quantized difference block to the predicted block according to the MV. The block for ME/MC can be a 16x16 frame block (macroblock), an 8x8 frame block or a 16x8 field block.
In ME/MC, it is generally desirable to have small residue values for the difference block, use few bits for the motion vectors, and have a low computational complexity. Due to its lower computational complexity relative to other difference measures, the Sum of Absolute Difference (SAD) is commonly used to select the motion vector which meets these criteria, as follows. Let c(i,j) be the pixels of the current block and p(i,j) be the pixels in the search range of the reference frame. Then

    SAD_N(x,y) = sum_{i=0}^{N-1} sum_{j=0}^{N-1} |c(i,j) - p(i,j)| - C,       if (x,y) = (0,0)
    SAD_N(x,y) = sum_{i=0}^{N-1} sum_{j=0}^{N-1} |c(i,j) - p(i+x,j+y)|,       otherwise

where -R <= x, y <= R, and R and C are positive constants.
The (x,y) pair resulting in the minimum SAD value is the optimum full-pixel motion vector (MV) having horizontal and vertical components, e.g., (MVx, MVy). Note that a MV of (0,0) is favored because a positive constant C is subtracted from the SAD when (x,y) = (0,0). For example, for a 16x16 block, C = 128. Accordingly, the distribution of MVs can be concentrated near (0,0) so that entropy coding of the MVs is more efficient.
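The SAD criterion can be sketched as follows. This is a toy implementation, not the specification's code: it assumes the current block is co-located with the top-left corner of the search area `ref` and that the displacement (x, y) keeps the block inside `ref`:

```python
def sad(cur, ref, x, y, C=128):
    """SAD of the NxN current block `cur` against the reference block
    displaced by (x, y) within search area `ref`. The zero vector is
    favored by subtracting C (128 for a 16x16 block per the text)."""
    n = len(cur)
    total = sum(abs(cur[i][j] - ref[i + y][j + x])
                for i in range(n) for j in range(n))
    return total - C if (x, y) == (0, 0) else total
```

The motion search would evaluate this over all candidate (x, y) in the range and keep the minimizer.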
Accuracy of the MV is set at half-pixel.
Interpolation must be used on the anchor frame so that p(i+x,j+y) is defined for x or y being half of an integer. Interpolation is performed as shown in FIGURE 3. Integer pixel positions are represented by one symbol, as shown at A, B, C and D. Half-pixel positions are indicated by circles, as shown at a, b, c and d. As seen, a = A, b = (A + B)//2, c = (A + C)//2, and d = (A + B + C + D)//4, where "//" denotes rounded division, as discussed below in connection with Tables 2 and 3.
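The four interpolated values can be sketched as below. The exact behavior of the rounded division "//" depends on rounding-control details not shown in this excerpt, so round-half-up is assumed here:

```python
def rdiv(value, divisor):
    """Rounded division '//' (assumption: halves round up)."""
    return (value + divisor // 2) // divisor

def half_pel(A, B, C, D):
    """Half-pixel samples per FIGURE 3: A, B are the upper integer
    pixels, C, D the lower ones; a is co-located with A."""
    return A, rdiv(A + B, 2), rdiv(A + C, 2), rdiv(A + B + C + D, 4)
```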
A motion compensated difference block is defined as d(i,j) = c(i,j) - p(i+x, j+y), 0 <= i, j <= N-1. The difference block d(i,j) is transformed, quantized, and entropy coded. At a decoder, the motion vector and quantized difference block are available to reconstruct the current frame as follows: c'(i,j) = d'(i,j) + p(i+x, j+y), 0 <= i, j <= N-1, where d'(i,j) is the quantized difference block and c'(i,j) is the reconstructed block.
For a color format of Y:U:V 4:2:0, the macroblock size is 16x16 pixels for the Y (luminance) component, and 8x8 pixels for the U/V (chrominance) components. The search range R (in half-pixel units) can be selected by the user and is signified by a parameter called f_code, where R = 16 x 2^f_code, as shown in Table 1, below.
For example, for f_code=1, the motion vector components can assume values from -16 to +15.5, in half-pixel increments. For f_code=2, the motion vector components can assume values from -32 to +31.5 in half-pixel increments.
Table 1

f_code   R
1        32
2        64
3        128
4        256

Motion vectors for the chrominance blocks are derived from those of the luminance blocks. Since one pixel in a chrominance block corresponds to two pixels in each direction in the corresponding luminance block, i.e., the chrominance component is at half the resolution of the luminance, the MV for the chrominance block is one-half the MV of the luminance block. Moreover, since the MV of the luminance block may have half-pixel values, the MV for the chrominance block may consequently have quarter pixel values. But, since only half-pixel interpolation is used for MC of the chrominance block, the quarter pixel values have to be rounded into half-pixel values. Table 2, below, shows how to perform the required rounding operation. For example, 1/4 is rounded to 1/2, 2/4 is the same as 1/2 and 3/4 is rounded to 1/2.
Table 2

1/4 pixel value   0   1   2   3
1/2 pixel value   0   1   1   1

Although only the reconstructed previous frame is available at the decoder for MC, there is a choice at the encoder for ME to use either the reconstructed previous frame or the original previous frame. It is advantageous to use the original previous frame in ME, but not MC, since the complexity is lower, and the MV may represent true motion more closely so that the chrominance components may be more accurately predicted.
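The Table 2 rounding can be sketched as follows, representing a luminance MV component as an integer count of half-pixel units (this integer representation is an assumption made for illustration):

```python
def chroma_mv_component(luma_half_pel):
    """Halve a luminance MV component (in half-pixel units) to get the
    chrominance MV in pixels, rounding quarter-pel values per Table 2:
    a remainder of 1, 2 or 3 quarter-pels becomes a half-pixel."""
    sign = -1 if luma_half_pel < 0 else 1
    quarters = abs(luma_half_pel)      # chroma magnitude in quarter-pels
    whole, rem = divmod(quarters, 4)   # whole pixels + quarter-pel remainder
    half = 0 if rem == 0 else 2        # Table 2: 0 -> 0; 1, 2, 3 -> 2 quarters
    return sign * (4 * whole + half) / 4.0
```

For example, a luminance component of 1.5 pixels (3 half-pel units) halves to 0.75 pixel, which rounds to 0.5 pixel for the chrominance block.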
For some video sequences, such as where there is fast motion or a scene change, coding the difference block may require more bits than direct DCT coding of the actual intensity values of the current block. Accordingly, it is desirable to have a decision criterion for adaptively choosing to code the current block directly (i.e., INTRA mode) or differentially (i.e., INTER mode). The following parameters are calculated to make the INTRA/INTER decision:

    mean = (1/N^2) * sum_{i=0}^{N-1} sum_{j=0}^{N-1} c(i,j), and
    A = sum_{i=0}^{N-1} sum_{j=0}^{N-1} |c(i,j) - mean|,

where N is the size of the block (e.g., N=16). The INTRA mode is chosen if A < (SAD_inter(MVx, MVy) - 2*Nc); otherwise, the INTER mode is used. Note that the MV
FIGURE 4 illustrates a scheme for motion estimation with a restricted motion vector. Advanced ME/MC techniques include unrestricted motion vector, advanced prediction, and bi-directional ME/MC. In the basic ME/MC technique, the predicted block is a block in the previous frame. However, if the current block is at a comer or border of the current frame, the range of the MV is restricted. An advanced technique is to allow unrestricted MVs for such corer and border blocks.
With this technique, the previous frame is extended in all four directions left, top, right, and bottom) by repeating padding) the border pixels a number of times according to a code word f_code, described above in Table 1) which indicates the relative range of motion. With a larger range of motion, a correspondingly larger search area is required. The difference block is generated by applying ME/MC against the extended previous frame and taking the difference of the current block and the predicted block that may be partially out of the frame boundary. This technique improves the coding efficiency of the boundary blocks and can result in an improved image.
,6 -24- For example, with the basic ME/MC technique, a previous frame 400 includes a predicted block 410 which is in a search range 420. The relative macroblock (MB) position of the current frame is shown with a dashed line 430. The corresponding motion vector may be, for example, (MVx, MVy) if the predicted block is displaced eight pixels horizontally to the right and zero pixels vertically.
FIGURE 5 illustrates a scheme for motion estimation with an unrestricted motion vector. Like-numbered elements correspond to the elements in FIGURE 4. With this advanced ME/MC technique, the search range 520 can cross over the boundary of the previous frame 400 into an extended previous frame 500. The corresponding motion o0 vector may be, for example, (MVx, MVy) if the predicted block is displaced eight pixels horizontally to the right and ten pixels vertically upward, where vertically downward is taken as the positive direction.
FIGURE 6 illustrates reordering of pixel lines in an adaptive frame/field prediction scheme. In a first aspect of the advanced prediction technique, an adaptive technique is used to decide whether a current macroblock of 16x16 pixels should be ME/MC coded as is, divided into four blocks of 8x8 pixels each, where each 8x8 block is ME/MC coded separately, or coded with field based motion estimation, where the pixel lines of the macroblock are reordered to group the same-field lines in two 16x8 field blocks, and each 16x8 block is separately ME/MC coded.
A field mode image, e.g., a 16x16 macroblock, is shown generally at 600. The macroblock includes even-numbered lines 602, 604, 606, 608, 610, 612, 614 and 616, and odd-numbered lines 603, 605, 607, 609, 611, 613, 615 and 617. The even and odd lines are thus interleaved, and form top and bottom (or first and second) fields, respectively.
When the pixel lines in image 600 are permuted to form same-field luminance blocks, the macroblock shown generally at 650 is formed. Arrows, shown generally at 645, indicate the reordering of the lines 602-617. For example, the even line 602, which is the first line of macroblock 600, is also the first line of macroblock 650. The even line 604 is reordered as the second line in macroblock 650.
Similarly, the even lines 606, 608, 610, 612, 614 and 616 are reordered as the third through eighth lines, respectively, of macroblock 650. Thus, a 16x8 luminance region 680 with even-numbered lines is formed. Similarly, the odd-numbered lines 603, 605, 607, 609, 611, 613, 615 and 617 form a 16x8 region 685.
The decision process for choosing the MC mode for P-VOPs is as follows. For frame mode video, first obtain the Sum of Absolute Differences (SAD) for a single 16x16 block, SAD16(MVx, MVy), and for four 8x8 blocks, SAD8(MVx1, MVy1), SAD8(MVx2, MVy2), SAD8(MVx3, MVy3) and SAD8(MVx4, MVy4). If

    sum_{i=1..4} SAD8(MVxi, MVyi) < SAD16(MVx, MVy) - 128,

choose 8x8 prediction; otherwise, choose 16x16 prediction.
For interlaced video, obtain SAD_top(MVx_top, MVy_top) and SAD_bottom(MVx_bottom, MVy_bottom), where (MVx_top, MVy_top) and (MVx_bottom, MVy_bottom) are the motion vectors for the top (even) and bottom (odd) fields, respectively. Then, choose the reference field which has the smallest SAD (for SAD_top and SAD_bottom) from the field half sample search.
The overall prediction mode decision is based on choosing the minimum of:

(1) SAD16(MVx, MVy),
(2) sum_{i=1..4} SAD8(MVxi, MVyi) + 128, and
(3) SAD_top(MVx_top, MVy_top) + SAD_bottom(MVx_bottom, MVy_bottom) + 64.

If term (1) is the minimum, 16x16 prediction is used. If term (2) is the minimum, 8x8 motion compensation (advanced prediction mode) is used. If term (3) is the minimum, field based motion estimation is used.
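The three-way decision above can be sketched as follows; the function name and the dictionary representation are illustrative, not taken from the standard:

```python
def choose_prediction_mode(sad16, sad8_list, sad_top, sad_bottom):
    """Choose among 16x16, four-8x8 and field prediction by comparing
    the biased SAD totals from terms (1)-(3) above."""
    candidates = {
        '16x16': sad16,                         # term (1)
        '8x8': sum(sad8_list) + 128,            # term (2)
        'field': sad_top + sad_bottom + 64,     # term (3)
    }
    return min(candidates, key=candidates.get)  # smallest biased SAD wins
```

Note that the biases (+128, +64) favor the single 16x16 vector when the SADs are close, since it costs the fewest bits to transmit.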
If 8x8 prediction is chosen, there are four MVs for the four 8x8 luminance blocks, one MV for each 8x8 block. The MV for the two chrominance blocks is then obtained by taking an average of these four MVs and dividing the average value by two. Since each MV for the 8x8 luminance blocks has half-pixel accuracy, the MV for the chrominance blocks may have a sixteenth pixel value. Table 3,
below, specifies the conversion of a sixteenth pixel value to a half-pixel value for chrominance MVs.
For example, 0 through 2/16 are rounded to 0, 3/16 through 13/16 are rounded to 1/2, and 14/16 and 15/16 are rounded to 2/2=1.
Table 3

1/16 pixel value: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1/2 pixel value:  0 0 0 1 1 1 1 1 1 1 1  1  1  1  2  2

With field prediction, there are two MVs for the two 16x8 blocks. The luminance prediction is generated as follows. The even lines of the macroblock (i.e., lines 602, 604, 606, 608, 610, 612, 614 and 616) are defined by the top field motion vector using the reference field specified.
The motion vector is specified in frame coordinates such that full pixel vertical displacements correspond to even integral values of the vertical motion vector coordinate, and a half-pixel vertical displacement is denoted by odd integral values.
When a half-pixel vertical offset is specified, only pixels from lines within the same reference field are combined.
The MV for the two chrominance blocks is derived from the (luminance) motion vector by dividing each component by 2, then rounding as follows. The horizontal component is rounded by mapping all fractional values into a half-pixel offset. This is the same procedure as described in Table 2. The vertical luminance motion vector component is an integer and the resulting chrominance motion vector vertical component is rounded to an integer. If the result of dividing by two yields a non-integral value, it is rounded to the adjacent odd integer.
Note that the odd integral values denote vertical interpolation between lines of the same field.
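For the four-MV (8x8) case described earlier, the chrominance MV derivation with the Table 3 rounding can be sketched as follows. Half-pel integer units are assumed, and the function name is illustrative:

```python
# Table 3: sixteenth-pel remainder -> half-pel offset (0, 1 or 2).
SIXTEENTH_TO_HALF = [0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2]

def chroma_mv_component(luma_mvs):
    """One chrominance MV component from the four 8x8 luminance MV
    components, all in half-pel integer units.  Summing the four values
    gives the average divided by two, expressed in sixteenth-pel units;
    the fractional sixteenths are then rounded per Table 3."""
    s = sum(luma_mvs)
    sign = -1 if s < 0 else 1
    s = abs(s)
    # Each whole chroma pixel is 2 half-pel units; remainder via Table 3.
    return sign * ((s // 16) * 2 + SIXTEENTH_TO_HALF[s % 16])
```

For example, four luminance components of 3 half-pels average to 1.5 pels; divided by two this is 0.75 pel = 12/16, which Table 3 rounds to 1/2 pel, i.e. one half-pel unit.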
The second aspect of the advanced prediction technique is overlapped MC for luminance blocks. In the following discussion, four MVs are always assumed for a 16x16 luminance block. The case of one 16x16 MV can be considered as having four identical 8x8 MVs. Each pixel in an 8x8 luminance predicted block is a weighted sum of three prediction values, specified in the following equation:

    p(i,j) = (H0(i,j)q(i,j) + H1(i,j)r(i,j) + H2(i,j)s(i,j)) / 8,

where the division by eight is with rounding off to the nearest half-pixel, with rounding away from zero.
The weighting matrices for H0(i,j), H1(i,j) and H2(i,j) are specified in Tables 4-6, respectively, below. (i,j) = (0,0) is the upper left hand value in each table, and (i,j) = (7,7) is the lower right hand corner value.
In Table 5, the top four rows indicate the top neighbor motion vector weights, while the bottom four rows indicate the bottom neighbor motion vector weights. In Table 6, the four left-hand columns indicate the left-hand neighbor motion vector weights, while the four right-hand columns indicate the right-hand neighbor motion vector weights.
Table 4

4 5 5 5 5 5 5 4
5 5 5 5 5 5 5 5
5 5 6 6 6 6 5 5
5 5 6 6 6 6 5 5
5 5 6 6 6 6 5 5
5 5 6 6 6 6 5 5
5 5 5 5 5 5 5 5
4 5 5 5 5 5 5 4

Table 5

2 2 2 2 2 2 2 2
1 1 2 2 2 2 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 2 2 2 2 1 1
2 2 2 2 2 2 2 2

Table 6

2 1 1 1 1 1 1 2
2 2 1 1 1 1 2 2
2 2 1 1 1 1 2 2
2 2 1 1 1 1 2 2
2 2 1 1 1 1 2 2
2 2 1 1 1 1 2 2
2 2 1 1 1 1 2 2
2 1 1 1 1 1 1 2

The values of q(i,j), r(i,j) and s(i,j) are the pixels of the previous frame, defined as follows.
q(i,j) = p(i + MVx0, j + MVy0),
r(i,j) = p(i + MVx1, j + MVy1),
s(i,j) = p(i + MVx2, j + MVy2),

where (MVx0, MVy0) is the MV of the current 8x8 luminance block, (MVx1, MVy1) is the MV of the block either above (for j = 0,1,2,3) or below (for j = 4,5,6,7) the current block, and (MVx2, MVy2) is the MV of the block either to the left (for i = 0,1,2,3) or right (for i = 4,5,6,7) of the current block.
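A sketch of this weighted prediction is below. The function and callback names are illustrative, MVs are restricted to integer-pel here for simplicity, and the `+ 4` implements the division by eight with rounding for the non-negative sums that occur with pixel data:

```python
import numpy as np

# Weighting matrices from Tables 4-6, indexed [j][i] (row, column).
H0 = np.array([[4,5,5,5,5,5,5,4],
               [5,5,5,5,5,5,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,6,6,6,6,5,5],
               [5,5,5,5,5,5,5,5],
               [4,5,5,5,5,5,5,4]])
H1 = np.array([[2,2,2,2,2,2,2,2],
               [1,1,2,2,2,2,1,1],
               [1,1,1,1,1,1,1,1],
               [1,1,1,1,1,1,1,1],
               [1,1,1,1,1,1,1,1],
               [1,1,1,1,1,1,1,1],
               [1,1,2,2,2,2,1,1],
               [2,2,2,2,2,2,2,2]])
H2 = np.array([[2,1,1,1,1,1,1,2],
               [2,2,1,1,1,1,2,2],
               [2,2,1,1,1,1,2,2],
               [2,2,1,1,1,1,2,2],
               [2,2,1,1,1,1,2,2],
               [2,2,1,1,1,1,2,2],
               [2,2,1,1,1,1,2,2],
               [2,1,1,1,1,1,1,2]])

def obmc_block(prev, ox, oy, mv_cur, mv_tb, mv_lr):
    """Overlapped MC for the 8x8 block with top-left pixel (ox, oy) in
    `prev`.  mv_tb(j) yields the above (j < 4) or below (j >= 4)
    neighbour MV; mv_lr(i) yields the left (i < 4) or right (i >= 4)
    neighbour MV; each MV is an (mvx, mvy) pair."""
    out = np.empty((8, 8), dtype=int)
    for j in range(8):
        for i in range(8):
            q = prev[oy + j + mv_cur[1], ox + i + mv_cur[0]]
            mv1 = mv_tb(j)
            r = prev[oy + j + mv1[1], ox + i + mv1[0]]
            mv2 = mv_lr(i)
            s = prev[oy + j + mv2[1], ox + i + mv2[0]]
            out[j, i] = (H0[j, i]*q + H1[j, i]*r + H2[j, i]*s + 4) // 8
    return out
```

A useful sanity check on the tables is that the three weights sum to 8 at every pixel position, so identical MVs reproduce ordinary (non-overlapped) prediction.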
FIGURE 7 illustrates a current field mode macroblock with neighboring frame mode blocks having associated candidate motion vector predictors. A P-VOP may be used. When using INTER mode coding, the motion vectors for the current image blocks must be transmitted. The motion vectors are coded differentially by using a spatial neighborhood of motion vectors which have already been transmitted. That is, since data is processed from top to bottom and from left to right across an image in virtually all video standards, motion vectors from blocks which are above and/or to the left of a current block are available for processing the current block. These motion vectors are therefore candidate predictors for differential coding.
Motion vector coding is performed separately on the horizontal and vertical components of the current block. For each MV component in a P-VOP, for example, the median value of the candidate predictors for the same component may be computed, and the difference value between the component and median values may be coded using variable length codes.
When interlaced coding tools are used, candidate predictors for field-based motion vectors in a current P-VOP can be obtained as follows. Let macroblock 700 be a current field mode macroblock (e.g., 16x16 pixels). Surrounding macroblocks include a macroblock 710 which immediately precedes the current macroblock 700 in the current row 715, a macroblock 720 which is immediately above the current macroblock in a preceding row 725, and a macroblock 730 which immediately follows the macroblock 720 in the preceding row.
The macroblock 700 has associated first field horizontal and vertical motion vectors MVxf1 and MVyf1, respectively, and second field horizontal and vertical motion vectors MVxf2 and MVyf2, respectively.
Vertical and horizontal motion vector components are not shown separately in FIGURE 7 for simplicity.
For example, assume the first field includes even-numbered rows, and the second field includes odd-numbered rows. Moreover, in the example shown, advanced prediction mode is used for macroblocks 710, 720 and 730, so that macroblock 710 includes an 8x8 candidate block 712 with associated horizontal and vertical motion vector components MV1x and MV1y, respectively, macroblock 720 includes an 8x8 candidate block 722 with associated horizontal and vertical motion vector components MV2x and MV2y, respectively, and macroblock 730 includes an 8x8 candidate block 732 with associated horizontal and vertical motion vector components MV3x and MV3y, respectively. However, it is also possible for any or all of the candidate blocks 712, 722 and 732 to be macroblocks, where advanced prediction mode is not used. None of the macroblocks 710, 720, 730 is field predicted in the present illustration.
When a particular 8x8 sub-block of a macroblock is used as a candidate, the macroblock will have three other sub-blocks with associated horizontal and vertical motion vector components which are suitable for use in differentially encoding the motion vector components of the current field coded macroblock 700. Generally, it is desirable to select the sub-block in the particular macroblock which is closest to the upper left-hand portion of the current macroblock as the candidate block, as shown.
Predictor horizontal and vertical motion vector components, Px and Py, respectively, can be determined from Px = median(MV1x, MV2x, MV3x) and Py = median(MV1y, MV2y, MV3y). It has been found that the use of the median provides efficient coding. The median is the middle number in an ordered sequence having an odd number of elements, or the average of the two middle numbers of a sequence having an even number of elements. For example, the median of the sequence (1, 2, 4) is 2, and the median of the sequence (1, 2, 4, 10) is 3. The median is the same as the average when there are two numbers in a sequence.
Other functions besides the median may be used. For example, the average may be used, e.g., Px = 1/3 MV1x + 1/3 MV2x + 1/3 MV3x. Alternatively, some other weighting scheme may be used, e.g., Px = 0.4 MV1x + 0.4 MV2x + 0.2 MV3x. Moreover, while three candidate blocks are used in the present example, two or more may be used.
Furthermore, the location of the candidate blocks may vary. For example, a candidate block (not shown) which immediately precedes macroblock 720 may be used. Moreover, for coding schemes which employ sufficient buffering capability, candidate blocks which follow the current macroblock 700 in the current row 715, or a subsequent row (not shown) may be used.
For differential coding of the current macroblock to obtain a motion vector difference value, MVD, both fields use the same predictor. That is, MVDxf1 = MVxf1 - Px, MVDyf1 = MVyf1 - Py, MVDxf2 = MVxf2 - Px, and MVDyf2 = MVyf2 - Py.
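The predictor computation and differential coding just described can be sketched as follows; the function names are illustrative:

```python
def median(values):
    """Median as defined above: the middle element of an ordered
    odd-length sequence, or the mean of the two middle elements of an
    even-length sequence."""
    v = sorted(values)
    n = len(v)
    mid = n // 2
    return v[mid] if n % 2 else (v[mid - 1] + v[mid]) / 2

def field_mv_differences(mv_f1, mv_f2, candidates):
    """Differentially code both field MVs of the current field-coded
    macroblock against one shared predictor (Px, Py).  `candidates` is
    a list of (MVx, MVy) pairs from the neighbouring blocks."""
    px = median([mv[0] for mv in candidates])
    py = median([mv[1] for mv in candidates])
    return ((mv_f1[0] - px, mv_f1[1] - py),
            (mv_f2[0] - px, mv_f2[1] - py))
```

Both fields sharing one predictor keeps the neighborhood bookkeeping simple in this case; the field-coded-neighbor case below uses per-field predictors instead.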
FIGURE 8 illustrates a current field mode macroblock with neighboring frame mode blocks and a field mode macroblock having associated candidate motion vector predictors. A P-VOP may be used. Like-numbered elements correspond to the elements in FIGURE 7. Vertical and horizontal motion vector components are not shown separately in FIGURE 8 for simplicity. Here, the candidate macroblock 820 which is immediately above the current macroblock 700 in the previous row 725 is field coded.
Thus, macroblock 820 has associated first and second field motion vectors, including horizontal motion vector components MV2xf1 and MV2xf2, and vertical motion vector components MV2yf1 and MV2yf2.
Generally, when the current macroblock is field predicted, and at least one of the spatial neighborhood macroblocks is field predicted, then the candidate motion vector predictors can be generated by using the same field of the candidate blocks. That is, for the first field of the current macroblock, the first field motion vector(s) of the surrounding field predicted macroblock(s) are used. Similarly, for the second field of the current macroblock, the second field motion vector(s) of the surrounding field predicted macroblock(s) are used.
Specifically, the first field horizontal predictor is Pxf1 = median(MV1x, MV2xf1, MV3x), the first field vertical predictor is Pyf1 = median(MV1y, MV2yf1, MV3y), the second field horizontal predictor is Pxf2 = median(MV1x, MV2xf2, MV3x), and the second field vertical predictor is Pyf2 = median(MV1y, MV2yf2, MV3y). The motion vector difference values are MVDxf1 = MVxf1 - Pxf1, MVDyf1 = MVyf1 - Pyf1, MVDxf2 = MVxf2 - Pxf2, and MVDyf2 = MVyf2 - Pyf2.
Alternatively, the first and second field motion vectors of macroblock 820 (and any other field mode candidate macroblock) may be averaged to obtain averaged horizontal and vertical motion vector components. The processing then proceeds as discussed in connection with FIGURE 7.
Specifically, an averaged horizontal motion vector component for macroblock 820 is MV2x = (MV2xf1 + MV2xf2)/2, while an averaged vertical motion vector component for macroblock 820 is MV2y = (MV2yf1 + MV2yf2)/2. Predictor horizontal and vertical motion vector components, respectively, are Px = median(MV1x, MV2x, MV3x) and Py = median(MV1y, MV2y, MV3y). The motion vector difference values for the current macroblock 700 are MVDxf1 = MVxf1 - Px, MVDyf1 = MVyf1 - Py, MVDxf2 = MVxf2 - Px, and MVDyf2 = MVyf2 - Py.
When two or more of the candidate macroblocks are field predicted, processing may proceed as above for each field predicted macroblock.
For coding efficiency, to ensure that the vertical component of a field motion vector is an integer, the vertical difference motion vector component is encoded in the bitstream as MVDyf1 = (MVyf1 - int(Py))/2, where int(Py) means truncate Py in the direction of zero to the nearest integer.
This assures that all fractional pixel offsets are mapped to a half-pixel displacement. For example, if MVyf1 = 4 and Py = 3.5, then MVDyf1 = (MVyf1 - int(Py))/2 = (4 - int(3.5))/2 = (4 - 3)/2 = 1/2 = 0.5. Otherwise, without the "int" function, MVDyf1 = (MVyf1 - Py)/2 = (4 - 3.5)/2 = 0.5/2 = 0.25, which cannot be coded as efficiently. The factor of 1/2 is used to reduce the magnitude of MVDyf1, thus making it more efficiently coded by the motion vector VLC.
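The encode/decode pair for this vertical field component can be sketched as follows; the function names are illustrative:

```python
import math

def encode_field_mvd_y(mv_y, py):
    """Encode the vertical field MV difference as (MVy - int(Py))/2,
    truncating the predictor toward zero so that any fractional
    predictor offset becomes an efficiently codable half-pel value."""
    return (mv_y - math.trunc(py)) / 2

def decode_field_mvd_y(mvd_y, py):
    """Inverse operation: MVy = 2*MVDy + int(Py)."""
    return 2 * mvd_y + math.trunc(py)
```

`math.trunc` matches the "truncate in the direction of zero" rule for both positive and negative predictors.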
FIGURE 9 illustrates a current advanced prediction mode block with neighboring frame mode blocks and a field mode macroblock having associated candidate motion vector predictors. A P-VOP may be used. Like-numbered elements correspond to the elements in FIGURES 7 and 8. Vertical and horizontal motion vector components are not shown separately in FIGURE 9 for simplicity. The current block 912 in a current macroblock 900 is shown as an advanced prediction 8x8 block. Alternatively, the current block 912 may be a frame mode (progressive) macroblock which is the same as macroblock 900. Recall that with advanced prediction mode, each of the four 8x8 blocks in a macroblock is ME/MC coded separately.
Generally, when the current block is coded as a progressive macroblock or an advanced prediction (8x8) block, and at least one of the coded spatial neighborhood macroblocks is field predicted, the candidate motion vector components can be generated by averaging the first and second field motion vector components, by using both of the first and second field motion vector components as candidates, or by using one of the first and second field motion vector components, but not both. Specifically, in a first option, the motion vector components of the interlaced candidate macroblock(s) are averaged within the block. For example, for macroblock 820, averaged motion vector components are MV2x = (MV2xf1 + MV2xf2)/2 and MV2y = (MV2yf1 + MV2yf2)/2.
Predictor horizontal and vertical motion vector components, respectively, are Px = median(MV1x, MV2x, MV3x) and Py = median(MV1y, MV2y, MV3y), and the motion vector difference values for the current block 912 are MVDx = MVx - Px and MVDy = MVy - Py. MVx and MVy are the horizontal and vertical motion vector components, respectively, of the current block 912.
In a second option, both of the field motion vectors of the macroblock 820 are candidate predictors. For example, for macroblock 820, the predictor horizontal and vertical motion vector components, Px and Py, respectively, can be determined from Px = median(MV1x, MV2xf1, MV2xf2, MV3x) and Py = median(MV1y, MV2yf1, MV2yf2, MV3y), and the motion vector difference values for the current block 912 are MVDx = MVx - Px and MVDy = MVy - Py.
In a third option, the first field motion vectors of the macroblock 820 are candidate predictors. For example, for macroblock 820, the predictor horizontal and vertical motion vector components, respectively, are Px = median(MV1x, MV2xf1, MV3x) and Py = median(MV1y, MV2yf1, MV3y). The motion vector difference values for the current block 912 are MVDx = MVx - Px and MVDy = MVy - Py.
In a fourth option, the second field motion vectors of the macroblock 820 are candidate predictors. For example, for macroblock 820, the predictor horizontal and vertical motion vector components, respectively, are Px = median(MV1x, MV2xf2, MV3x) and Py = median(MV1y, MV2yf2, MV3y). The motion vector difference values for the current block 912 are MVDx = MVx - Px and MVDy = MVy - Py.
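The four options can be sketched as one helper that expands each neighbor into candidate values for a single MV component. The option names and the number-or-tuple convention are illustrative, not from the standard:

```python
import statistics

def candidate_values(neighbor, option='average'):
    """Expand one neighbour's MV component into candidate predictor
    values.  `neighbor` is a plain number for a frame-mode block, or a
    (field1, field2) tuple for a field-coded macroblock such as 820."""
    if not isinstance(neighbor, tuple):
        return [neighbor]                  # frame-mode candidate
    f1, f2 = neighbor
    return {'average': [(f1 + f2) / 2],    # first option
            'both': [f1, f2],              # second option
            'first_field': [f1],           # third option
            'second_field': [f2],          # fourth option
            }[option]

def predictor(neighbors, option='average'):
    """Px (or Py) for the current progressive/8x8 block: the median of
    all candidate values.  statistics.median averages the two middle
    elements of an even-length list, matching the definition above."""
    values = []
    for nb in neighbors:
        values.extend(candidate_values(nb, option))
    return statistics.median(values)
```

Note that the 'both' option can yield an even-length candidate list, in which case the median is the mean of the two middle values.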
As discussed previously, when averaging pixel data from first and second fields, all fractional pixel offsets are mapped to a half-pixel displacement for coding efficiency.
FIGURE 10 illustrates macroblock-based VOP padding for motion prediction.
At an encoder, padding is used to increase the area of a reference image for motion estimation prior to motion compensation. The technique is particularly suited for use with arbitrarily shaped video object planes (VOPs). A macroblock can similarly be padded at a decoder as soon as the macroblock is reconstructed. For example, a star-shaped VOP 1010 in a frame or other image region 1000 may be padded to fill blocks which are at the boundary of the VOP as well as other neighboring macroblocks.
The blocks designated by light shading (such as block 1020) are all boundary blocks, and are processed with normal padding. Blocks designated with darker shading (such as block 1030) are adjacent to the boundary blocks and are processed with extended padding. Further extended blocks (not shown) may also be padded. The amount of padding required is related to the parameter f_code discussed above in connection with Table 1. The blocks may be 16x16 luminance blocks or 8x8 chrominance blocks.
Padding fills the areas outside the VOP by repeating the boundary pixels of the VOP. If a pixel outside the VOP can be padded by the repetition of more than one boundary pixel, the average of those particular boundary pixels is used.
Normal padding refers to padding within a boundary block. First, each horizontal line of the blocks in frame 1000 is scanned to provide continuous line segments which are either interior to the VOP (i.e., including boundary pixels of the VOP) or exterior to the VOP. If the entire line in a block is interior to the VOP, no padding is performed.
If there are both interior and exterior segments in a block, and the exterior segment is positioned between an interior segment and the edge of the block, the pixels in the exterior segment are set to the interior segment pixel value at the boundary of the VOP which is closest to that particular exterior segment. For example, for the left to right sequence E1-E5 and I6-I16 in a 16 pixel scan line, where E denotes an exterior pixel and I denotes an interior pixel, E1-E5 are set to I6. For the sequence E1-E5, I6-I10 and E11-E16, E1-E5 are set to I6, and E11-E16 are set to I10. If an exterior segment is between two interior segments, the exterior segment is filled with the average of the two boundary pixels of the interior segments. For example, for the sequence I1-I5, E6-E10 and I11-I16, E6-E10 are set to (I5+I11)/2.
The above process is repeated for each horizontal and vertical scan line in each block. If a pixel can be padded by both horizontal and vertical boundary pixels, the average is used.
It is possible for an exterior line segment to extend horizontally across a block without encountering an interior line segment. In this case, for each pixel in the line segment, scan horizontally in both directions to find the closest padded exterior pixel. If there is a tie (i.e., padded pixels to the right and left are equidistant from a current pixel), use the pixel to the left of the current pixel.
Similarly, it is possible for an exterior line segment to extend vertically across a block without encountering an interior line segment. In this case, for each pixel in the line segment, scan vertically in both directions to find the closest padded exterior pixel. If there is a tie (i.e., padded pixels above and below are equidistant from a current pixel), use the pixel above the current pixel. The exterior pixel is then replaced by the average of the pixels found in the horizontal and vertical bi-directional scans.
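The per-line part of normal padding can be sketched as follows. Integer averaging with truncation is assumed here, and the function name is illustrative:

```python
def pad_scan_line(pixels, interior):
    """Normal padding of one scan line of a boundary block.
    `pixels` are the values; `interior[i]` is True for VOP pixels.
    An exterior run next to the block edge copies the nearest boundary
    pixel; a run between two interior segments takes the average of the
    two enclosing boundary pixels.  Lines that are entirely exterior
    are left alone for the later bi-directional scan step."""
    n = len(pixels)
    out = list(pixels)
    if not any(interior) or all(interior):
        return out
    i = 0
    while i < n:
        if interior[i]:
            i += 1
            continue
        j = i
        while j < n and not interior[j]:
            j += 1                             # [i, j) is an exterior run
        left = out[i - 1] if i > 0 else None   # boundary pixel before run
        right = out[j] if j < n else None      # boundary pixel after run
        if left is None:
            fill = right
        elif right is None:
            fill = left
        else:
            fill = (left + right) // 2         # average of two boundaries
        out[i:j] = [fill] * (j - i)
        i = j
    return out
```

The same routine can be applied to vertical scan lines, with horizontally and vertically padded values averaged where both apply.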
However, when the VOP is interlaced coded, a modified padding technique as described below may be used.
FIGURE 11 illustrates repetitive padding within a macroblock for motion prediction. A block such as a 16x16 luminance macroblock 1100 is shown. Each pixel location is designated by an (x,y) coordinate. The coordinate system used in FIGURES 11 and 12 does not necessarily correspond to (i,j) coordinates or variables used elsewhere in the description. For example, the upper left hand pixel is designated (0,0). Each column of pixels in the macroblock 1100 is numbered from 0 to 15, while each row is also numbered from 0 to 15. Pixel locations which are shaded are part of a VOP. For example, pixels (0,6-15), (4,10-15), (5,11-15), (6,12-15), (7,13-15), (8,14 and 15), (9,15), (12,15), (13,14 and 15), (14,13-15) and (15,12-15) are part of a VOP. Unshaded (i.e., exterior) pixels are not part of the VOP. Using the padding technique discussed in connection with FIGURE 10, the exterior pixels can be padded with boundary (i.e., interior) pixel values of the VOP. For example, exterior pixels (0,0-5) are set to VOP boundary pixel (0,6). Exterior pixel (10,15) is set to the average of boundary pixels (9,15) and (12,15). Exterior pixel (9,14) is set to the average of boundary pixels (8,14) and (13,14), and so forth.
FIGURE 12 illustrates repetitive padding within a macroblock for motion prediction after grouping same-field pixel lines. When VOPs are interlaced coded, a modified padding technique is needed for luminance pixel values of a reference VOP used for ME/MC. In accordance with the present invention, the luminance pixel values are split into top and bottom fields, and padding is performed in each field separately.
For example, as shown in block 1200, the even and odd rows of pixel data from frame 1100 of FIGURE 11 are split into a top field block 1210 (i.e., VOP_top) and a bottom field block 1220 (i.e., VOP_bottom), respectively, each of which is 16x8. Column numbers extend from 0-15, while row numbers extend from 0, 2, ..., 14, and from 1, 3, ..., 15. Each pixel value in the block 1200 can therefore be described by an (x,y) coordinate.
Top field block 1210 includes rows 0, 2, 4, 6, 8, 10, 12 and 14, while bottom field block 1220 includes rows 1, 3, 5, 7, 9, 11, 13 and 15. Next, each line in the respective blocks is scanned horizontally to provide exterior and interior line segments as discussed previously. For example, (0-8,14) and (13-15,14) are interior line segments, while (9-12,14) is an exterior line segment.
Repetitive padding is then applied separately for each field. For example, in the top field block 1210, exterior pixel (9,14) is set to the average of the values of boundary pixels (8,14) and (13,14), and the remaining exterior pixels of that segment are padded in the same manner, and so forth. In the bottom field block 1220, exterior pixels are likewise set to the value of the nearest interior pixel, or to the average of the two nearest boundary pixels. Lastly, after padding, the two field blocks 1210 and 1220 are combined to form a single luminance padded reference VOP. That is, the lines are reordered in the interleaved order shown in FIGURE 11.
FIGURE 13 is a block diagram of a decoder. The decoder, shown generally at 1300, can be used to receive and decode the encoded data signals transmitted from the encoder of FIGURE 2. The encoded video image data and differentially encoded motion vector data are received at terminal 1340 and provided to a demultiplexer (DEMUX) 1342. The encoded video image data is typically differentially encoded as DCT transform coefficients of a prediction error signal (i.e., residue).
A shape decoding function 1344 processes the data when the VOP has an arbitrary shape to recover shape information, which is, in turn, provided to a motion compensation function 1350 and a VOP reconstruction function 1352. A texture decoding function 1346 performs an inverse DCT on transform coefficients to recover residue information. For INTRA coded macroblocks, pixel information is recovered directly and provided to the VOP reconstruction function 1352. For INTER coded blocks and macroblocks, the pixel information provided from the texture decoding function 1346 to the VOP reconstruction function 1352 represents a residue between the current macroblock and a reference macroblock.
For INTER coded blocks and macroblocks, a motion decoding function 1348 processes the encoded motion vector data to recover the differential motion vectors and provide them to the motion compensation function 1350 and to a motion vector memory 1349, such as a RAM. The motion compensation function 1350 receives the differential motion vector data and determines a reference motion vector (i.e., a motion vector predictor). The reference motion vector is obtained from one or more of the macroblocks which are in a spatial neighborhood of the current macroblock.
For example, when the encoder provides a reference motion vector which is the median of three neighboring macroblocks, the motion compensation function 1350 must re-calculate the median motion vector components (i.e., horizontal and vertical), and sum the median components with the differential motion vector components of the current macroblock to obtain the full motion vector for the current macroblock. The motion compensation function may also need to have circuitry for averaging motion
Thus, the motion vector memory 1349 is required to store the full motion vectors of the neighboring macroblocks once these full motion vectors are determined. For example, using the scheme disclosed in FIGURES 7-9, the motion vectors of the macroblock which immediately precedes the current macroblock in the current row must be stored, along with preceding row macroblocks which are directly above, and above and to the right, of the current macroblock. For row by row processing of a video frame or VOP, the memory 1349 may need to store up to one row of motion vector data. This can be seen by noting that S 15 when the full motion vector for the current macroblock is determined, this value must be stored for use in determining a subsequent reference motion vector since the current macroblock will be a neighboring macroblock in a preceding row when the *see next row is processed. Furthermore, when the neighboring macroblocks are field coded, motion vector components for both top and bottom fields must be stored.
Once the motion compensation function 1350 determines a full reference motion vector and sums it with the differential motion vector of the current macroblock, the full motion vector of the current macroblock is available. Accordingly, the motion compensation function 1350 can now retrieve anchor frame best match data from a VOP memory 1354, such as a RAM, and provide the anchor frame pixel data to the VOP reconstruction function to reconstruct the current macroblock. The image quality of the reconstructed macroblock is improved by using the full motion vectors of neighboring macroblocks to determine the reference motion vector.
Padding may also be performed by the motion compensation function 1350. The retrieved best match data is added back to the pixel residue at the VOP reconstruction function 1352 to obtain the decoded current macroblock or block. The reconstructed block is output as a video output signal and also provided to the VOP memory 1354 to provide new anchor frame data. Note that an appropriate video data buffering capability may be required depending on the frame transmission and presentation orders since the anchor frame for P-VOPs is the temporally future frame in presentation order.
Generally, the decoder will perform the same steps as the encoder to determine the median or other value which was used as the motion vector predictor for the current VOP or block. Alternatively, when the motion vector predictor is the same as the motion vector of one of the candidate macroblocks, it is possible to transmit a code word to the decoder which designates the particular macroblock. For example, a code 00 may mean the motion vector of the previous block in the same row was used, a code 01 may mean the motion vector of the block above the current block in the previous row was used, and a code 10 may mean the motion vector of the next block in the previous row was used. In this case, the decoder can directly use the motion vector predictor of the designated macroblock and need not access the motion vectors of each of the candidate macroblocks to recalculate the motion vector predictor. A code of 11 may mean that the motion vector predictor is different from the motion vectors of all of the candidate macroblocks, so the decoder must recalculate the motion vector predictor.
Those skilled in the art will appreciate that the necessary operations can be implemented in software, firmware or hardware. This processing can be implemented with a relatively low cost and low complexity.
FIGURE 14 illustrates a macroblock layer structure. The structure is suitable for P-VOPs, and indicates the format of data received by the decoder. A first layer 1410 includes fields first_shape_code, MVD_sh, CR, ST and BAC. A second layer 1430 includes fields COD and MCBPC. A third layer 1450 includes fields AC_pred_flag, CBPY, DQUANT, Interlaced_information, MVD, MVD2, MVD3 and MVD4. A fourth layer includes fields CODA, AlphaACpred_flag, CBPA, Alpha Block Data and Block Data. Each of the above fields is defined according to the MPEG-4 standard.
The field Interlaced_information in the third layer 1450 indicates whether a macroblock is interlaced coded, and provides field motion vector reference data which informs the decoder of the coding mode of the current macroblock or block. The decoder uses this information in calculating the motion vector for a current macroblock. For example, if the current macroblock is not interlaced coded but at least one of the reference macroblocks is, then the decoder will average the motion vector components of each of the interlaced coded reference macroblocks for use in determining a reference motion vector for the current macroblock.
Alternatively, the motion vector from the top or bottom field in the reference macroblock is used, but not both. If the current macroblock is interlaced coded, then the decoder will know to calculate a reference motion vector separately for each field. The coding mode may also designate which of the candidate macroblocks, if any, has a motion vector which is the same as the reference motion vector used in differentially encoding the motion vector of the current macroblock.
The Interlaced_information field may be stored for subsequent use as required in the motion vector memory 1349 or other memory in the decoder.
The Interlaced_information field may also include a flag dct_type which indicates whether top and bottom field pixel lines in a field coded macroblock are reordered from the interleaved order, for padding.
It will be appreciated that the arrangement shown in FIGURE 14 is an example only and that various other arrangements for communicating the relevant information to the decoder will become apparent to those skilled in the art.
A bitstream syntax for use in accordance with the present invention is now described. MPEG-4 provides a video syntax with a class hierarchy, including a Video Session (VS), Video Object (VO), Video Object Layer (VOL) or Texture Object Layer (TOL), Group of Video Object Planes, and, at the bottom, a Video Object Plane (VOP). The Video Object Plane Layer syntax, set forth below, can indicate whether a current macroblock is interlaced coded in accordance with the present invention. The syntax which is not shaded, namely interlaced, if (interlaced) and top_field_first, is part of the present invention. Here, the term interlaced=1 if the current macroblock is interlaced coded. The term top_field_first indicates that the top field of the current macroblock is to be processed first. Other terms are defined in the aforementioned "MPEG-4 Video Verification Model Version". Only a portion of the conventional syntax is shown for compactness, with the omitted portions being designated by three vertically arranged dots.
Syntax                      No. of bits
interlaced                  1
if (interlaced)
    top_field_first         1

A more detailed macroblock layer syntax when the Interlaced_information field=1 is shown below. field_prediction=1 if the current macroblock is interlaced predicted. The referenced field flags have a value of zero for the top field and a value of one for the bottom field. For P-VOPs, when field_prediction=1, two motion vector differences follow the syntax, with the top field motion vector followed by the bottom field motion vector.
The syntax also accounts for B-VOPs. In particular, for B-VOPs, when field_prediction=1, two or four motion vector differences are encoded. The order of motion vector differences for an interpolated macroblock is top field forward, bottom field forward, top field backward, and bottom field backward. For unidirectional interlaced prediction (forward or backward only), the top field motion vector difference is followed by the bottom field motion vector difference.
Syntax                                        No. of bits    Format
if (interlaced) {
    if ((mbtype == INTRA) || (mbtype == INTRAQ) || (cbp != 0))
        dct_type                              1              uimsbf
    if ((P_VOP && ((mbtype == INTER) || (mbtype == INTERQ))) ||
        (B_VOP && (mbtype != Direct mode)))
        field_prediction                      1              uimsbf
    if (field_prediction) {
        if (P_VOP || (B_VOP && (mbtype != Backward))) {
            forward_top_field_reference       1              uimsbf
            forward_bottom_field_reference    1              uimsbf
        }
        if (B_VOP && (mbtype != Forward)) {
            backward_top_field_reference      1              uimsbf
            backward_bottom_field_reference   1              uimsbf
        }
    }
}

Accordingly, a method and apparatus are provided for coding of digital video images such as video object planes (VOPs), and, in particular, motion estimation and compensation techniques for interlaced digital video. A technique for providing predictor motion vectors for use in differentially encoding a current field predicted macroblock uses the median of motion vectors of surrounding blocks or macroblocks. When a surrounding macroblock is itself interlaced coded, an average motion vector for that macroblock can be used, with fractional pixel values being mapped to the half-pixel. When the current block is not interlaced coded but a surrounding block is, the field motion vectors of the surrounding block may be used individually or averaged.
A decoder as described uses a bitstream syntax to determine the coding mode of a macroblock, such as whether a macroblock is field coded. The decoder may store the coding mode of neighboring macroblocks for later use.
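The decoder-side reading of the interlaced macroblock-layer fields can be sketched as follows, following the syntax above. The function name, the string-valued VOP-type and mode arguments, and the simplified control flow are illustrative assumptions rather than a normative parser:

```python
def parse_interlaced_info(read_bit, vop_type, mbtype, cbp, interlaced=True):
    """Sketch of parsing the macroblock-layer interlaced information.
    read_bit() returns the next bit of the stream; vop_type is "P" or
    "B"; mbtype is a mode string such as "INTER" or "Direct"."""
    info = {}
    if not interlaced:
        return info
    # dct_type is present for intra macroblocks or when coded blocks exist
    if mbtype in ("INTRA", "INTRAQ") or cbp != 0:
        info["dct_type"] = read_bit()
    # field_prediction for inter macroblocks of P-VOPs, non-direct B-VOPs
    if (vop_type == "P" and mbtype in ("INTER", "INTERQ")) or \
       (vop_type == "B" and mbtype != "Direct"):
        info["field_prediction"] = read_bit()
    if info.get("field_prediction"):
        # referenced field flags: 0 = top field, 1 = bottom field
        if vop_type == "P" or (vop_type == "B" and mbtype != "Backward"):
            info["forward_top_field_reference"] = read_bit()
            info["forward_bottom_field_reference"] = read_bit()
        if vop_type == "B" and mbtype != "Forward":
            info["backward_top_field_reference"] = read_bit()
            info["backward_bottom_field_reference"] = read_bit()
    return info
```

The returned dictionary can then be stored, per macroblock, in the motion vector memory for later use when neighboring macroblocks are decoded.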
In a repetitive padding technique for an interlaced coded VOP, the top and bottom lines of the VOP and surrounding block are grouped. Within each field, exterior pixels are padded by setting them to the value of the nearest boundary pixel, or to an average of two boundary pixels. The lines are then reordered to provide a single reference VOP image.
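The vertical portion of this field-based padding can be sketched as follows, assuming small integer pixel arrays. The function names, the integer averaging rule, and the restriction to a single vertical pass (the full technique also pads horizontally and handles blocks with no VOP pixels in a further step) are illustrative:

```python
def pad_field_vop(rows, mask):
    """Vertical pass of field-based repetitive padding (a sketch).
    rows holds pixel rows in interleaved (frame) order; mask marks
    VOP pixels with True.  Rows are reordered into top and bottom
    field blocks, each field block is padded independently column by
    column, and the padded lines are reinterleaved."""
    def pad_line(px, msk):
        out = list(px)
        n = len(px)
        for i in range(n):
            if msk[i]:
                continue                                   # VOP pixel: keep
            left = next((px[j] for j in range(i - 1, -1, -1) if msk[j]), None)
            right = next((px[j] for j in range(i + 1, n) if msk[j]), None)
            if left is not None and right is not None:
                out[i] = (left + right) // 2               # between two boundary pixels
            elif left is not None:
                out[i] = left                              # replicate nearest boundary pixel
            elif right is not None:
                out[i] = right
            # no VOP pixel in this line: left for a later padding step
        return out

    def pad_field(frows, fmask):
        cols = [pad_line(c, m) for c, m in zip(zip(*frows), zip(*fmask))]
        return [list(r) for r in zip(*cols)]               # back to row order

    top = pad_field(rows[0::2], mask[0::2])
    bot = pad_field(rows[1::2], mask[1::2])
    out = []
    for t, b in zip(top, bot):                             # reinterleave to frame order
        out.extend([t, b])
    return out
```

Padding each field separately in this way keeps same-parity lines together, so the padded values are not contaminated by the opposite field's motion.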

Claims (24)

1. A method for padding a digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said method comprising the steps of: reordering said top and bottom field pixel lines from said interleaved order to provide a top field block comprising said top field pixel lines, and a bottom field block comprising said bottom field pixel lines; and padding said exterior pixels separately within said respective top and bottom field blocks.
2. The method of claim 1, comprising the further step of: reordering said top and bottom field pixel lines comprising said padded exterior pixels back to said interleaved order to provide said reference padded VOP.
3. The method of claim 1, wherein: when a particular one of said exterior pixels is located between two of said boundary pixels of said VOP in the corresponding field block, said padding step comprises the further step of: assigning said particular one of said exterior pixels a value according to an average of said two boundary pixels.
4. The method of claim 1, wherein: when a particular one of said exterior pixels is located between one of said boundary pixels of said VOP and an edge of said region in the corresponding field block, but not between two of said boundary pixels of said VOP in the corresponding field block, said padding step comprises the further step of: assigning said particular one of said exterior pixels a value according to said one of said boundary pixels.
5. The method of claim 1, wherein: when a particular one of said exterior pixels is located between two edges of said region in the corresponding field block, but not between one of said boundary pixels of said VOP and an edge of said region in the corresponding field block, and not between two of said boundary pixels of said VOP in the corresponding field block, said padding step comprises the further step of: assigning said particular one of said exterior pixels a value according to at least one of: a padded exterior pixel which is closest to said particular one of said exterior pixels moving horizontally in said region in the corresponding field block; and a padded exterior pixel which is closest to said particular one of said exterior pixels moving vertically in said region in the corresponding field block.
6. The method of claim 1, comprising the further step of: using said reference padded VOP for motion prediction of another VOP.
7. A method for padding a digital video image substantially as herein described with reference to Figs. 11, 12 and 14.
8. An apparatus for padding a digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said apparatus comprising: means for reordering said top and bottom field pixel lines from said interleaved order to provide a top field block comprising said top field pixel lines, and a bottom field block comprising said bottom field pixel lines; and means for padding said exterior pixels separately within said respective top and bottom field blocks.
9. The apparatus of claim 8, further comprising: means for reordering said top and bottom field pixel lines comprising said padded exterior pixels back to said interleaved order to provide said reference padded VOP.

10. The apparatus of claim 8, wherein: said means for padding comprises means for assigning; and when a particular one of said exterior pixels is located between two of said boundary pixels of said VOP in the corresponding field block, said means for assigning assigns said particular one of said exterior pixels a value according to an average of said two boundary pixels.
11. The apparatus of claim 8, wherein: said means for padding comprises means for assigning; and when a particular one of said exterior pixels is located between one of said boundary pixels of said VOP and an edge of said region in the corresponding field block, but not between two of said boundary pixels of said VOP in the corresponding field block, said means for assigning assigns said particular one of said exterior pixels a value according to said one of said boundary pixels.
12. The apparatus of claim 8, wherein: said means for padding comprises means for assigning; and when a particular one of said exterior pixels is located between two edges of said region in the corresponding field block, but not between one of said boundary pixels of said VOP and an edge of said region in the corresponding field block, and not between two of said boundary pixels of said VOP in the corresponding field block, said means for assigning assigns said particular one of said exterior pixels a value according to at least one of: a padded exterior pixel which is closest to said particular one of said exterior pixels moving horizontally in said region in the corresponding field block; and a padded exterior pixel which is closest to said particular one of said exterior pixels moving vertically in said region in the corresponding field block.
13. The apparatus of claim 8, further comprising: means for using said reference padded VOP for motion prediction of another VOP.
14. An apparatus for padding a digital video image substantially as herein described with reference to Figs. 11, 12 and 14.

15. A decoder for recovering a padded digital video image which includes a field coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said decoder comprising: a detector for detecting padding in the exterior pixels separately within respective top and bottom field blocks; said top and bottom field blocks being representative of said top and bottom field pixel lines reordered from said interleaved order; said top field block comprising reordered data from said top field pixel lines; and said bottom field block comprising reordered data from said bottom field pixel lines.
16. A decoder in accordance with claim 15, wherein means are provided for using said reference padded VOP for motion prediction of another VOP.
17. A signal carrying a padded digital video image which includes a field coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said signal including: a top field block comprising top field pixel lines reordered from said interleaved order; a bottom field block comprising bottom field pixel lines reordered from said interleaved order; and
18. A communication signal for use in a system in which horizontal and vertical motion vector components are used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, wherein: candidate first, second and third blocks have associated horizontal and vertical motion vector components; said first block being at least a portion of a first macroblock which immediately 1o precedes said current block in a current row; said second block being at least a portion of a second macroblock which is immediately above said current block in a preceding row; S:said third block being at least a portion of a third macroblock which immediately S•follows said second macroblock in said preceding row; and at least one of said first, second and third candidate blocks and said current block is field-coded; said communications signal including at least one of: a selected horizontal motion vector component used to differentially encode the horizontal motion vector component of said current block according to a value derived 20 from the horizontal motion vector components of said first, second and third candidate blocks; and S.o a selected vertical motion vector component used to differentially encode the vertical motion vector component of said current block according to a value derived from the vertical motion vector components of said first, second and third candidate blocks.
19. A signal in accordance with claim 18 further including data indicating whether said current block is field coded.

20. A signal in accordance with claim 19, wherein said data is provided in at least one of the selected horizontal motion vector component and selected vertical motion vector component.
21. A signal in accordance with claim 18, including both a selected horizontal motion vector component and a selected vertical motion vector component.
22. A communications channel carrying a padded digital video image signal which includes a field coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said signal including: a top field block comprising top field pixel lines reordered from said interleaved order; a bottom field block comprising bottom field pixel lines reordered from said interleaved order; and separately padded exterior pixels within said respective top and bottom field blocks.
23. A communications channel carrying a signal for use in a system in which horizontal and vertical motion vector components are used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, wherein: candidate first, second and third blocks have associated horizontal and vertical motion vector components; said first block being at least a portion of a first macroblock which immediately precedes said current block in a current row; said second block being at least a portion of a second macroblock which is immediately above said current block in a preceding row; said third block being at least a portion of a third macroblock which immediately follows said second macroblock in said preceding row; and at least one of said first, second and third candidate blocks and said current block is field-coded; said communications signal including at least one of: a selected horizontal motion vector component used to differentially encode the horizontal motion vector component of said current block according to a value derived from the horizontal motion vector components of said first, second and third candidate blocks; and a selected vertical motion vector component used to differentially encode the vertical motion vector component of said current block according to a value derived from the vertical motion vector components of said first, second and third candidate blocks.
24. A decoder for recovering a padded digital video image which includes a field-coded video object plane (VOP) comprising top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said decoder being substantially as described herein with reference to the accompanying drawings.

25. A signal carrying a padded digital video image which includes a field-coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said signal being substantially as described herein with reference to the accompanying drawings.
26. A communication signal for use in a system in which horizontal and vertical motion vector components are used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, said communication signal being substantially as described herein with reference to the accompanying drawings.
27. A communications channel carrying a padded digital video image signal which includes a field-coded video object plane (VOP) having top and bottom field pixel lines carried in an interleaved order to provide a reference padded VOP, said VOP being carried, at least in part, in a region which includes pixels which are exterior to boundary pixels of said VOP, said communications channel being substantially as described herein with reference to the accompanying drawings.
28. A communications channel carrying a signal for use in a system in which horizontal and vertical motion vector components are used to differentially encode respective horizontal and vertical motion vector components of a current block of a digital video image, said communications channel being substantially as described herein with reference to the accompanying drawings.

DATED this twenty-fourth Day of December, 2002

General Instrument Corporation
Patent Attorneys for the Applicant
SPRUSON & FERGUSON
AU18393/01A 1997-03-07 2001-02-09 Padding of video object planes for interlaced digital video Ceased AU758254B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU18393/01A AU758254B2 (en) 1997-03-07 2001-02-09 Padding of video object planes for interlaced digital video

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US60/040120 1997-03-07
US60/042245 1997-03-31
US08/897847 1997-07-21
AU57401/98A AU728756B2 (en) 1997-03-07 1998-03-06 Motion estimation and compensation of video object planes for interlaced digital video
AU18393/01A AU758254B2 (en) 1997-03-07 2001-02-09 Padding of video object planes for interlaced digital video

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU57401/98A Division AU728756B2 (en) 1997-03-07 1998-03-06 Motion estimation and compensation of video object planes for interlaced digital video

Publications (2)

Publication Number Publication Date
AU1839301A AU1839301A (en) 2001-05-03
AU758254B2 true AU758254B2 (en) 2003-03-20

Family

ID=3742804

Family Applications (1)

Application Number Title Priority Date Filing Date
AU18393/01A Ceased AU758254B2 (en) 1997-03-07 2001-02-09 Padding of video object planes for interlaced digital video

Country Status (1)

Country Link
AU (1) AU758254B2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023138543A1 (en) * 2022-01-19 2023-07-27 Beijing Bytedance Network Technology Co., Ltd. Method, apparatus, and medium for video processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5793895A (en) * 1996-08-28 1998-08-11 International Business Machines Corporation Intelligent error resilient video encoder
US5815646A (en) * 1993-04-13 1998-09-29 C-Cube Microsystems Decompression processor for video applications

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5815646A (en) * 1993-04-13 1998-09-29 C-Cube Microsystems Decompression processor for video applications
US5793895A (en) * 1996-08-28 1998-08-11 International Business Machines Corporation Intelligent error resilient video encoder

Also Published As

Publication number Publication date
AU1839301A (en) 2001-05-03

Similar Documents

Publication Publication Date Title
CA2702769C (en) Motion estimation and compensation of video object planes for interlaced digital video
AU724796B2 (en) Prediction and coding of bi-directionally predicted video object planes for interlaced digital video
CA2230422C (en) Intra-macroblock dc and ac coefficient prediction for interlaced digital video
US6483874B1 (en) Efficient motion estimation for an arbitrarily-shaped object
US7545863B1 (en) Bidirectionally predicted pictures or video object planes for efficient and flexible video coding
US6404814B1 (en) Transcoding method and transcoder for transcoding a predictively-coded object-based picture signal to a predictively-coded block-based picture signal
Kim et al. Zoom motion estimation using block-based fast local area scaling
JP2000023194A (en) Method and device for picture encoding, method and device for picture decoding and provision medium
JP3440830B2 (en) Image encoding apparatus and method, and recording medium
KR20010082933A (en) Method and apparatus for updating motion vector memory
USRE38564E1 (en) Motion estimation and compensation of video object planes for interlaced digital video
AU758254B2 (en) Padding of video object planes for interlaced digital video
AU728756B2 (en) Motion estimation and compensation of video object planes for interlaced digital video
Ebrahimi et al. Mpeg-4 natural video coding—part ii
KR100495100B1 (en) Motion Vector Coding / Decoding Method for Digital Image Processing System
MXPA98001809A (en) Estimation and compensation of the movement of planes of the video object for digital video entrelaz
Shum et al. Video Compression Techniques
Chen et al. Coding of an arbitrarily shaped interlaced video in MPEG-4
WO2023101990A1 (en) Motion compensation considering out-of-boundary conditions in video coding
Wei et al. Video coding for HDTV systems
Ku et al. Architecture design of motion estimation for ITU-T H. 263
Nakaya Touradj Ebrahimi Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland F. Dufaux Compaq, Cambridge, Massachusetts
MXPA98001810A (en) Prediction and coding of planes of the object of video of prediccion bidireccional for digital video entrelaz
Mandal et al. Digital video compression techniques

Legal Events

Date Code Title Description
PC1 Assignment before grant (sect. 113)

Owner name: GENERAL INSTRUMENT CORPORATION

Free format text: THE FORMER OWNER WAS: GENERAL INSTRUMENT CORPORATION

PC1 Assignment before grant (sect. 113)

Owner name: GENERAL INSTRUMENT CORPORATION

Free format text: THE FORMER OWNER WAS: GENERAL INSTRUMENT CORPORATION

FGA Letters patent sealed or granted (standard patent)