GB2328337A - Encoding motion vectors - Google Patents

Encoding motion vectors

Info

Publication number
GB2328337A
GB2328337A GB9717793A GB9717793A GB2328337A GB 2328337 A GB2328337 A GB 2328337A GB 9717793 A GB9717793 A GB 9717793A GB 9717793 A GB9717793 A GB 9717793A GB 2328337 A GB2328337 A GB 2328337A
Authority
GB
United Kingdom
Prior art keywords
motion vector
component
components
predictor
motion vectors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB9717793A
Other versions
GB9717793D0 (en)
Inventor
Sang-Hoon Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WiniaDaewoo Co Ltd
Original Assignee
Daewoo Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Daewoo Electronics Co Ltd filed Critical Daewoo Electronics Co Ltd
Publication of GB9717793D0 publication Critical patent/GB9717793D0/en
Publication of GB2328337A publication Critical patent/GB2328337A/en
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/513Processing of motion vectors
    • H04N19/517Processing of motion vectors by encoding

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

A current motion vector is encoded based on reference motion vectors by determining a first and a second predictor for the current motion vector. The first predictor 200 has a horizontal and a vertical component representing the medians of the horizontal and the vertical components of the reference motion vectors 120, respectively, while the horizontal and the vertical components of the second predictor 160,170 represent a least horizontal and a least vertical component of the reference motion vectors which yield minimum differences from the horizontal and the vertical components of the current motion vector. If the degree of dispersion 130,140 of the reference motion vectors is less than a predetermined threshold, the first predictor is set 150 as an optimum predictor of the current motion vector; otherwise, the second predictor is regarded as the optimum predictor of the current motion vector. The current motion vector is encoded based on the optimum predictor by using the differential pulse code modulation technique 210 and the variable length coding scheme.

Description

METHOD AND APPARATUS FOR ENCODING A MOTION VECTOR

The present invention relates to a method and apparatus for encoding a motion vector; and, more particularly, to a method and apparatus capable of encoding a motion vector of a search block based on a deviation from motion vectors of reference blocks thereof.
In digitally televised systems such as video-telephone, teleconference and high definition television systems, a large amount of digital data is needed to define each video frame signal since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of the data through the use of various data compression techniques, especially in the case of such low bit-rate video signal encoders as video-telephone and teleconference systems. Among various video compression techniques, the so-called hybrid coding technique, which combines temporal and spatial compression techniques together with a statistical coding technique, is known to be most effective.
Most hybrid coding techniques employ a motion compensated DPCM(differential pulse coded modulation), two-dimensional DCT(discrete cosine transform), quantization of DCT coefficients, and VLC(variable length coding). The motion compensated DPCM is a process of estimating the movement of an object between a current frame and its previous frame, and predicting the current frame according to the motion flow of the object to produce a differential signal representing the difference between the current frame and its prediction.
Specifically, in the motion compensated DPCM, current frame data is predicted from the corresponding previous frame data based on an estimation of the motion between the current and the previous frames. Such estimated motion may be described in terms of two dimensional motion vectors representing the displacement of pixels between the previous and the current frames.
There have been two basic approaches to estimate the displacement of pixels of an object: one is a block-by-block estimation and the other is a pixel-by-pixel approach.
In the pixel-by-pixel approach, a displacement is determined for each and every pixel. This technique allows a more exact estimation of the pixel value and has the ability to easily handle scale changes and nontranslational movements, e.g., scale changes and rotations, of the object. However, in the pixel-by-pixel approach, since a motion vector is determined at each and every pixel, it is virtually impossible to transmit all of the motion vector information to a receiver.
As for the block-by-block motion estimation, on the other hand, a current frame is divided into a plurality of search blocks. To determine a motion vector for a search block in the current frame, a similarity calculation is performed between the search block of the current frame and each of a plurality of equal-sized candidate blocks included in a generally large search region within a reference frame. An error function such as the mean absolute error or mean square error is used to carry out the similarity measurement between the search block of the current frame and one of the candidate blocks in the search region of the previous frame. And the motion vector, by definition, represents the displacement between the search block and a candidate block which yields a minimum error function.
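As an illustration of this block-by-block search, the sketch below (not taken from the patent; the block size, search radius and choice of the mean absolute error are assumptions) finds, for one search block, the candidate block in the previous frame that minimizes the error and returns the corresponding displacement as the motion vector.

```python
import numpy as np

def estimate_motion_vector(cur_frame, prev_frame, top, left, block=16, radius=7):
    """Full search over a (2*radius+1)^2 region; returns the (dy, dx) displacement."""
    target = cur_frame[top:top + block, left:left + block].astype(np.int32)
    height, width = prev_frame.shape
    best_mv, best_err = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > height or x + block > width:
                continue  # candidate block would fall outside the previous frame
            cand = prev_frame[y:y + block, x:x + block].astype(np.int32)
            err = np.mean(np.abs(target - cand))  # mean absolute error
            if err < best_err:
                best_err, best_mv = err, (dy, dx)
    return best_mv
```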
Referring to Fig. 1, there is shown a schematic block diagram of a conventional apparatus for encoding a motion vector of a search block based on a median value of the motion vectors of neighboring search blocks.
Motion vector information for each search block of the current frame is sequentially inputted to a memory 10, a reference motion vector selection circuit 20 and a difference encoder 40, wherein the motion vector information for a search block includes position data of the search block within the frame and a motion vector thereof, the motion vector being represented by a horizontal and a vertical components thereof.
The memory 10 stores the motion vectors by using the position data thereof as their addresses.
The reference motion vector selection circuit 20 determines reference search blocks of a current search block based on the position data thereof and retrieves motion vectors of the reference search blocks from the memory 10, the reference search blocks having a predetermined positional relationship to the current search block. For instance, as disclosed in MPEG(Moving Pictures Expert Group)-4, Video Verification Model Version 7.0, ISO/IEC JTC1/SC29/WG11, MPEG97/1642, three blocks positioned at the left, the upper and the upper-right corners of the current search block can be determined as the reference search blocks. The motion vectors of the reference search blocks are provided to a predictor determination circuit 30 as reference motion vectors for the motion vector of the current search block("current motion vector"). In response to the reference motion vectors, the predictor determination circuit 30 determines a predictor of the current motion vector and provides same to the difference encoder 40, wherein a horizontal and a vertical components of the predictor are medians of the horizontal and the vertical components of the reference motion vectors, respectively.
The difference encoder 40 finds directional differences between the current motion vector and the predictor thereof based on a DPCM(differential pulse code modulation) technique and encodes the differences by using, e.g., a variable length coding(VLC) technique. The encoded differences are then transmitted to a decoder at a receiving end as an encoded motion vector of the current search block.
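A minimal sketch of this conventional scheme of Fig. 1, assuming three reference motion vectors given as (horizontal, vertical) pairs: the predictor is the component-wise median, and only the DPCM differences would then be handed to a variable length coder (the VLC itself is omitted here).

```python
import statistics

def conventional_predictor(ref_mvs):
    """Component-wise median of the reference motion vectors, e.g. those of the
    left, upper and upper-right search blocks; each mv is an (x, y) pair."""
    return (statistics.median(mv[0] for mv in ref_mvs),
            statistics.median(mv[1] for mv in ref_mvs))

def dpcm_differences(current_mv, predictor):
    """Directional differences that would be passed on to the variable length coder."""
    return (current_mv[0] - predictor[0], current_mv[1] - predictor[1])
```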
The process of encoding a motion vector of a search block based on a predictor thereof can effectively reduce the amount of data representing the motion vector, since the difference between a motion vector and a predictor thereof is normally smaller than the motion vector itself in most cases.
In certain cases, however, for instance, if reference motion vectors are of a large variation, the conventional predictor determination scheme based on the simple filtering described above may not produce an optimum predictor of a motion vector, resulting in a degraded coding efficiency.
It is, therefore, a primary object of the invention to provide a method and apparatus capable of determining an optimum predictor of a motion vector, thereby improving the coding efficiency of the motion vector.
In accordance with one aspect of the present invention, there is provided a method for encoding a current motion vector based on a plurality of reference motion vectors, wherein each motion vector includes a first and a second components, the method comprising the steps of:
(a) finding a first predictor having a first and a second components, the first component of the first predictor representing a median of the first components of the reference motion vectors and the second component of the first predictor denoting a median of the second components of the reference motion vectors; (b) calculating first absolute differences between the first component of the current motion vector and the first components of the reference motion vectors, and second absolute differences between the second component of the current motion vector and the second components of the reference motion vectors to thereby decide, among the first and the second components of the reference motion vectors, a least first component and a least second component which yield a least first absolute difference and a least second absolute difference, respectively; (c) determining a second predictor comprised of the least first and the least second components decided in step (b); (d) generating a first and a second flag signals representing the first and the second components decided in step (b), respectively; (e) computing a dispersion value of the reference motion vectors and comparing the dispersion value with a predetermined threshold to thereby generate a first selection signal if the dispersion value is less than the threshold and a second selection signal if otherwise, the dispersion value indicating the degree of proximity among the reference motion vectors; and (f) encoding the current motion vector based on the first predictor in response to the first selection signal, and the current motion vector based on the second predictor, as well as the first and the second flag signals, in response to the second selection signal, thereby generating encoded data of the current motion vector.
In accordance with another aspect of the present invention, there is provided a method for encoding a current motion vector based on a plurality of reference motion vectors, wherein each motion vector includes a first and a second components, comprising the steps of:
(a) finding a first predictor having a first component representing a median of the first components of the reference motion vectors and a second component denoting a median of the second components of the reference motion vectors; (b) estimating a dispersion value of the reference motion vectors and comparing the dispersion value with a predetermined threshold, the dispersion value indicating the degree of proximity among the first and the second components of the reference motion vectors; (c) if the dispersion value is smaller than the threshold, encoding the difference between the first components of the current motion vector and the first predictor and the difference between the second components of the current motion vector and the first predictor to thereby generate the encoded differences as encoded data of the current motion vector; and (d) if the dispersion value is equal to or greater than the threshold, generating encoded data of the current motion vector, including the steps of:
(d1) determining a second predictor having a least first component corresponding to one of the first components of the reference motion vectors which yield a minimum difference with respect to the first component of the current motion vector and a least second component corresponding to one of the second components of the reference motion vectors which yield a least difference with respect to the second component of the current motion vector; (d2) obtaining a first and a second identification signals, the first and the second identification signals indicating the first component and the second component of the reference motion vectors to which the respective least first and second components of the second predictor correspond; and (d3) encoding the difference between the first components of the current motion vector and the second predictor and the difference between the second components of the current motion vector and the second predictor together with the first and the second identification signals, thereby generating encoded data of the current motion vector.
In accordance with still another aspect of the present invention, there is provided an apparatus for encoding a current motion vector based on a plurality of reference motion vectors, each of the motion vectors including a first and a second components, comprising:
means for estimating a degree of dispersion of the first and the second components of the reference motion vectors to thereby issue a first selection signal if the degree of dispersion is considered to be low and a second selection signal if the degree of dispersion is regarded to be high; means for determining a first predictor having a first median and a second median as a first and a second components thereof, wherein the first and the second medians represent median values of the first and the second components of the reference motion vectors, respectively; means for obtaining a second predictor having a least first and a least second components, wherein the least first component of the second predictor corresponds to one of the first components of the reference motion vectors which yields a minimum difference with respect to the first component of the current motion vector and the least second component of the second predictor corresponds to one of the second components of the reference motion vectors which yields a least difference with respect to the second component of the current motion vector; means for deciding the first and the second predictors as an optimum predictor in response to the first and the second selection signals, respectively; and means for encoding the current motion vector based on the optimum predictor.
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic block diagram of a conventional apparatus for encoding a motion vector of a search block based on a median of neighboring motion vectors; and Fig. 2 presents a block diagram of an apparatus for encoding a motion vector of a search block in accordance with the present invention.
Referring to Fig. 2, there is shown a block diagram of an apparatus 100 for encoding a motion vector of a search block in accordance with the present invention, wherein the motion vector represents a displacement between the search block of a current frame and a candidate block within a corresponding search region of a previous frame that yields a minimum error function. Motion vector information for each of the search blocks within a current frame is inputted to a memory 110, a reference motion vector selection circuit 120, a deviation calculation circuit 160 and a difference encoder 210 via a line L10, wherein the motion vector information represents position data of a current search block and a motion vector thereof, the motion vector being represented by a horizontal and a vertical components thereof.
The memory 110 stores therein the motion vector for each search block by using the position data thereof.
The reference motion vector selection circuit 120 determines reference search blocks of a current search block based on position data thereof and retrieves motion vectors of the reference search blocks from the memory 110. In a preferred embodiment of the invention, three search blocks positioned at the left, the upper and the upper-right of the current search block are selected as the reference search blocks in a same manner as in the MPEG-4 verification model 7.0 described above. In another instance of the invention, another set of reference search blocks, e.g., at the left, the upper and the upper-left of the current search block, can be selected as the reference search blocks. In any case, it is preferable to set the number of the reference search blocks to be an odd number in order to facilitate the median filtering of the motion vectors thereof.
The motion vector for each of the reference search blocks, each motion vector being comprised of the horizontal and the vertical components, is provided from the reference motion vector selection circuit 120 to a dispersion calculation circuit 130, the deviation calculation circuit 160 and a median filter 200 as the reference motion vector of the motion vector for the current search block.
The median filter 200 determines a median vector based on the reference motion vectors. A horizontal and a vertical components MV_MED_x and MV_MED_y of the median vector MV_MED are computed as:

MV_MED_x = median(MV1x, MV2x, ..., MVNx)
MV_MED_y = median(MV1y, MV2y, ..., MVNy)

wherein MVix and MViy are a horizontal and a vertical components of an ith reference motion vector, i being 1, 2, ..., N with N being a total number of reference motion vectors. For instance, if N=3 and MV1=(-2,3), MV2=(1,5) and MV3=(-1,7), then MV_MED_x=-1 and MV_MED_y=5. The computed horizontal and vertical components of the median vector are provided to the dispersion calculation circuit 130, a switch 150 and a comparator 180 via a line L20 as a first candidate predictor of the motion vector of the current search block.
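The median-filter step can be checked directly against the worked example above; the snippet below (an illustrative sketch, not the patent's circuit) reproduces MV_MED = (-1, 5) from MV1=(-2,3), MV2=(1,5), MV3=(-1,7).

```python
import statistics

ref_mvs = [(-2, 3), (1, 5), (-1, 7)]                   # MV1, MV2, MV3
mv_med = (statistics.median(mv[0] for mv in ref_mvs),  # median of -2, 1, -1 -> -1
          statistics.median(mv[1] for mv in ref_mvs))  # median of  3, 5,  7 ->  5
print(mv_med)                                          # (-1, 5): the first candidate predictor
```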
The dispersion calculation circuit 130 calculates a degree of dispersion with respect to the horizontal and the vertical components of the reference motion vectors around the median vector to provide same to a selection signal generator 140. In other words, the horizontal and the vertical dispersions DIS_x and DIS_y of the horizontal and the vertical components of the reference motion vectors are calculated as follows, respectively:
DIS_x = Σ_{i=1..N} (MVix - MV_MED_x)^2
DIS_y = Σ_{i=1..N} (MViy - MV_MED_y)^2

The selection signal generator 140 compares the sum of the horizontal and the vertical dispersions DIS_x and DIS_y with a predetermined threshold to generate a first or a second selection signal. If the sum is less than the predetermined threshold, the first selection signal is provided to the switch 150 and a switch 155 via a line L30 and, if otherwise, the second selection signal is provided to the switches 150 and 155 via the line L30.
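A sketch of the dispersion calculation and threshold test; the threshold is left as a free parameter since the text only calls it "predetermined".

```python
def selection_signal(ref_mvs, mv_med, threshold):
    """Sum of the horizontal and vertical dispersions around the median vector."""
    dis_x = sum((mv[0] - mv_med[0]) ** 2 for mv in ref_mvs)
    dis_y = sum((mv[1] - mv_med[1]) ** 2 for mv in ref_mvs)
    # first selection signal: keep the median (first candidate) predictor;
    # second selection signal: fall back to the least-difference predictor
    return "first" if dis_x + dis_y < threshold else "second"
```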
In the meantime, the deviation calculation circuit 160 calculates directional differences between the reference motion vectors and the motion vector of the current search block, respectively. The directional differences may be defined as:
DIR_DIF(i)_x = |MVix - CMVx|
DIR_DIF(i)_y = |MViy - CMVy|

wherein DIR_DIF(i)_x(y) represents the difference between the horizontal (vertical) components of an ith reference motion vector and the motion vector of the current search block("current motion vector") provided on the line L10, and CMVx and CMVy denote the horizontal and the vertical components of the current motion vector CMV. Provided from the deviation calculation circuit 160 to a smallest deviation selection circuit 170 for each directional component MVix or MViy is a set of deviation data (MVix, DIR_DIF(i)_x) or (MViy, DIR_DIF(i)_y).
In response to the sets of deviation data from the deviation calculation circuit 160, the smallest deviation selection circuit 170 determines the smallest horizontal and vertical differences among the DIR_DIF(i)_x's and DIR_DIF(i)_y's, respectively, and provides the switch 150 and the comparator 180 with a second candidate predictor of the current motion vector, wherein the second candidate predictor is comprised of a least horizontal component corresponding to the horizontal component of the reference motion vectors which generates the smallest horizontal difference and a least vertical component representing the one which yields the smallest vertical difference. For instance, if N=3 and MV1=(-2,3), MV2=(1,5), MV3=(-1,7) and the current motion vector CMV=(5,1), then the second candidate predictor MV_SEC=(1,3) is determined.
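The deviation calculation and smallest-deviation selection reduce, per component, to picking the reference component closest to the corresponding component of the current motion vector. The sketch below (illustrative, not the patent's circuit) reproduces MV_SEC = (1, 3) for the example above.

```python
def second_candidate_predictor(ref_mvs, cmv):
    """Per component, the reference component closest to the current motion vector."""
    sec_x = min((mv[0] for mv in ref_mvs), key=lambda v: abs(v - cmv[0]))
    sec_y = min((mv[1] for mv in ref_mvs), key=lambda v: abs(v - cmv[1]))
    return (sec_x, sec_y)

# With MV1=(-2,3), MV2=(1,5), MV3=(-1,7) and CMV=(5,1):
print(second_candidate_predictor([(-2, 3), (1, 5), (-1, 7)], (5, 1)))  # -> (1, 3)
```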
The comparator 180 compares the least horizontal and the least vertical components of the second candidate predictor with the horizontal and the vertical components of the first candidate predictor, respectively, and provides a horizontal and a vertical comparison results to a header encoder 190. The header encoder 190 generates a flag or an identification signal for each comparison result. For instance, if the least horizontal(or the least vertical) component of the second candidate predictor is equal to the horizontal(or the vertical) component of the first candidate predictor, a flag signal '0' is generated; if the least horizontal(or the least vertical) component of the second candidate predictor is smaller than the horizontal(or the vertical) component of the first candidate predictor, a flag signal '10' is generated; and if the least horizontal(or the least vertical) component of the second candidate predictor is greater than the horizontal(or the vertical) component of the first candidate predictor, a flag signal '11' is generated. For the instance given above, i.e., the first candidate predictor MV_MED=(-1,5) and the second candidate predictor MV_SEC=(1,3), flag signals '11' and '10' are generated for the least horizontal and the least vertical components of the second candidate predictor, respectively. The pair of flag signals for the second candidate predictor is fed to the switch 155.
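The header encoder's comparison can be summarised per component as follows (a sketch assuming the '0' / '10' / '11' codes described above).

```python
def component_flag(sec_component, med_component):
    """'0' if equal to the median component, '10' if smaller, '11' if greater."""
    if sec_component == med_component:
        return "0"
    return "10" if sec_component < med_component else "11"

def header_flags(mv_sec, mv_med):
    """One flag per component of the second candidate predictor."""
    return (component_flag(mv_sec[0], mv_med[0]),
            component_flag(mv_sec[1], mv_med[1]))
```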
The switch 150 selects as an optimum predictor either the first candidate predictor fed from the median filter 200 or the second candidate predictor fed from the smallest deviation selection circuit 170 in response to the first or the second selection signal; and provides the optimum predictor to the difference encoder 210.
The difference encoder 210 calculates, based on the conventional DPCM technique, the differences between the horizontal components and the vertical components of the current motion vector and those of the optimum predictor; and encodes the differences based on, e.g., the VLC technique. The encoded differences are provided to a multiplexor(MUX) 220 as a coded motion vector for the current search block.
The switch 155 provides the flag signals from the header encoder 190 to the MUX 220 if and only if the second selection signal is inputted thereto via the line L30. At the MUX 220, the coded motion vector from the difference encoder 210 and the flag signals, if provided from the switch 155, are multiplexed as encoded motion vector data for the current search block; and the encoded motion vector data is transmitted to a transmitter(not shown) for the transmission thereof.
At a decoder of a receiving end, the sum of the directional dispersions of the reference motion vectors is calculated in an identical manner as in the dispersion calculation circuit 130. If the sum is less than the predetermined threshold, the motion vector of the current search block is reconstructed based on the median vector of the reference motion vectors and the transmitted coded motion vector. If the sum is not less than the predetermined threshold, the motion vector of the current search block can be reconstructed based on the flag signals and the coded motion vector included in the transmitted motion vector data.
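A decoder-side sketch under the assumption of N = 3 already reconstructed reference motion vectors: the dispersion test is repeated, and when the second predictor was used, each flag identifies whether the smallest, median or largest reference component served as the predictor component (cf. claim 11). Function names and signatures here are illustrative, not the patent's.

```python
import statistics

def reconstruct_mv(ref_mvs, diff, flags, threshold):
    """diff: transmitted DPCM differences; flags: only meaningful for the second predictor."""
    mv_med = (statistics.median(mv[0] for mv in ref_mvs),
              statistics.median(mv[1] for mv in ref_mvs))
    dis = sum((mv[0] - mv_med[0]) ** 2 for mv in ref_mvs) + \
          sum((mv[1] - mv_med[1]) ** 2 for mv in ref_mvs)
    if dis < threshold:
        pred = mv_med                                    # first (median) predictor was used
    else:
        def pick(axis, flag):                            # flag identifies the chosen component
            comps = sorted(mv[axis] for mv in ref_mvs)   # N=3: smallest, median, largest
            return comps[0] if flag == "10" else comps[1] if flag == "0" else comps[2]
        pred = (pick(0, flags[0]), pick(1, flags[1]))
    return (pred[0] + diff[0], pred[1] + diff[1])
```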
While the present invention has been described with respect to the particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the scope of the invention as defined in the following claims.

Claims (21)

Claims:
1. A method for encoding a current motion vector based on a plurality of reference motion vectors, wherein each motion vector includes a first and a second components, the method comprising the steps of:
(a) finding a first predictor having a first and a second components, the first component of the first predictor representing a median of the first components of the reference motion vectors and the second component of the first predictor denoting a median of the second components of the reference motion vectors; (b) calculating first absolute differences between the first component of the current motion vector and the first components of the reference motion vectors and second absolute differences between the second component of the current motion vector and the second components of the reference motion vectors to thereby decide, among the first and the second components of the reference motion vectors, a least first component and a least second component, respectively, which yield a least first absolute difference and a least second absolute difference, respectively; (c) determining a second predictor comprised of the least first and the least second components decided in step (b); (d) generating a first and a second flag signals representing the least first and the least second components decided in step (b), respectively; (e) computing a dispersion value of the reference motion vectors and comparing the dispersion value with a predetermined threshold to thereby generate a first selection signal if the dispersion value is less than the predetermined threshold and a second selection signal if otherwise, the dispersion value indicating the degree of proximity among the reference motion vectors; and (f) encoding the current motion vector based on the first predictor in response to the first selection signal, and the current motion vector based on the second predictor, as well as the first and the second flag signals, in response to the second selection signal, thereby generating encoded data of the current motion vector.
2. The method according to claim 1, wherein the dispersion value is defined as:
DIS = Σ_{i=1..N} (MVi1 - MED1)^2 + Σ_{i=1..N} (MVi2 - MED2)^2

wherein DIS represents the dispersion value; MVi1 and MVi2 denote a first and a second components of an ith reference motion vector, respectively, i being 1 to N with N being the number of the reference motion vectors; and MED1 and MED2 are medians of the MVi1's and MVi2's, respectively.
3. The method according to claim 2, wherein N=3.
4. The method according to claim 3, wherein the first and the second flag signals indicate whether the least first and the least second components decided in step (b) are smaller than, equal to, or greater than the medians of the first and the second components of the reference motion vectors, respectively.
5. The method according to claim 1, wherein said encoding step (f) is carried out by a variable length coding technique.
6. A method for encoding a current motion vector based on a plurality of reference motion vectors, wherein each motion vector includes a first and a second components, comprising the steps of:
(a) finding a first predictor having a first component representing a median of the first components of the reference motion vectors and a second component denoting a median of the second components of the reference motion vectors; (b) estimating a dispersion value of the reference motion vectors and comparing the dispersion value with a predetermined threshold, the dispersion value indicating the degree of proximity among the first and the second components of the reference motion vectors; (c) if the dispersion value is smaller than the predetermined threshold, encoding the difference between the first component of the current motion vector and the first predictor and the difference between the second component of the current motion vector and the first predictor to thereby generate the encoded differences as encoded data of the current motion vector; and (d) if the dispersion value is equal to or greater than the threshold, generating encoded data of the current motion vector, including the steps of:
(d1) determining a second predictor having a least first component corresponding to one of the first components of the reference motion vectors which yield a minimum difference from the first component of the current motion vector and a least second component corresponding to one of the second components of the reference motion vectors which yield a least difference from the second component of the current motion vector; (d2) obtaining a first and a second identification signals, the first and the second identification signals indicating a first component and a second component of the reference motion vector to which the least first and the least second components of the second predictor correspond, respectively; and (d3) encoding the difference between the first component of the current motion vector and the least first component of the second predictor and the difference between the second component of the current motion vector and the least second component of the second predictor together with the first and the second identification signals, thereby generating encoded data of the current motion vector.
7. The method according to claim 6, wherein said determining step (d1) has the steps of:
(d11) calculating first absolute differences between the first component of the current motion vector and the first components of the reference motion vectors and second absolute differences between the second component of the current motion vector and the second components of the reference motion vectors; (d12) selecting a least first and a least second components which yield a least first absolute difference and a least second absolute difference, respectively; and (d13) deciding the second predictor comprised of the least first and the least second components selected in step (d12).
8. The method according to claim 6, wherein said encoding steps (c) and (d3) are carried out based on a variable length coding technique.
9. The method according to claim 6, wherein the dispersion value is defined as:

DIS = Σ_{i=1..N} (MVi1 - MED1)^2 + Σ_{i=1..N} (MVi2 - MED2)^2

wherein DIS represents the dispersion value; MVi1 and MVi2 denote a first and a second components of an ith reference motion vector, respectively, i being 1 to N with N being the number of the reference motion vectors; and MED1 and MED2 are medians of the MVi1's and MVi2's, respectively.
10. The method according to claim 8, wherein N=3.
11. The method according to claim 9, wherein the first and the second identification signals indicate whether the least first and the least second components of the second predictor correspond to a smallest, a median or a largest first and second components of the reference motion vectors, respectively.
12. An apparatus for encoding a current motion vector based on a plurality of reference motion vectors, each of the motion vectors including a first and a second components, comprising:
means for estimating a degree of dispersion of the first and the second components of the reference motion vectors to thereby issue a first selection signal if the degree of dispersion is considered to be low and a second selection signal if the degree of dispersion is regarded to be high; means for determining a first predictor having a first median and a second median as a first and a second components thereof, wherein the first and the second medians represent median values of the first and the second components of the reference motion vectors; means for obtaining a second predictor having a least first and a least second components, wherein the least first component of the second predictor corresponds to one of the first components of the reference motion vectors which yields a minimum difference from the first component of the current motion vector and the least second component of the second predictor corresponds to one of the second components of the reference motion vectors which yields a least difference from the second component of the current motion vector; means for deciding the first and the second predictors as an optimum predictor in response to the first and the second selection signals, respectively; and means for encoding the current motion vector based on the optimum predictor.
13. The apparatus according to claim 12, wherein said estimating means includes means for computing a dispersion value, the dispersion value being defined as:
DIS = Σ_{i=1..N} (MVi1 - MED1)^2 + Σ_{i=1..N} (MVi2 - MED2)^2

wherein DIS represents the dispersion value; MVi1 and MVi2 denote a first and a second components of an ith reference motion vector, respectively, i being 1 to N with N being the number of the reference motion vectors; and MED1 and MED2 are medians of the MVi1's and MVi2's, respectively.
14. The apparatus according to claim 13, wherein said estimating means further includes: means for comparing the dispersion value with a predetermined threshold; and means for generating the first selection signal if the dispersion value is smaller than the predetermined threshold and the second selection signal if otherwise.
15. The apparatus according to claim 14, wherein said means for obtaining the second predictor includes: means for generating a first difference between the first component of the current motion vector and each first component of the reference motion vectors and a second difference between the second component of the current motion vector and each second component of the reference motion vectors, the first and the second differences being defined as:
DIR_DIF(i)_x = |MVix - CMVx|
DIR_DIF(i)_y = |MViy - CMVy|

wherein DIR_DIF(i)_x represents a first difference between a first component of an ith reference motion vector and a first component of a current motion vector, i being 1 to N with N being the number of the reference motion vectors; DIR_DIF(i)_y, a second difference between a second component of the ith reference motion vector and a second component of a current motion vector; MVix, the first component of the ith reference motion vector; MViy, the second component of the ith reference motion vector; and CMVx and CMVy being the first and the second components of the current motion vector, respectively; means for finding the minimum difference among the first differences and the least difference among the second differences; and means for producing the second predictor having a least first component yielding the minimum difference and a least second component relating to the least difference as the first and the second components thereof.
16. The apparatus according to claim 15, wherein N=3.
17. The apparatus according to claim 16, further comprising means for generating a first and a second flag signals, the first flag signal indicating whether the first component of the second predictor is smaller than, equal to or greater than the first median and the second flag signal indicating whether the second component of the second predictor is smaller than, equal to or greater than the second median.
18. The apparatus according to claim 17, wherein said means for encoding the current motion vector includes means for coding the flag signals in response to the second selection signal.
19. The apparatus according to claim 18, wherein said encoding means further includes means for encoding the current motion vector by using a differential pulse code modulation technique and a variable length coding scheme.
20. A method for encoding a motion vector substantially as herein described with reference to or as shown in Figure 2 of the accompanying drawings.
21. Apparatus constructed and arranged substantially as herein described with reference to or as shown in Figure 2 of the accompanying drawings.
GB9717793A 1997-08-12 1997-08-21 Encoding motion vectors Withdrawn GB2328337A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1019970038277A KR100252342B1 (en) 1997-08-12 1997-08-12 Motion vector coding method and apparatus

Publications (2)

Publication Number Publication Date
GB9717793D0 GB9717793D0 (en) 1997-10-29
GB2328337A true GB2328337A (en) 1999-02-17

Family

ID=19517253

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9717793A Withdrawn GB2328337A (en) 1997-08-12 1997-08-21 Encoding motion vectors

Country Status (6)

Country Link
JP (1) JPH1175188A (en)
KR (1) KR100252342B1 (en)
CN (1) CN1208313A (en)
DE (1) DE19737805A1 (en)
FR (1) FR2767440A1 (en)
GB (1) GB2328337A (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2368220A (en) * 2000-10-09 2002-04-24 Snell & Wilcox Ltd Compression of motion vectors
AU2001293994B2 (en) * 2000-10-09 2007-04-26 Snell & Wilcox Limited Compression of motion vectors
US7646810B2 (en) 2002-01-25 2010-01-12 Microsoft Corporation Video coding
US7664177B2 (en) 2003-09-07 2010-02-16 Microsoft Corporation Intra-coded fields for bi-directional frames
US7924920B2 (en) 2003-09-07 2011-04-12 Microsoft Corporation Motion vector coding and decoding in interlaced frame coded pictures
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
US20120207221A1 (en) * 2009-10-16 2012-08-16 Tomoko Aono Video coding device and video decoding device
US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US8374245B2 (en) 2002-06-03 2013-02-12 Microsoft Corporation Spatiotemporal prediction for bidirectionally predictive(B) pictures and motion vector prediction for multi-picture reference motion compensation
US8379722B2 (en) 2002-07-19 2013-02-19 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
US8625669B2 (en) 2003-09-07 2014-01-07 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US8687697B2 (en) 2003-07-18 2014-04-01 Microsoft Corporation Coding of motion vector information

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1050850A1 (en) * 1999-05-03 2000-11-08 THOMSON multimedia Process for estimating a dominant motion between two frames
KR100865034B1 (en) 2002-07-18 2008-10-23 엘지전자 주식회사 Method for predicting motion vector
KR100506864B1 (en) 2002-10-04 2005-08-05 엘지전자 주식회사 Method of determining motion vector
US7599438B2 (en) * 2003-09-07 2009-10-06 Microsoft Corporation Motion vector block pattern coding and decoding
JP5025286B2 (en) * 2007-02-28 2012-09-12 シャープ株式会社 Encoding device and decoding device
KR100881559B1 (en) * 2008-02-05 2009-02-02 엘지전자 주식회사 Method for predicting motion vector
KR100907174B1 (en) * 2008-09-11 2009-07-09 엘지전자 주식회사 Method of determining motion vector
KR100907175B1 (en) * 2008-09-11 2009-07-09 엘지전자 주식회사 Method for predicting motion vector
KR100907173B1 (en) * 2008-09-11 2009-07-09 엘지전자 주식회사 Method of determining motion vector
JP5422168B2 (en) * 2008-09-29 2014-02-19 株式会社日立製作所 Video encoding method and video decoding method
CN107241604B (en) * 2010-04-01 2020-11-03 索尼公司 Image processing apparatus and method
US9124898B2 (en) 2010-07-12 2015-09-01 Mediatek Inc. Method and apparatus of temporal motion vector prediction
DK2675169T3 (en) 2011-02-09 2019-07-22 Lg Electronics Inc METHOD OF CODING AND DECODING IMAGE DATA WITH A TEMPORARY MOVEMENT VECTOR PREDICTOR AND DEVICE FOR USE THEREOF

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0624981A2 (en) * 1993-04-09 1994-11-17 Sharp Kabushiki Kaisha Motion vector detecting circuit

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3263960B2 (en) * 1991-10-22 2002-03-11 ソニー株式会社 Motion vector encoder and decoder

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0624981A2 (en) * 1993-04-09 1994-11-17 Sharp Kabushiki Kaisha Motion vector detecting circuit

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2368220A (en) * 2000-10-09 2002-04-24 Snell & Wilcox Ltd Compression of motion vectors
AU2001293994B2 (en) * 2000-10-09 2007-04-26 Snell & Wilcox Limited Compression of motion vectors
US7646810B2 (en) 2002-01-25 2010-01-12 Microsoft Corporation Video coding
US10284843B2 (en) 2002-01-25 2019-05-07 Microsoft Technology Licensing, Llc Video coding
US9888237B2 (en) 2002-01-25 2018-02-06 Microsoft Technology Licensing, Llc Video coding
US8638853B2 (en) 2002-01-25 2014-01-28 Microsoft Corporation Video coding
US8406300B2 (en) 2002-01-25 2013-03-26 Microsoft Corporation Video coding
US10116959B2 (en) 2002-06-03 2018-10-30 Microsoft Technology Licesning, LLC Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US9571854B2 (en) 2002-06-03 2017-02-14 Microsoft Technology Licensing, Llc Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US9185427B2 (en) 2002-06-03 2015-11-10 Microsoft Technology Licensing, Llc Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US8873630B2 (en) 2002-06-03 2014-10-28 Microsoft Corporation Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US8374245B2 (en) 2002-06-03 2013-02-12 Microsoft Corporation Spatiotemporal prediction for bidirectionally predictive(B) pictures and motion vector prediction for multi-picture reference motion compensation
US8379722B2 (en) 2002-07-19 2013-02-19 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
US8774280B2 (en) 2002-07-19 2014-07-08 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
US8917768B2 (en) 2003-07-18 2014-12-23 Microsoft Corporation Coding of motion vector information
US8687697B2 (en) 2003-07-18 2014-04-01 Microsoft Corporation Coding of motion vector information
US9148668B2 (en) 2003-07-18 2015-09-29 Microsoft Technology Licensing, Llc Coding of motion vector information
US8625669B2 (en) 2003-09-07 2014-01-07 Microsoft Corporation Predicting motion vectors for fields of forward-predicted interlaced video frames
US8064520B2 (en) 2003-09-07 2011-11-22 Microsoft Corporation Advanced bi-directional predictive coding of interlaced video
US7924920B2 (en) 2003-09-07 2011-04-12 Microsoft Corporation Motion vector coding and decoding in interlaced frame coded pictures
US7852936B2 (en) 2003-09-07 2010-12-14 Microsoft Corporation Motion vector prediction in bi-directionally predicted interlaced field-coded pictures
US7680185B2 (en) 2003-09-07 2010-03-16 Microsoft Corporation Self-referencing bi-directionally predicted frames
US7664177B2 (en) 2003-09-07 2010-02-16 Microsoft Corporation Intra-coded fields for bi-directional frames
US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
US8189666B2 (en) 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
US20120207221A1 (en) * 2009-10-16 2012-08-16 Tomoko Aono Video coding device and video decoding device

Also Published As

Publication number Publication date
CN1208313A (en) 1999-02-17
KR100252342B1 (en) 2000-04-15
JPH1175188A (en) 1999-03-16
DE19737805A1 (en) 1999-02-18
KR19990015907A (en) 1999-03-05
GB9717793D0 (en) 1997-10-29
FR2767440A1 (en) 1999-02-19

Similar Documents

Publication Publication Date Title
GB2328337A (en) Encoding motion vectors
US5617144A (en) Image processing system using pixel-by-pixel motion estimation and frame decimation
US6625216B1 (en) Motion estimation using orthogonal transform-domain block matching
US6275532B1 (en) Video coding device and video decoding device with a motion compensated interframe prediction
EP1378124B1 (en) Motion information coding and decoding method
EP0634874B1 (en) Determination of motion vectors in a frame decimating video encoder
KR100209793B1 (en) Apparatus for encoding/decoding a video signals by using feature point based motion estimation
US5978048A (en) Method and apparatus for encoding a motion vector based on the number of valid reference motion vectors
EP0923251A1 (en) Mode coding method and apparatus for use in an interlaced shape coder
EP1075765A1 (en) Method and apparatus for encoding a motion vector of a binary shape signal
US5969766A (en) Method and apparatus for contour motion estimating a binary image by using a weighted block match algorithm
US5654761A (en) Image processing system using pixel-by-pixel motion estimation and frame decimation
US5627591A (en) Image processing system using a pixel-by-pixel motion estimation based on feature points
KR100238893B1 (en) Motion vector coding method and apparatus
US6020933A (en) Method and apparatus for encoding a motion vector
EP0921688B1 (en) Moving vector predictive coding method and moving vector decoding method, and storage medium stored with moving vector predictive coding program and moving vector decoding program
EP0731612B1 (en) Apparatus for encoding a video signal using search grids for motion estimation and compensation
US5625417A (en) Image processing system using a feature point-based motion estimation
KR100200225B1 (en) Image processing system for very low-speed transmission
GB2341030A (en) Video motion estimation
KR100252340B1 (en) Current frame prediction method and apparatus

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)