GB2437578A - Selection of a search window for motion estimation in video encoding - Google Patents

Selection of a search window for motion estimation in video encoding

Info

Publication number
GB2437578A
Authority
GB
United Kingdom
Prior art keywords
search window
motion
picture
selection means
search
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0608497A
Other versions
GB0608497D0 (en)
Inventor
Anthony Richard Huggett
Neil Trimboy
Philip Bird
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ericsson Television AS
Original Assignee
Tandberg Television AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tandberg Television AS filed Critical Tandberg Television AS
Priority to GB0608497A priority Critical patent/GB2437578A/en
Publication of GB0608497D0 publication Critical patent/GB0608497D0/en
Publication of GB2437578A publication Critical patent/GB2437578A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/57Motion estimation characterised by a search window with variable size or shape

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A motion estimator for a video encoder selects a search window for locating an area in a reference picture similar to a macroblock in a source picture. The chosen shape and area of the search window may be varied from macroblock to macroblock. The shape and area may be chosen, for example, dependent on a predetermined type of motion in the video stream to be processed, on pre-processor analysis or on a position of a picture relative to reference pictures in a Group of Pictures. The search window may have the shape of a rectangle, oval, octagon, cruciform or Celtic cross.

Description

<p>Video Encoding</p>
<p>This invention relates to video encoding and in particular to video encoding with a motion estimation search window of a selectable shape and area.</p>
<p>In a digital video format, images are usually compressed for transmission and storage.</p>
<p>Often sequential images in the video sequence differ only slightly. The difference from a previous, or following, image in the sequence can then be detected and encoded, rather than the entire picture. Such compression techniques are widely used, for example in MPEG encoding.</p>
<p>During compression or encoding, each current picture to be encoded is divided into a grid of macroblocks, each containing 16x16 pixels. A macroblock of a current picture may be compared to a range of macroblock-sized areas (not necessarily aligned with the macroblock grid) from a previously encoded picture in the video sequence. Often the best match is found at a location offset from that of the macroblock in the current picture. This offset is known as a motion vector, since it indicates the movement of the block between the two pictures. The motion vector can then be used to predict the macroblock for the current picture from the previously encoded picture, thereby reducing the number of bits required in encoding the current picture, since only the motion vector need be transmitted along with an error signal representing the difference between the prediction and the actual macroblock.</p>
<p>Thus an important feature in video encoding is an ability to predict parts of a picture, hereinafter referred to as source macroblocks, from parts of related pictures within a stream.</p>
<p>Such predictions are used in most implementations of block-based motion-compensated hybrid transform coders, such as MPEG-2 and H.264. Since pictures within a video sequence are in general not identical (otherwise elements in the picture would not move relative to each other or the camera), the position of a best match within a reference picture for a particular source macroblock is in general offset from the position of the macroblock within the source picture. Typically this offset is representative of the real-world motion of the scene, so these offsets are referred to as motion vectors, and the process of finding the best match is referred to as motion estimation.</p>
<p>There are many known methods of performing motion estimation. These include minimising a sum of absolute differences (SAD) and maximising a cross-correlation.</p>
<p>Whatever the method employed, there remains a fundamental limitation in that a very large number of possible matching locations need to be searched for a source macroblock in limited time, and this requires a large computational expenditure.</p>
<p>For instance, searching a region of the reference image of 144x80 pixels requires computing the sum of absolute differences for each possible matching position.</p>
<p>Computing the sum of absolute differences for one possible position of a match for a source macroblock requires 256 absolute subtraction operations and 255 additions. There are 8,385 possible matching positions which make the total computational load 2,146,560 absolute subtractions and 2,138,175 additions. The 8,385 results then need to be searched to find a best match. On a standard definition picture, this computation has to be completed in about 24 microseconds.</p>
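<p>As an illustration of the workload described above, the following sketch performs an exhaustive sum-of-absolute-differences search of a 16x16 source macroblock over a 144x80 reference region. It is not taken from the patent; the function names and the use of NumPy are illustrative assumptions, but the candidate-position and operation counts it prints match the figures quoted in the text.</p>

```python
import numpy as np

def full_search_sad(source_mb, reference_region):
    """Exhaustive SAD search of a 16x16 block over a reference region.

    Returns the best offset (dy, dx), its SAD value and the number of
    candidate positions examined."""
    height, width = reference_region.shape
    positions_y = height - 16 + 1      # candidate rows
    positions_x = width - 16 + 1       # candidate columns
    best_sad, best_offset = None, None
    src = source_mb.astype(np.int32)
    for dy in range(positions_y):
        for dx in range(positions_x):
            candidate = reference_region[dy:dy + 16, dx:dx + 16].astype(np.int32)
            # 256 absolute differences plus 255 additions per candidate position.
            sad = int(np.abs(src - candidate).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_offset = sad, (dy, dx)
    return best_offset, best_sad, positions_y * positions_x

# Reproduce the figures from the text: a 144x80 search region.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(80, 144), dtype=np.uint8)
source = reference[32:48, 64:80].copy()          # plant an exact match
offset, sad, positions = full_search_sad(source, reference)
print(positions)                                 # 129 * 65 = 8385 positions
print(positions * 256, positions * 255)          # 2,146,560 and 2,138,175 operations
```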
<p>As the example shows, motion estimation is computationally intensive; therefore it is typically only possible to search a restricted area of a reference picture around a position of a source macroblock. Matches which lie outside this area will not be found by the motion estimator, potentially leading to poorer encoding. It is therefore vital to maximise the probability of finding the best match by choosing the characteristics of a search window carefully.</p>
<p>Equally, it is important not to expend large amounts of processing resource on inefficient searching when processing power is constrained due to thermal, power dissipation or limited processing resource considerations.</p>
<p>It is an object of the present invention at least to ameliorate the aforesaid difficulties in the prior art.</p>
<p>According to a first aspect of the invention, there is provided a motion estimator for a video encoder comprising selection means for selecting a search window of a selected area and shape for locating an area in a reference picture similar to a macroblock in a source picture.</p>
<p>Preferably, the selection means is adapted to select a different search window for different macroblocks within a picture.</p>
<p>Conveniently, the selection means is adapted to select a search window dependent on input of a type of motion known to appear in a scene to be encoded.</p>
<p>Advantageously, the selection means is adapted to select a search window dependent on a position of a picture to be encoded relative to reference pictures in a Group of Pictures containing the picture to be encoded.</p>
<p>Conveniently, the motion estimator further comprises analysing means for analysing types of motion present in a sequence of pictures to be encoded and the selection means is adapted to select a search window dependent on a result of the analysis by the analysing means.</p>
<p>Advantageously, the selection means is adapted to select a search window dependent on processing time or resources available.</p>
<p>Advantageously, the selection means is adapted to select a search window for a macroblock dependent on a proximity of the macroblock to an edge of a picture.</p>
<p>Advantageously the motion estimator comprises a pre-processor for determining dynamic statistics and the selection means is adapted to select a search window dependent on the dynamic statistics.</p>
<p>Conveniently, the selection means is adapted such that if the pre-processor identifies regions with low motion, the selection means selects a small search area for macroblocks within these regions.</p>
<p>Conveniently, the selection means is adapted such that if the pre-processor identifies highly correlated motion of a known vector, the selection means offsets a centre of the search area by the aforesaid known vector.</p>
<p>Conveniently, the selection means is adapted to select a search window having a shape of a rectangle, oval, octagon, a cruciform shape, or a Celtic cross.</p>
<p>According to a second aspect of the invention there is provided a method of motion estimation for a video encoder comprising selecting a search window of a selected area and shape for locating an area in a reference picture similar to a macroblock in a source picture.</p>
<p>Preferably, selecting a search window comprises selecting different search windows for different macroblocks within a picture.</p>
<p>Conveniently, the method comprises selecting a search window dependent on input of a type of motion known to appear in a scene to be encoded.</p>
<p>Advantageously, the method comprises selecting a search window dependent on a position of a picture to be encoded relative to reference pictures in a Group of Pictures containing the picture to be encoded.</p>
<p>Conveniently, the method further comprises analysing types of motion present in a sequence of pictures to be encoded and selecting a search window dependent on a result of the analysing types of motion.</p>
<p>Advantageously, the method comprises selecting a search window dependent on processing time or resources available.</p>
<p>Advantageously, the method comprises selecting a search window for a macroblock dependent on a proximity of the macroblock to an edge of a picture.</p>
<p>Advantageously, the method comprises a pre-processor determining dynamic statistics and selecting a search window dependent on the dynamic statistics.</p>
<p>Conveniently, if the pre-processor identifies regions with low motion, a small search area is selected for macroblocks within these regions.</p>
<p>Conveniently, if the pre-processor identifies highly correlated motion of a known vector, a centre of the search area is offset by the aforesaid known vector.</p>
<p>Conveniently, the method comprises selecting a search window having a shape of a rectangle, oval, octagon, a cruciform shape, or a Celtic cross.</p>
<p>According to a third aspect of the invention there is provided computer program media comprising code means for performing all the steps of the method described above when the program is run on one or more computers.</p>
<p>The invention will now be described, by way of example, with reference to the accompanying drawings in which:</p>
<p>Figure 1 is a block diagram of a process used by the motion estimator of the invention; and</p>
<p>Figure 2 shows examples of window shapes according to the invention.</p>
<p>In the Figures, like reference numerals denote like parts or steps.</p>
<p>In a video encoder according to the present invention, a shape and area of a search window is adaptively controlled, so that a search is performed in those locations where matching is most likely, thereby minimising computational resource and power consumption.</p>
<p>A video encoder according to the invention has a controllable search window for motion estimation. This search window is preferably controllable in shape, and also in overall area, and may change between macroblocks, although it will be understood that the search window may be variable in only one of shape and area.</p>
<p>Referring to Figure 1, selection 14 of these characteristics is determined by a combination of factors: prior knowledge 11, pre-processor analysis 12 and temporal displacement 13.</p>
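<p>The selection step 14 can be pictured as a small decision function combining these three inputs. The sketch below is purely illustrative: the function name, the base window size and the simple scaling and shrinking rules are assumptions chosen for the example, not the patented method.</p>

```python
def select_search_window(prior_motion=None, preproc_stats=None, frame_distance=1,
                         base_width=32, base_height=32):
    """Combine prior knowledge (11), pre-processor analysis (12) and
    temporal displacement (13) into a search-window choice.

    Returns (shape, half_width, half_height, centre_offset)."""
    shape = "rectangle"
    # Prior knowledge of the footage type biases the window shape.
    if prior_motion == "vertical":
        shape = "cruciform"          # motion mostly vertical/horizontal
    elif prior_motion == "isotropic":
        shape = "octagon"            # motion equally likely in all directions

    # Temporal displacement: scale dimensions with the picture distance.
    half_w = base_width * frame_distance
    half_h = base_height * frame_distance

    # Pre-processor statistics: shrink for low motion, offset for a pan.
    offset = (0, 0)
    if preproc_stats:
        if preproc_stats.get("low_motion"):
            half_w, half_h = half_w // 2, half_h // 2
        offset = preproc_stats.get("pan_vector", (0, 0))
    return shape, half_w, half_h, offset

print(select_search_window(prior_motion="vertical",
                           preproc_stats={"pan_vector": (8, 0)},
                           frame_distance=2))
```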
<p>Prior knowledge 11 of the source is helpful in choosing a search window shape and area because certain types of footage (e.g. golf, where a long-shot camera picks up a ball as it drops onto the fairway) are known to tend to contain more vertical motion than others (e.g. football, which contains many horizontal pans).</p>
<p>Temporal distance 13 from a reference frame is important because, for example, a steadily moving object from a reference picture two frames earlier would be expected to have moved further than the same object in a reference picture only one frame earlier, e.g. the second B picture in an I0B1B2P3... Group of Pictures. In MPEG-2 there are three types of pictures. I pictures are intra-coded without reference to any other pictures. P pictures are predicted from an immediately previous reference picture, which may itself be P or I. B pictures are predicted from the previous and the next reference picture, I or P. Typically these are arranged into a repeating sequence or Group of Pictures (GOP), for instance IBBPBBPBBPBBPBB IBBP... . A total available searching power, i.e. area, can be allocated such that a motion vector corresponding to true motion of an object in the scene from I to P may always be captured by smaller searches from the B pictures. For example, when coding a sequence I0B1B2P3 five searches are performed. For an object moving at a constant velocity these search dimensions must be in the ratios of the number of pictures separating the search pictures: I0B1: 1, I0B2: 2, B1P3: 2, B2P3: 1, I0P3: 3. These numbers are dimensions, not areas. If an object has moved 60 pixels right and 30 pixels up between I0 and P3, then if the motion is constant it will have moved only 20 pixels right and 10 pixels up between I0 and B1, and 40 pixels right and 20 pixels up between I0 and B2. Therefore, if 60 pixels right and 30 pixels up corresponds to the maximum search distances for a P picture, only 40 pixels right and 20 up need to be allowed for the I0B2 and B1P3 searches, and only 20 and 10 respectively for the I0B1 and B2P3 searches.</p>
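<p>The ratios above follow directly from the picture distances within the GOP. Assuming constant-velocity motion as in the worked example, a short sketch (with illustrative names) makes the scaling explicit.</p>

```python
# Scale the maximum P-picture search dimensions by temporal distance.
# Assumes constant-velocity motion, as in the worked example in the text.
P_DISTANCE = 3                      # I0 -> P3 spans three picture periods
MAX_P_SEARCH = (60, 30)             # (right, up) pixels allowed for the P search

def scaled_search(frame_distance, p_distance=P_DISTANCE, p_search=MAX_P_SEARCH):
    """Search dimensions needed for a reference `frame_distance` pictures away."""
    return tuple(d * frame_distance // p_distance for d in p_search)

for label, dist in [("I0-B1", 1), ("I0-B2", 2), ("B1-P3", 2),
                    ("B2-P3", 1), ("I0-P3", 3)]:
    print(label, scaled_search(dist))   # (20,10), (40,20), (40,20), (20,10), (60,30)
```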
<p>If it is assumed that a motion estimation engine works by searching a constant number of locations per second, and that the search is symmetrical about the reference macroblock, then, to track the above stated movement, the search in the P picture must extend at least 60 by 30 pixels from the reference macroblock, i.e. a search area of 121*61 = 7381 search locations.</p>
<p>For the B pictures, the search area needs to be at least 40 by 20 from one reference picture plus 20 by 10 from the other reference picture.</p>
<p>Fortunately the area scales as the square of the scaled dimensions, thus the total required search area is 81*41 + 41*21 = 4182 locations. Under these assumptions, there is even time to search both reference pictures at 81 by 41 (2 x 3321 = 6642 locations), should it be chosen to do so.</p>
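<p>The location counts quoted in this example come from the usual symmetric-search arithmetic, (2*dx+1)*(2*dy+1); the few lines below reproduce them (a sketch, not part of the patent).</p>

```python
def locations(half_x, half_y):
    """Number of candidate positions in a symmetric +/-half_x by +/-half_y search."""
    return (2 * half_x + 1) * (2 * half_y + 1)

print(locations(60, 30))                      # 121 * 61 = 7381 for the P picture
print(locations(40, 20) + locations(20, 10))  # 81*41 + 41*21 = 4182 for a B picture
print(2 * locations(40, 20))                  # 2 * 3321 = 6642, both references at 81x41
```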
<p>Some compression algorithms permit the use of multiple reference pictures. By using dynamic allocation of search areas between different reference pictures an amount of resource allocated to each can be optimised. This is a generalisation from the MPEG-2 case which allows bidirectional prediction only from the previous and next reference pictures. For example, in H.264 a P picture may be predicted from more than one previous P or I picture.</p>
<p>The selection of search window characteristics may also depend on processing time available since different resolution pictures and different frame rate video streams will have different time intervals between macroblocks. The larger this time interval the larger the search area can be.</p>
<p>Depending on the implementation, the interval between macroblocks may be a constant or a variable duration. This duration can be controlled in order to optimise the search area to those macroblocks believed to require a larger area search.</p>
<p>A technique known as 3:2 pull down is used to convert 24 frames-per-second film material to a format suitable for transmission in 29.97 frames-per-second video systems. This process is typically performed in reverse at a compression encoder such that a video signal originating from a film source is encoded at approximately 24 frames-per-second and the decoder controlled to display the signal at 29.97 frames-per-second. At the encoder this typically results in a field period, i.e. half a frame period, of no activity due to dropping the repeated field produced by the 3:2 pull-down process. A compression encoder according to the invention can make use of this time to increase the time available for processing each macroblock, therefore increasing the search area possible for macroblocks in that picture.</p>
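<p>The time recovered from a dropped repeat field can be expressed as an increased per-macroblock processing budget. The back-of-the-envelope sketch below is illustrative only; the 720x480 frame size and 29.97 Hz rate are assumptions chosen to match common standard-definition figures, not values taken from the patent.</p>

```python
# Back-of-the-envelope budget for a 720x480, 29.97 Hz stream carrying
# 24 frame/s film via 3:2 pull-down.  All figures are illustrative.
MACROBLOCKS = (720 // 16) * (480 // 16)      # 45 * 30 = 1350 per picture
FRAME_PERIOD = 1 / 29.97                     # seconds
FIELD_PERIOD = FRAME_PERIOD / 2

normal_budget = FRAME_PERIOD / MACROBLOCKS
# A picture that gained a dropped repeat field has a frame plus a field period available.
pulldown_budget = (FRAME_PERIOD + FIELD_PERIOD) / MACROBLOCKS

print(f"{normal_budget * 1e6:.1f} us per macroblock normally")
print(f"{pulldown_budget * 1e6:.1f} us per macroblock with the recovered field period")
```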
<p>The technique described for 3:2 pull-down can also be used in general scenarios when a repeated field has been detected, such as cartoon material or stills.</p>
<p>A choice of window shape and area may also be dependent on dynamic statistics from pre-processor analysis 12. Dynamic statistics are of great potential benefit. If the pre-processor can identify regions with low motion, a small search area can be selected for macroblocks within these regions. Advantageously, if there is highly correlated motion of a known vector e.g. a video pan, the centre of the search area may be offset by the aforesaid known vector.</p>
<p>Furthermore, by choosing to do a small area search for some searches, more time may be allowed for a more extensive search on high activity areas of the image, provided pipelining of the overall encoding process is not violated.</p>
<p>Dynamic statistics coming from a pre-processor may also establish a search window shape if the pre-processor can establish a global motion vector or trend, such as that caused by a camera pan.</p>
<p>In addition, a different search window may be used for a macroblock near to an edge of an image so that search effort is not wasted searching beyond the edges of a reference picture.</p>
<p>Any processing effort which is thereby saved can be reallocated to other searches.</p>
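<p>Restricting the window near picture edges is essentially a clipping operation. The sketch below (with illustrative names and sizes) clips a symmetric window to the picture bounds and reports how many search locations are freed for reallocation.</p>

```python
def clip_window(mb_x, mb_y, half_w, half_h, pic_w, pic_h, mb_size=16):
    """Clip a symmetric search window so no candidate block leaves the picture.

    (mb_x, mb_y) is the top-left pixel of the source macroblock.
    Returns the clipped offsets (left, right, up, down) and locations saved."""
    left = min(half_w, mb_x)
    right = min(half_w, pic_w - mb_size - mb_x)
    up = min(half_h, mb_y)
    down = min(half_h, pic_h - mb_size - mb_y)
    full = (2 * half_w + 1) * (2 * half_h + 1)
    clipped = (left + right + 1) * (up + down + 1)
    return (left, right, up, down), full - clipped

# A macroblock in the top-left corner of a 720x576 picture with a +/-48 x +/-24 window.
print(clip_window(0, 0, 48, 24, 720, 576))   # most of the locations can be reallocated
```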
<p>Other parameters that may be used to influence the search window characteristics include: 1. Available processing power: a search area can be maximised to utilise X% of the available processing resource Y. The available resource Y in a software encoder may well be variable.</p>
<p>2. Thermal/power dissipation: the search area can be varied to alter an amount of processing performed and therefore an amount of power dissipated within the motion estimation process.</p>
<p>3. Search area can be minimised, i.e. not made larger than believed necessary, to reduce power dissipation. This may increase battery life in some applications.</p>
<p>In general these techniques can be used either to maximise the search area to cover the most possibilities or to minimise the search area to reduce processing effort and therefore power dissipation.</p>
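<p>As a sketch of how such a constraint might be applied, the following sizes a symmetric square window to fit within a given fraction of an available processing budget. It is entirely illustrative; the operation counts and utilisation figure are assumptions, not values from the patent.</p>

```python
def budget_search_area(resource_budget_ops, ops_per_location, utilisation=0.8):
    """Largest symmetric square window that fits the processing budget.

    resource_budget_ops: operations available per macroblock (the resource "Y").
    utilisation:         fraction of the budget to spend (the "X%" figure).
    """
    max_locations = int(resource_budget_ops * utilisation / ops_per_location)
    # Solve (2h+1)^2 <= max_locations for the half-dimension h.
    half = max(0, (int(max_locations ** 0.5) - 1) // 2)
    return half, (2 * half + 1) ** 2

# e.g. 2,000,000 ops per macroblock, 511 ops per SAD location (256 + 255), use 80%.
print(budget_search_area(2_000_000, 511))   # half-dimension and locations searched
```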
<p>Referring to Figure 2, typically the motion search engine has a number of possible search windows suited to different situations, and uses 15 the window suggested by the controlling engines. Typical shapes might include cruciform 22, which is useful where the motion is predominantly vertical and horizontal; simple rectangular 21; oval 23 or octagonal, when the motion is isotropic (equally likely in all directions), e.g. underwater footage; or the shape of a Celtic cross 24. The windows of Figure 2 are drawn to scale such that all have equal area.</p>
<p>Although the sample search areas are symmetrical in Figure 2, there is no requirement for this to be the case. The windows do not have to be symmetrical in extent or shape from the position representing the (0,0) vector.</p>
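<p>One way to represent candidate shapes such as those of Figure 2 is as boolean masks over the grid of candidate offsets. The definitions below are rough illustrative approximations, not taken from the drawings; they do not attempt the equal-area sizing used in Figure 2, and the Celtic cross is omitted for brevity.</p>

```python
import numpy as np

def window_mask(shape, half_w, half_h):
    """Boolean mask of candidate motion-vector offsets for a given window shape."""
    ys, xs = np.mgrid[-half_h:half_h + 1, -half_w:half_w + 1]
    if shape == "rectangle":
        return np.ones_like(xs, dtype=bool)
    if shape == "oval":
        return (xs / half_w) ** 2 + (ys / half_h) ** 2 <= 1.0
    if shape == "cruciform":
        # Two overlapping bars: useful when motion is mostly horizontal/vertical.
        return (np.abs(xs) <= half_w // 3) | (np.abs(ys) <= half_h // 3)
    if shape == "octagon":
        # Rectangle with the corners cut off.
        return np.abs(xs) / half_w + np.abs(ys) / half_h <= 1.5
    raise ValueError(shape)

for shape in ("rectangle", "oval", "cruciform", "octagon"):
    print(shape, int(window_mask(shape, 48, 24).sum()), "locations")
```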
<p>In use, the action of the pre-processor is as follows: known methods are used for the detection of 3:2 pull-down, global motion (pan vector), noise, zoom, scene changes etc. A scene change is important because typically an I picture would be put as a first picture of a new scene. There is little point in any final B pictures from the previous scene searching for motion in the new scene. This power may be recovered to be used elsewhere.</p>
<p>Global motion detection is used to offset the search according to the global predictor.</p>
<p>Noise is important because in very noisy sequences the noise may cause erroneous matches which do not lead to good compression. In this case the search area could be restricted so that only small searches are performed.</p>
<p>Zooms typically result in parts of the image moving in different directions. The motion at the top of the image will be in the opposite direction to the bottom, assuming that the centre of the zoom lies within the picture. This can be used to give different search offsets to different regions of the image.</p>
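<p>For a zoom whose centre lies within the picture, a simple model is radial motion away from (or towards) that centre, so a search offset for each macroblock can be derived from its position relative to the zoom centre. The linear model below is an illustrative approximation, not taken from the patent.</p>

```python
def zoom_offset(mb_centre_x, mb_centre_y, zoom_centre_x, zoom_centre_y, zoom_rate):
    """Approximate search-centre offset for a macroblock during a zoom.

    zoom_rate > 0 means zooming in (the image expands away from the centre),
    zoom_rate < 0 means zooming out.  Purely illustrative linear model."""
    dx = mb_centre_x - zoom_centre_x
    dy = mb_centre_y - zoom_centre_y
    return round(dx * zoom_rate), round(dy * zoom_rate)

# Zoom centred in the middle of a 720x576 picture, expanding by 2% per picture:
print(zoom_offset(100, 100, 360, 288, 0.02))   # top-left region moves up and left
print(zoom_offset(620, 476, 360, 288, 0.02))   # bottom-right moves down and right
```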
<p>When coding 3:2 pull-down sequences, an encoder can either use the extra field period in its entirety for coding one picture and then be constrained to a frame period for the next picture (the inefficient way), or distribute the gained field period evenly, 50% to each picture, thus spreading the gained processing time to enable a larger search area.</p>
<p>When coding 3:2 pull-down sequences it is beneficial to make pictures with repeated fields reference P pictures rather than B pictures, as the available time and therefore search area will be greater to cover the larger prediction distance.</p>

Claims (25)

    <p>CLAIMS</p>
    <p>1. A motion estimator for a video encoder comprising selection means for selecting a search window of a selected area and shape for locating an area in a reference picture similar to a macroblock in a source picture.</p>
    <p>2. A motion estimator as claimed in claim 1, wherein the selection means is adapted to select a different search window for different macroblocks within a picture.</p>
    <p>3. A motion estimator as claimed in claims 1 or 2, wherein the selection means is adapted to select a search window dependent on input of a type of motion known to appear in a scene to be encoded.</p>
    <p>4. A motion estimator as claimed in any of claims 1 to 3, wherein the selection means is adapted to select a search window dependent on a position of a picture to be encoded relative to reference pictures in a Group of Pictures containing the picture to be encoded.</p>
    <p>5. A motion estimator as claimed in any of the preceding claims further comprising analysing means for analysing types of motion present in a sequence of pictures to be encoded and wherein the selection means is adapted to select a search window dependent on a result of the analysis by the analysing means.</p>
    <p>6. A motion estimator as claimed in any of the preceding claims wherein the selection means is adapted to select a search window dependent on processing time or resources available.</p>
    <p>7. A motion estimator as claimed in any of the preceding claims, wherein the selection means is adapted to select a search window for a macroblock dependent on a proximity of the macroblock to an edge of a picture.</p>
    <p>8. A motion estimator as claimed in any of the preceding claims, comprising a pre-processor for determining dynamic statistics wherein the selection means is adapted to select a search window dependent on the dynamic statistics.</p>
    <p>9. A motion estimator as claimed in claim 8, wherein the selection means is adapted such that if the pre-processor identifies regions with low motion, the selection means selects a small search area for macroblocks within these regions.</p>
    <p>10. A motion estimator as claimed in claim 9, wherein the selection means is adapted such that if the pre-processor identifies highly correlated motion of a known vector, the selection means offsets a centre of the search area by the aforesaid known vector.</p>
    <p>11. A motion estimator as claimed in any of the preceding claims wherein the selection means is adapted to select a search window having a shape of a rectangle, oval, octagon, a cruciform shape, or a Celtic cross.</p>
    <p>12. A method of motion estimation for a video encoder comprising selecting a search window of a selected area and shape for locating an area in a reference picture similar to a macroblock in a source picture.</p>
    <p>13. A method as claimed in claim 12, wherein selecting a search window comprises selecting different search windows for different macroblocks within a picture.</p>
    <p>14. A method as claimed in claims 12 or 13, comprising selecting a search window dependent on input of a type of motion known to appear in a scene to be encoded.</p>
    <p>15. A method as claimed in any of claims 12 to 14, comprising selecting a search window dependent on a position of a picture to be encoded relative to reference pictures in a Group of Pictures containing the picture to be encoded.</p>
    <p>16. A method as claimed in any of claims 12 to 15, further comprising analysing types of motion present in a sequence of pictures to be encoded and selecting a search window dependent on a result of the analysing types of motion.</p>
    <p>17. A method as claimed in any of claims 12 to 16, comprising selecting a search window dependent on processing time or resources available.</p>
    <p>18. A method as claimed in any of claims 12 to 17, comprising selecting a search window for a macroblock dependent on a proximity of the macroblock to an edge of a picture.</p>
    <p>19. A method as claimed in any of claims 12 to 18, comprising a pre-processor determining dynamic statistics and selecting a search window dependent on the dynamic statistics.</p>
    <p>20. A method as claimed in claim 19, wherein if the pre-processor identifies regions with low motion, a small search area is selected for macroblocks within these regions.</p>
    <p>21. A method as claimed in claim 20, wherein if the pre-processor identifies highly correlated motion of a known vector, a centre of the search area is offset by the aforesaid known vector.</p>
    <p>22. A method as claimed in any of claims 12 to 21, comprising selecting a search window having a shape of a rectangle, oval, octagon, a cruciform shape, or a Celtic cross.</p>
    <p>23. A computer program media comprising code means for performing all the steps of the method of any of claims 12 to 22 when the program is run on one or more computers.</p>
    <p>24. A motion estimator substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.</p>
    <p>25. A method of motion estimation for a video encoder substantially as hereinbefore described with reference to and as illustrated in the accompanying drawings.</p>
GB0608497A 2006-04-28 2006-04-28 Selection of a search window for motion estimation in video encoding Withdrawn GB2437578A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0608497A GB2437578A (en) 2006-04-28 2006-04-28 Selection of a search window for motion estimation in video encoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0608497A GB2437578A (en) 2006-04-28 2006-04-28 Selection of a search window for motion estimation in video encoding

Publications (2)

Publication Number Publication Date
GB0608497D0 GB0608497D0 (en) 2006-06-07
GB2437578A true GB2437578A (en) 2007-10-31

Family

ID=36590044

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0608497A Withdrawn GB2437578A (en) 2006-04-28 2006-04-28 Selection of a search window for motion estimation in video encoding

Country Status (1)

Country Link
GB (1) GB2437578A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259816B1 (en) * 1997-12-04 2001-07-10 Nec Corporation Moving picture compressing system capable of effectively executing compressive-encoding of a video signal in response to an attitude of a camera platform
US20020163968A1 (en) * 2001-03-19 2002-11-07 Fulvio Moschetti Method for block matching motion estimation in digital video sequences
US6480670B1 (en) * 1994-03-31 2002-11-12 Mitsubishi Denki Kabushiki Kaisha Video signal encoding method and system
US20030053544A1 (en) * 2001-09-18 2003-03-20 Tomoko Yasunari Method and apparatus for motion vector detection and medium storing method program directed to the same
US20030161400A1 (en) * 2002-02-27 2003-08-28 Dinerstein Jonathan J. Method and system for improved diamond motion search
US20040165663A1 (en) * 2003-01-10 2004-08-26 Renesas Technology Corp. Motion detecting device and search region variable-shaped motion detector


Also Published As

Publication number Publication date
GB0608497D0 (en) 2006-06-07


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)