WO2002096116A1 - Motion estimation using packed operations - Google Patents
- Publication number
- WO2002096116A1 (PCT/SG2001/000096)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- macroblock
- picture
- pixel
- motion estimation
- pixels
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/43—Hardware specially adapted for motion estimation or compensation
- H04N19/433—Hardware specially adapted for motion estimation or compensation characterised by techniques for memory access
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Abstract
The invention relates to a method of motion estimation for use in a video picture encoder. A first macroblock of a present picture is compared with a plurality of macroblocks within a search window of a previous or next picture for determining a second macroblock within the search window which is spatially displaced from the first macroblock by an amount represented by a motion vector at a full pixel resolution level. The comparison of the present picture and the previous or next picture is performed by the encoder using packed operations. The method includes the step of storing each pixel of the first macroblock or the search window using two bytes per pixel for avoiding byte misalignment during the packed operations.
Description
MOTION ESTIMATION USING PACKED OPERATIONS
FIELD OF THE INVENTION
The present invention relates to methods and systems for motion estimation of video pictures in video picture encoding. In particular, the invention relates to methods and systems of motion estimation using packed operations in a microprocessor.
BACKGROUND OF THE INVENTION
For encoding video pictures, it is desirable to minimise redundancy of information in the pictures where possible, in order to speed up the encoding process. This can be done by minimising the degree of redundancy between consecutive pictures inherent in most picture sequences. As part of the encoding process, certain pictures in a group of pictures (GOP) can be encoded by reference to the next or previous picture in the sequence. To do this, a comparison of the previous or next picture with the picture to be encoded is made (called motion estimation), and a motion vector is calculated. The motion vector represents an offset of a macroblock in the encoded picture relative to the corresponding macroblock in the reference picture.
Motion estimation is an efficient way to exploit the temporal redundancy inherent in the video sequences. It is widely used in almost all existing video coding algorithms. To improve the encoding efficiency, the resolution of the motion vector is half pixel in many video encoding standards, such as H.263, H.263+, MPEG2, and MPEG4. Because of the large amount of computation required for motion estimation, it is always one of the most critical routines in real time implementation of a video coding application. Improvements in efficiency of motion estimation methods have a significant effect on the overall video encoding speed.
So-called "packed operations" are one way to speed up computation in some digital signal processors (DSPs), such as the ST120 processor made by STMicroelectronics and the TMS320C6000 processor made by Texas Instruments.
Video coding standards such as H.263, H.263+, and MPEG2 utilize half pixel resolution motion estimation to achieve better coding efficiency. In the first stage of motion estimation, an integer motion vector within the searching window is obtained using a searching algorithm, for example, hierarchical searching. Then the predictive image region is obtained by using the integer motion vector determined from the first stage and is interpolated. By further searching within the interpolated region, half pixel values of the motion vector are obtained during the second stage.
If packed operations are used in motion estimation, two problems may arise. The first problem is that byte misalignment may occur at both the first and second stages of motion estimation. The second problem is that some additional processing is required at the second stage of motion estimation after each packed loading.
DSPs which can do packed operations (like the ST120 processor) are capable of loading two data units (where each of them does not exceed 16 bits) into one register, with one data unit at the upper half and another at the lower half, in a single load instruction (called a packed loading). Packed operations effectively allow two ALU operations to be executed in one single instruction. It is desirable to make full use of such packed operations to improve efficiency during motion estimation, which requires a lot of ALU operations.
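The packed loading and packed ALU behaviour described above can be sketched in portable C. The function names here are illustrative models of what such a DSP does in one instruction, not actual ST120 intrinsics:

```c
#include <stdint.h>

/* Model of a packed register: two independent 16-bit lanes held in one
 * 32-bit word, as on DSPs such as the ST120. */
typedef uint32_t packed2x16;

/* "Packed loading": one load brings two adjacent 16-bit data units into
 * a single register, one at the lower half and one at the upper half. */
static packed2x16 packed_load(const uint16_t *p)
{
    return ((uint32_t)p[1] << 16) | (uint32_t)p[0];
}

/* Packed subtraction: two independent 16-bit subtractions executed as a
 * single ALU operation (the lanes do not carry into each other). */
static packed2x16 packed_sub(packed2x16 a, packed2x16 b)
{
    uint16_t lo = (uint16_t)((a & 0xFFFFu) - (b & 0xFFFFu));
    uint16_t hi = (uint16_t)((a >> 16) - (b >> 16));
    return ((uint32_t)hi << 16) | lo;
}
```

On real hardware the load and the subtraction each cost one instruction, which is why halving the pixel-difference work this way matters for motion estimation.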
A requirement for packed loading is that the loading start point must be aligned to an even byte boundary. However, it is normal to use one byte to store one pixel for the luminance component. Therefore, there will be byte misalignment if no processing is carried out before executing a packed loading during motion estimation.
At the second stage of motion estimation, the original pixels are interpolated and the original and interpolated pixels are normally stored in a mixed fashion with one byte per pixel, as illustrated in Figure 1. Apart from the possible byte misalignment, the desired pixels are located in a non-consecutive way. If the original loading start point is at an even
byte boundary, as shown in Figure 2, the 1st and 3rd bytes of the register after loading contain unwanted bytes (corresponding to horizontally interpolated pixels, for example) which have to be masked off for later packed ALU operations, like packed subtraction.
If the original loading start point is at an odd byte boundary, we can move the loading start point by one byte in either the forward or backward direction. Then after loading, the 1st and 3rd bytes will contain the desired bytes, while the 0th and 2nd bytes are unwanted. This is illustrated in Figure 3. A right shift and then masking must be done to get the desired data for later packed ALU operations. This additional processing increases the computational load significantly since it must be carried out after every packed loading.
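The extra work at an odd boundary can be illustrated with a small C sketch. This is a software model of the shift-and-mask sequence, with byte positions as in Figure 3, not DSP intrinsics:

```c
#include <stdint.h>

/* Sketch of the extra processing at an odd loading start point: the
 * start is moved back one byte so the 32-bit load is aligned.  After
 * loading, the wanted pixels sit in bytes 1 and 3 while bytes 0 and 2
 * are unwanted, so a right shift and a mask are needed before any
 * packed ALU operation can use the register. */
static uint32_t load_odd_aligned(const uint8_t *pix /* odd address */)
{
    const uint8_t *base = pix - 1;             /* back up to even boundary */
    uint32_t word = (uint32_t)base[0]
                  | ((uint32_t)base[1] << 8)
                  | ((uint32_t)base[2] << 16)
                  | ((uint32_t)base[3] << 24); /* one aligned 32-bit load */
    /* Desired bytes are at positions 1 and 3: shift right by one byte,
     * then mask each 16-bit lane down to its low byte. */
    return (word >> 8) & 0x00FF00FFu;
}
```

The shift and mask are exactly the per-load overhead the invention avoids.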
SUMMARY OF THE INVENTION
The present invention provides a method of motion estimation for use in a video picture encoder, wherein a first macroblock of a present picture is compared with a plurality of macroblocks within a search window of a previous or next picture for determining a second macroblock within the search window which is spatially displaced from the first macroblock by an amount represented by a motion vector at a full pixel resolution level, wherein the comparison of the present picture and the previous or next picture is performed by the encoder using packed operations and wherein the method includes the step of storing each pixel of the first macroblock using two bytes per pixel for avoiding byte misalignment during the packed operations.
Alternatively, the step of storing stores each pixel of the second macroblock using two bytes per pixel.
Preferably, the method further includes the steps of: interpolating the second macroblock to half-pixel resolution to provide a plurality of horizontally, vertically and diagonally interpolated pixels; storing the horizontally, vertically and diagonally interpolated pixels and original pixels of the second macroblock as distinct sequences of pixels; and performing a mean absolute error (MAE) calculation using the stored distinct sequences of pixels to determine the motion vector at half-pixel resolution.
The present invention also provides a processor configured to perform the steps of the above-described methods and provides a motion estimation system for use in video picture encoding, including a processor adapted to perform the steps of the above-described methods.
Advantageously, embodiments of the invention address the problem of byte misalignment by storing the pixels of the first or second macroblock using two bytes instead of one. This requires additional storage space, but the cost of this is outweighed by the gains in efficiency due to being able to effectively perform packed operations. Advantageously, the novel storage sequence of the pixels of the interpolated macroblock eliminates additional processing which would otherwise arise through use of packed operations.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a normal storage structure for interpolated image regions;
Figure 2 illustrates a first possible situation after packed loading of data into a microprocessor;
Figure 3 illustrates a second possible situation after packed loading of data into a microprocessor;
Figures 4A and 4B illustrate an arrangement for avoiding byte misalignment in accordance with a first embodiment of the invention;
Figures 5A and 5B illustrate an arrangement for avoiding byte misalignment in accordance with a second embodiment of the invention;
Figure 6 illustrates a modified storage sequence of pixels of interpolated image regions in
accordance with an embodiment of the invention;
Figure 7 illustrates interpolation of a macroblock.
Further features of the invention will be apparent to those skilled in the art from the following detailed description, by way of example only, of the preferred embodiments of the invention, in conjunction with the drawings.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
For normal encoding methods to which packed operations are applied, it is necessary to detect whether the original loading start point for the searching window is at an odd or even byte boundary in order to address possible byte misalignment during the packed loading. If the loading start point is at an odd byte boundary, the loading start point is moved forward or backward by one byte, as shown in Figures 4A and 4B. To make sure that the differences between the predictive and current macroblocks are calculated in a pixel-to-pixel correspondence way, the loading start point for the current macroblock must also be adjusted.
Instead of adjusting the loading start point, the preferred embodiment stores the current macroblock with two bytes per pixel. Because the pixels in the current macroblock are stored using two bytes each, byte misalignment is avoided. This changed storage paradigm requires double the normal amount of memory to store the current macroblock, but it is considered that the extra memory requirement is a tolerable price (in the order of 256 bytes) for a typical 16 by 16 pixel macroblock.
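A minimal sketch of the two-bytes-per-pixel storage, in plain C rather than any particular DSP's memory layout: each 8-bit luminance value is widened to 16 bits so every pixel starts at an even byte boundary.

```c
#include <stdint.h>
#include <stddef.h>

#define MB_SIZE 16  /* macroblock side length, as in the text */

/* Store the current macroblock with two bytes per pixel: each 8-bit
 * luminance value is zero-extended to 16 bits, so every pixel begins
 * at an even byte boundary and packed loads never misalign.  The extra
 * memory cost is MB_SIZE*MB_SIZE = 256 bytes for a 16x16 macroblock. */
static void widen_macroblock(const uint8_t src[MB_SIZE * MB_SIZE],
                             uint16_t dst[MB_SIZE * MB_SIZE])
{
    for (size_t i = 0; i < MB_SIZE * MB_SIZE; ++i)
        dst[i] = src[i];  /* zero-extend 8 -> 16 bits */
}
```

Widening trades 256 bytes of memory for the removal of all boundary detection and start-point adjustment on the current-macroblock side.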
The decision criterion for determining which macroblock within the search window best matches the current macroblock is based on a Mean Absolute Error (MAE) calculation. The macroblock within the search window which provides the best MAE value is that which determines the motion vector. There may be some additional processing required for pixels at the start and end of a row, since the loading start point has been changed. For
example, if the loading start point is moved forward, the differences between the first pair, and possibly the last pair, of pixels in a row must be calculated separately. The separate processing of the first and possibly last pair of pixels is effectively an overhead cost associated with this embodiment. While there is some loss in computation efficiency as a result of the separate processing, this loss is marginal as the processing is only carried out at most twice for each row of pixels.
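The row-level error accumulation with its separately handled first pair can be modelled as follows. Plain C stands in for the packed path here: two pixels per loop iteration mimic one packed operation, and the function name is illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute errors for one row of pixels when the searching
 * window's loading start point has been moved forward one byte: the
 * first pixel pair falls outside the packed loop and is accumulated
 * separately (the last pair may also need this, not shown).  The MAE
 * is this sum accumulated over the whole macroblock; the candidate
 * with the minimum value determines the motion vector. */
static int row_sae(const uint8_t *cur, const uint8_t *ref, int n)
{
    int sum = 0, i;
    /* overhead: separate processing, carried out at most twice per row */
    sum += abs(cur[0] - ref[0]) + abs(cur[1] - ref[1]);
    for (i = 2; i < n; i += 2)  /* packed path: two pixels per step */
        sum += abs(cur[i] - ref[i]) + abs(cur[i + 1] - ref[i + 1]);
    return sum;
}
```

Because the separate work is bounded at two pairs per row, its cost stays marginal next to the halved cost of the inner loop.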
In an alternative embodiment, the pixels in the searching window are stored with two bytes per pixel, as depicted in Figure 5. Adjustment of the loading start point is not required since no byte misalignment occurs in this arrangement. The size of the increase in memory required for pixel storage in this embodiment depends on the size of the searching window. This may require significant extra memory as the size of the searching window at the first stage of motion estimation is normally large (for example, 46 by 46 pixels) relative to the macroblock. However, in this embodiment no separate processing of the first and possibly last pair of pixels is required (compared to the embodiment described above). At the second stage of motion estimation, the additional memory requirement is not as large, since the searching window is relatively small, as shown in Figure 7 and described below.
To avoid additional processing (e.g., right shift and masking) after each packed loading during the second stage of motion estimation, a novel data storage structure for the original and interpolated pixels is used, as shown in Figure 6. Compared to Figure 1, the original and interpolated pixels are stored consecutively within a distinct sequence. This arrangement obviates the need for masking and/or right shift operations.
In the implementation of the preferred embodiments, a (MB_SIZE+1)*(MB_SIZE+1) square region is interpolated, where MB_SIZE is the length of a macroblock (typically 16 pixels), as shown in Figure 7. The interpolation starts at a (-1,-1) offset from the motion compensated macroblock identified by the integer motion vector obtained during the first stage of motion estimation. The interpolated and original pixels are then stored in separate consecutive sequences, as shown in Figure 6. The pixel sequences may be stored in any
order, for example with the diagonally interpolated pixels stored first, followed by the original pixels, or in the order in which they appear in Figure 6. This interpolation and storage approach not only provides the required data for the second stage of motion estimation but also efficiently implements the MAE calculation via packed operations without additional processing after the packed loadings.
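One possible reconstruction of the interpolation into separate sequences is sketched below. The (a+b+1)/2 rounding is a common choice and an assumption here; the exact rounding rule is fixed by the coding standard in use:

```c
#include <stdint.h>

/* Interpolate an (n+1)x(n+1) original region to half-pixel resolution,
 * writing original, horizontally, vertically and diagonally
 * interpolated pixels into four separate consecutive sequences (the
 * Figure 6 layout) instead of interleaving them one byte per pixel. */
static void interp_separate(const uint8_t *src, int n, /* (n+1)x(n+1) */
                            uint8_t *orig, uint8_t *horiz,
                            uint8_t *vert, uint8_t *diag) /* each n x n */
{
    for (int y = 0; y < n; ++y)
        for (int x = 0; x < n; ++x) {
            const uint8_t *p = src + y * (n + 1) + x;
            orig[y * n + x]  = p[0];
            horiz[y * n + x] = (uint8_t)((p[0] + p[1] + 1) >> 1);
            vert[y * n + x]  = (uint8_t)((p[0] + p[n + 1] + 1) >> 1);
            diag[y * n + x]  = (uint8_t)((p[0] + p[1]
                                        + p[n + 1] + p[n + 2] + 2) >> 2);
        }
}
```

Because each sequence is consecutive, every candidate block can be walked with packed loads alone, with no masking or shifting between them.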
In a typical situation, nine macroblock-sized sequences of interpolated pixel data are used to determine the half pixel motion vector (a, b), where a and b each take the value -1, 0, or 1. Within the nine candidate blocks, the block with the minimum MAE value is selected as the predictive block. The corresponding offset value (a, b) is recorded as the half pixel value for the motion vector and is added to the integer motion vector determined in the first stage to determine the final motion vector.
If the MAE calculation is performed for the half pixel motion vector (-1, -1), for example, it is first necessary to find the loading start point, which is the start of the diagonally interpolated region. Then the packed loading and packed ALU operations are performed for the entire row without the additional processing (e.g., masking and shift) previously described. After finishing one row, the loading point is adjusted, for example to the start of the vertically interpolated region, and the processing of the next row is started. A similar approach is applied in relation to other half pixel motion vectors in the MAE calculations, with the difference being the loading start point, which depends on the values of the half pixel motion vector. For example, the loading start point for vector (1, 1) is the second pixel of the second row of the diagonally interpolated region.
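An illustrative mapping from a half-pixel offset to its loading start point, consistent with the two examples given in the text; the indexing scheme and region layout here are assumptions, not taken from the patent:

```c
#include <stddef.h>

#define MB_SIZE 16

/* Map a half-pixel offset (a, b), with a, b in {-1, 0, 1}, to a loading
 * start point.  A horizontal half-pixel offset selects the horizontally
 * interpolated sequence, a vertical one the vertical sequence, both the
 * diagonal sequence, and (0, 0) the original pixels.  Within the chosen
 * region (assumed MB_SIZE+1 pixels wide), a +1 offset skips one row
 * and/or one pixel, so (-1,-1) starts at the beginning of the diagonal
 * region and (1,1) at the second pixel of its second row. */
static size_t start_offset(int a, int b, const size_t region_base[4])
{
    int seq = (a != 0) + 2 * (b != 0);  /* 0:orig 1:horiz 2:vert 3:diag */
    size_t row = (b == 1) ? 1 : 0;      /* second row for b == +1 */
    size_t col = (a == 1) ? 1 : 0;      /* second pixel for a == +1 */
    return region_base[seq] + row * (MB_SIZE + 1) + col;
}
```

Only the start point changes between the nine candidates; the packed inner loop is identical for all of them.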
Advantageously, with the data storage arrangement modified as shown in Figure 4 or 5, the byte misalignment problem is avoided and, compared to non-packed operations, the required number of clock cycles for motion estimation can be almost halved if packed operations are efficiently used.
Using the data arrangement shown in Figure 6 instead of shown in Figure 1, the number of computations can be reduced during the second stage of motion estimation. Table 1 shows
the computation reduction per call at the macroblock level for the second stage of motion estimation implemented on an ST120 processor.
Table 1 - Comparison of different half pixel motion estimation implementations
If the half pixel motion estimation is also carried out at the block level, for example as suggested in H.263 Annex F, the values in Table 1 are almost doubled. For low bit rate video conferencing applications in which most of the macroblocks are predictive coded, the cycles needed for half pixel motion estimation per second can be roughly calculated as:
Macroblock Number per frame*Frame Rate* Cycles per call
Therefore, the exact computation reduction depends on the frame rate, the frame size, and the prediction algorithm (i.e., whether or not the advanced prediction mode is used, in which half pixel motion estimation is carried out at the block level). As an example, if a quarter common intermediate format (QCIF) video is encoded at 10 frames per second with advanced prediction mode, a reduction of about 2 megacycles per second can be achieved.
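The formula above can be checked against the QCIF example. The roughly 2000 cycles saved per call is inferred from the quoted figures (Table 1's actual values are not reproduced here):

```c
/* Cycles needed for half pixel motion estimation per second:
 *   macroblock number per frame * frame rate * cycles per call
 * QCIF is 176x144 luminance pixels, i.e. (176/16)*(144/16) = 99
 * macroblocks per frame, so the quoted ~2 Mcycle/s reduction at
 * 10 fps implies roughly 2e6 / (99*10) ~= 2020 cycles saved per
 * call at the macroblock level. */
static long cycles_per_second(long mb_per_frame, long fps,
                              long cycles_per_call)
{
    return mb_per_frame * fps * cycles_per_call;
}
```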
Claims
1. A method of motion estimation for use in a video picture encoder, wherein a first macroblock of a present picture is compared with a plurality of macroblocks within a search window of a previous or next picture for determining a second macroblock within the search window which is spatially displaced from the first macroblock by an amount represented by a motion vector at full pixel resolution, wherein the comparison of the present picture and the previous or next picture is performed by the encoder using packed operations and wherein the method includes the step of storing each pixel of the first macroblock using two bytes per pixel for avoiding byte misalignment during the packed operations.
2. A method of motion estimation for use in a video picture encoder, wherein a first macroblock of a present picture is compared with a plurality of macroblocks within a search window of a previous or next picture for determining a second macroblock within the search window which is spatially displaced from the first macroblock by an amount represented by a motion vector at full pixel resolution, wherein the comparison of the present picture and the previous or next picture is performed by the encoder using packed operations and wherein the method includes the step of storing each pixel of the search window using two bytes per pixel for avoiding byte misalignment during the packed operations.
3. A method as described in claim 1 or 2, wherein the method further includes the steps of: interpolating the second macroblock to half-pixel resolution to provide a plurality of horizontally, vertically and diagonally interpolated pixels; storing the horizontally, vertically and diagonally interpolated pixels and original pixels of the second macroblock respectively as distinct sequences of pixels; and performing a mean absolute error (MAE) calculation using the stored distinct sequences of pixels to determine the motion vector at half-pixel resolution.
4. A motion estimation system for use in video picture encoding, including a processor adapted to perform the steps of any one of the preceding claims.
5. A processor adapted to perform the steps of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/SG2001/000096 WO2002096116A1 (en) | 2001-05-18 | 2001-05-18 | Motion estimation using packed operations |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002096116A1 (en) | 2002-11-28 |
Family ID: 20428937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/SG2001/000096 WO2002096116A1 (en) | 2001-05-18 | 2001-05-18 | Motion estimation using packed operations |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2002096116A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013071669A1 (en) * | 2011-11-18 | 2013-05-23 | 杭州海康威视数字技术股份有限公司 | Motion analysis method based on video compression code stream, code stream conversion method and apparatus thereof |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0613304A2 (en) * | 1993-01-27 | 1994-08-31 | General Instrument Corporation Of Delaware | Half-pixel interpolation for a motion compensated digital video system |
WO2001009717A1 (en) * | 1999-08-02 | 2001-02-08 | Morton Steven G | Video digital signal processor chip |
2001-05-18: WO PCT/SG2001/000096 application filed (status: active Application Filing)
Non-Patent Citations (1)
Title |
---|
MARC QUEROL: "Application Note: STi3220 Motion Estimation Processor Codec", SGS-THOMSON MICROELECTRONICS, XP002190909 * |
Similar Documents
Publication | Title |
---|---|
KR100239260B1 | Picture decoder |
US8218635B2 | Systolic-array based systems and methods for performing block matching in motion compensation |
KR100261072B1 | Digital signal processing system |
US7940844B2 | Video encoding and decoding techniques |
US5731850A | Hybrid hierarchial/full-search MPEG encoder motion estimation |
US6859494B2 | Methods and apparatus for sub-pixel motion estimation |
EP0821857B1 | Video decoder apparatus using non-reference frame as an additional prediction source and method therefor |
EP1653744A1 | Non-integer pixel sharing for video encoding |
JP3968712B2 | Motion prediction compensation apparatus and method |
JPH10150666A | Method for compressing digital video data stream, and search processor |
JP3031152B2 | Motion prediction processor and motion prediction device |
WO1997047139A2 | Method and device for decoding coded digital video signals |
US20090232201A1 | Video compression method and apparatus |
US20080123748A1 | Compression circuitry for generating an encoded bitstream from a plurality of video frames |
US20080031335A1 | Motion Detection Device |
EP1514426A2 | Techniques for video encoding and decoding |
US20060140277A1 | Method of decoding digital video and digital video decoder system thereof |
JPH10215457A | Moving image decoding method and device |
US20070153909A1 | Apparatus for image encoding and method thereof |
JP2000175199A | Image processor, image processing method and providing medium |
WO2002096116A1 | Motion estimation using packed operations |
US20020136302A1 | Cascade window searching method and apparatus |
JPH0846977A | Picture compression circuit |
JP2000354245A | Method and device for estimating block movement |
JPH09261661A | Method for forming bidirectional coding picture from two reference pictures |
Legal Events
Code | Title | Description |
---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): JP SG US |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
122 | Ep: pct application non-entry in european phase | |
NENP | Non-entry into the national phase | Ref country code: JP |