WO2009155759A1 - Joint system for frame rate conversion and video compression - Google Patents
- Publication number: WO2009155759A1 (application PCT/CN2008/072609)
- Authority: WO (WIPO/PCT)
- Prior art keywords: motion, module, motion vectors, video stream, processing apparatus
Classifications
- H04N7/014 — Conversion of standards processed at pixel level, using interpolation processes involving motion vectors
- H04N7/0127 — Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter
- H04N19/426 — Video compression/decompression implementations characterised by memory arrangements using memory downsizing methods
- H04N19/436 — Video compression/decompression implementations using parallelised computational arrangements
- H04N19/533 — Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
- H04N19/577 — Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/61 — Transform coding in combination with predictive coding
Definitions
- When the video processing apparatus 300 is switched to the coding mode, the video coding module 335 is enabled, while the motion estimation module 310 and the motion compensation module 320 are configured to perform the data compression procedure.
- the motion estimation module 310 extracts a target frame and a reference frame from the storage module 330, and generates a motion vector and a residue, which are provided to the video coding module 335 and the motion compensation module 320, according to the target frame and the reference frame.
- a block coding circuit 340 in the video coding module 335 then encodes the residue to generate an encoded residue, and transmits the encoded residue to a bit-stream generator 360 and a block decoding circuit 350 in the video coding module 335.
- the bit-stream generator 360 generates the output compressed bit-stream according to the motion vectors and the encoded residue.
- the processed residue, along with the motion vectors and the reference frame, is processed to generate a reconstructed frame, which is stored back into the storage module 330 by the motion compensation module 320.
- the video processing apparatus 300 in this embodiment performs the frame rate conversion and the video coding at different times (it can only operate in one mode at a time), because the motion estimation module 310 and the motion compensation module 320 include only one motion estimation unit and one motion compensation unit, respectively, controlled by a control signal that selects either the conversion mode or the coding mode.
- the motion estimating methodology of the motion estimation module 310 can be different in the conversion mode and the coding mode in order to obtain the best solution.
- a first motion estimating methodology such as 3D Recursive Search (3DRS) may be adopted for frame rate conversion;
- a second motion estimating methodology such as a full search may be adopted for video coding.
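As a concrete illustration of the second methodology, the exhaustive block matching a full search performs can be sketched as follows. This is a simplified Python model with integer-pel displacements and a SAD criterion; the block size, search range, and array layout are illustrative assumptions, not details taken from the disclosure.

```python
import numpy as np

def full_search(cur_block, ref, top, left, search=8):
    """Exhaustive block matching: test every integer displacement in a
    +/-search window and keep the one with the lowest sum of absolute
    differences (SAD)."""
    bh, bw = cur_block.shape
    h, w = ref.shape
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > h or x + bw > w:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + bh, x:x + bw]
            sad = np.abs(cur_block.astype(np.int16) - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad
```

The cost of testing every displacement is why the disclosure reserves the full search for coding, where rate-distortion matters more than throughput.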
- in another exemplary embodiment, shown in FIG. 4, the motion compensation module 420 of the video processing apparatus 400 is provided with two motion compensation units 422 and 424, each of which is in charge of one function.
- the first motion compensation unit 422 is in charge of the frame rate conversion function; therefore, it extracts the target frame and the reference frame from the storage module 430, receives the motion vectors generated by the motion estimation module 410, and generates the output video stream, having a frame rate different from that of the input video stream, according to the frames and the motion vectors.
- the second motion compensation unit 424 is in charge of video coding; therefore, it generates the reconstructed frame according to the processed residue received from the coding module 435, the reference frame, and the motion vectors received from the motion estimation module 410, and stores the reconstructed frame in the storage module 430.
- the video processing apparatus 400 does not require two modes, and therefore may not require switching functionality between two different modes.
- the two functionalities can be simultaneously performed, sharing the motion estimation module 410, the storage interface, and the bandwidth of the storage module 430 (e.g. a DRAM).
- the motion estimation module 410 generates the motion vectors according to a single motion estimating methodology, regardless of whether the motion vectors are for frame rate conversion or video coding purposes, since the two functions may take place at the same time.
- the motion estimation module 410 may adopt the 3DRS methodology because, for the frame rate conversion, the 3DRS methodology is preferred.
- a modified video processing apparatus 500 is shown in FIG. 5.
- the video processing apparatus 500 is able to perform video coding on a current video stream and frame rate conversion on a previously received video stream, so that a display device, such as a TV in a digital video system, can replay or rewind previously received programs.
- the first motion compensation unit 422 and the second motion compensation unit 424 utilize different frames for frame rate conversion and video coding, respectively. Therefore, the motion estimation module 410 does not directly provide the motion vectors to the first motion compensation unit 422, but instead stores the motion vectors into a storage space, from which the first motion compensation unit 422 later retrieves them.
- FIG. 6 is another embodiment of the video processing apparatus that supports instant replay and rewind functions.
- the video processing apparatus 600 further comprises a decoder 670 coupled to the bit-stream generator 460 and the first motion compensation unit 422.
- the bit-stream generator 460 packs the motion vectors received from the motion estimation module 410 into the output compressed bit-stream, and delivers the output compressed bit-stream to the receiving end (not shown) and the decoder 670. After the decoder 670 decodes the motion vectors from the output compressed bit-stream, the motion vectors can be utilized for the next motion judder cancellation, which is performed by the first motion compensation unit 422.
- in the embodiment shown in FIG. 7, the video processing apparatus 700 can also activate the frame rate conversion and the video compression at the same time.
- the first motion estimation unit 712 provides motion vectors to the first motion compensation unit 722, while
- the second motion estimation unit 714 provides motion vectors to the second motion compensation unit 724, and generates a residue for the video coding module 735.
- the two motion estimation units 712 and 714 share essential information (e.g. motion vectors) between each other, thereby reducing the computation amount, and further improving the motion estimation performance.
- in one configuration, the second motion estimation unit 714 receives motion vectors generated by the first motion estimation unit 712 instead of generating the motion vectors itself.
- the second motion estimation unit 714 can refine the motion vectors according to a motion estimating methodology that is different from that used in the first motion estimation unit 712 in order to improve the efficiency and performance.
- the first motion estimation unit 712 generates primary motion vectors according to the 3DRS methodology
- the second motion estimation unit 714 further refines the primary motion vectors from the first motion estimation unit 712 according to the full search methodology with a smaller search range, thereby reducing computation.
- the information shared between the first and second motion estimation units 712 and 714 is not limited to motion vectors, and the primary motion vectors can be generated by the second motion estimation unit 714 and refined by the first motion estimation unit 712.
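The refinement step described above can be sketched as follows. This is a hedged Python model: the primary vector, the small search radius, and the SAD matching criterion are illustrative assumptions; the disclosure does not fix these parameters.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(a.astype(np.int16) - b.astype(np.int16)).sum())

def refine_mv(cur_block, ref, top, left, primary_mv, radius=2):
    """Refine a primary motion vector (e.g. produced by 3DRS) with a
    full search restricted to a small +/-radius window around it."""
    bh, bw = cur_block.shape
    h, w = ref.shape
    py, px = primary_mv
    best_mv, best_sad = primary_mv, float("inf")
    for dy in range(py - radius, py + radius + 1):
        for dx in range(px - radius, px + radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= h - bh and 0 <= x <= w - bw:
                s = sad(cur_block, ref[y:y + bh, x:x + bw])
                if s < best_sad:
                    best_sad, best_mv = s, (dy, dx)
    return best_mv
```

Restricting the window to the neighbourhood of the primary vector is what keeps the computation far below that of an unconstrained full search.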
- the video processing apparatus 700 can be modified to support functions such as instant replay and rewind.
- FIG. 8 and FIG. 9 show diagrams of modified video processing apparatuses 800 and 900, respectively.
- the video processing apparatus 800 stores the motion vectors generated by the motion estimation module 710 (for example, the first motion estimation unit 712) to a storage space or to the storage module 730, and then the first motion compensation unit 722 retrieves proper motion vectors from the storage space or the storage module 730.
- a decoder 970 is added to decode the motion vectors included in the output compressed bit-stream, and provides the motion vectors to the first motion compensation unit 722.
- the motion estimation module may share information, such as motion vectors, between the frame rate conversion and the video coding operations, or share hardware such as a data address generator which extracts frames from the storage module, a block matching (SAD calculation) unit, an on-chip SRAM for caching the search range for block matching, a motion vector generator and storage, or a quarter-pel interpolator able to make motion vectors more precise when the motion vector is not an integer.
- the motion compensation module may share hardware such as a data address generator, an on-chip SRAM for motion compensation, or a quarter-pel interpolator between the frame rate conversion and the video coding.
- the sharing of an I/O interface and first-in-first-out (FIFO) access of the storage module, such as a DRAM, will also benefit the video processing apparatus.
- When the video processing apparatus mentioned above is implemented in a TV product, it may support the frame rate conversion, instant replay, and time-shift applications at low cost with reduced DRAM bandwidth by storing motion vectors into the storage module or another storage device.
- the data rate of the motion vectors is only about 1% of that of the video stream, and therefore will not cause interference or performance degradation.
- When the TV set is in the normal mode, the first motion compensation unit retrieves the current motion vectors to perform the motion judder cancellation; when the TV set is in the delayed playback mode, however, it is controlled to retrieve the stored motion vectors.
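The roughly 1% figure can be sanity-checked with back-of-envelope arithmetic. The frame size, chroma format, block size, and bytes-per-vector below are illustrative assumptions (the disclosure specifies none of them), and the comparison is against raw frame data rather than a compressed stream.

```python
# Hypothetical check of the motion-vector data rate relative to the
# raw video data, assuming 1920x1080 8-bit YUV 4:2:0 frames and one
# 4-byte motion vector per 16x16 block.
width, height = 1920, 1080
frame_bytes = width * height * 3 // 2           # 4:2:0 -> 1.5 B/pixel
blocks = (width // 16) * ((height + 15) // 16)  # 120 x 68 = 8160 blocks
mv_bytes = blocks * 4                           # 2 bytes each for dx, dy
ratio = mv_bytes / frame_bytes
print(f"{mv_bytes} B of MVs vs {frame_bytes} B per frame "
      f"({ratio:.1%} of the raw data)")
```

Under these assumptions the motion vectors amount to about 1% of the per-frame data, consistent with the claim above.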
- the sharing concept proposed in the present invention can further extend to combine other functionalities, such as de-interlacing, encoding, video NR, super resolution, and functions that need motion information generated by the motion estimation and motion compensation.
- the system resource requirement may therefore be reduced.
Abstract
A video processing apparatus includes a storage interface, where information and hardware of a motion estimation module and a motion compensation module are shared between frame rate conversion and video coding operations. The video processing apparatus therefore may perform both the frame rate conversion and video coding operations at the same time or perform them by turns, while requiring fewer resources and a smaller chip area than conventional methods.
Description
JOINT SYSTEM FOR FRAME RATE CONVERSION AND VIDEO COMPRESSION
FIELD OF INVENTION
The present invention relates to a joint system for frame rate conversion and video compression, and more particularly, to a system that shares processes of a motion estimation module and a motion compensation module between frame rate conversion and video compression operations.
BACKGROUND OF THE INVENTION
Please refer to FIG. 1, which is a diagram of a conventional ME/MC frame rate conversion circuit 100 that converts a film, movie or animated source having a sample rate of 24-30 Hz into a display video stream having a sample rate of 50-60 Hz or 100-120 Hz. The frame rate conversion circuit 100 includes a motion estimation circuit 110, a motion compensation circuit 120, and a storage unit such as a DRAM 130. The DRAM 130 temporarily stores input frames, and the motion estimation circuit 110 retrieves two frames (a current frame and a previous frame) from the DRAM 130 and compares them to generate a motion vector indicating the movement of a pixel from the previous frame to the current frame. The motion compensation circuit 120 also retrieves the two frames from the DRAM 130 and processes them together with the motion vector received from the motion estimation circuit 110 to generate a frame to be interpolated between the current frame and the previous frame.
After carrying out the above operations, which are collectively called frame rate conversion with motion judder cancellation (MJC), the output video has a higher frame rate than the input video with reduced judder artifact. The frame rate conversion circuit 100 can therefore correctly interpolate the intermediate frames even when the objects and background in the frames are moving.
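The motion-compensated interpolation described above can be sketched as follows. This is a simplified Python model assuming one integer motion vector per 16x16 block and midpoint averaging along the motion trajectory; real MJC circuits use far more elaborate projection and occlusion handling, so the block size and vector layout here are illustrative assumptions only.

```python
import numpy as np

def interpolate_midpoint(prev, cur, mv, block=16):
    """Build a frame halfway between prev and cur by averaging, for each
    output block, the two motion-shifted source blocks; mv holds one
    integer (dy, dx) vector per block."""
    h, w = prev.shape
    mid = np.zeros_like(prev)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = mv[by // block][bx // block]
            # Fetch from the previous frame half a vector back, and
            # from the current frame half a vector forward, clipped to
            # the frame borders.
            py = np.clip(by - dy // 2, 0, h - block)
            px = np.clip(bx - dx // 2, 0, w - block)
            cy = np.clip(by + dy // 2, 0, h - block)
            cx = np.clip(bx + dx // 2, 0, w - block)
            mid[by:by + block, bx:bx + block] = (
                prev[py:py + block, px:px + block].astype(np.uint16) +
                cur[cy:cy + block, cx:cx + block]) // 2
    return mid
```

With a zero motion field this degenerates to plain frame averaging; non-zero vectors shift the sampling positions so that moving objects land where they should be at the intermediate instant.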
Motion estimation and motion compensation are also utilized in video coding, as shown in FIG. 2. The video encoder 200 is utilized to compress the input video stream by removing the redundancy of the input frames; in other words, the output compressed stream of the video encoder 200 that is transmitted to the receiving end only includes the difference between each two adjacent frames. The receiving end then reconstructs the original frame stream by compensating for the difference.
The video encoder 200 therefore includes a DRAM 230 for temporarily storing input frames, and a motion estimation circuit 210 for retrieving two frames (i.e. an I-frame and a P-frame) from the DRAM 230 and comparing them to generate a residue and a motion vector indicating the difference between the two frames. The residue is then encoded by a block encoding circuit 240 and sent to the bit-stream generator 260 to generate a compressed bit-stream. A block decoding circuit 250 and a motion compensation circuit 220 simulate the operations that the receiving end takes to reconstruct the original frame stream: the block decoding circuit 250 decodes the encoded residue, and the motion compensation circuit 220 generates a reconstructed frame according to the residue generated by the block decoding circuit 250 and the motion vectors generated by the motion estimation circuit 210. The reconstructed frame, which is utilized as the P-frame in the next encoding cycle, is stored into the DRAM 230 before being retrieved by the motion estimation circuit 210.
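The encoder loop just described — motion-compensated prediction, residue coding, and decoder-side reconstruction — can be sketched as follows. This is a simplified Python model: the uniform quantizer stands in for the block encoding/decoding circuits 240 and 250, and the block size, quantization step, and integer motion vectors are illustrative assumptions.

```python
import numpy as np

BLOCK = 16

def predict(ref, mv):
    """Motion-compensated prediction: copy each block of the reference
    frame from its motion-shifted position (integer MVs, clipped)."""
    h, w = ref.shape
    pred = np.empty_like(ref)
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            dy, dx = mv[by // BLOCK][bx // BLOCK]
            sy = min(max(by + dy, 0), h - BLOCK)
            sx = min(max(bx + dx, 0), w - BLOCK)
            pred[by:by + BLOCK, bx:bx + BLOCK] = ref[sy:sy + BLOCK,
                                                     sx:sx + BLOCK]
    return pred

def encode_frame(cur, ref, mv, qstep=4):
    """Return the quantized residue the bit-stream would carry and the
    reconstructed frame rebuilt from it; the reconstruction becomes the
    reference for the next encoding cycle."""
    pred = predict(ref, mv).astype(np.int16)
    residue = cur.astype(np.int16) - pred
    coded = (residue // qstep) * qstep   # stand-in for block (de)coding
    recon = np.clip(pred + coded, 0, 255).astype(np.uint8)
    return coded, recon
```

The key structural point mirrored here is that the encoder reconstructs from the *coded* residue, not the exact one, so its reference frames stay bit-identical to the decoder's.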
However, the data compression and the frame rate conversion operations are always performed independently, which considerably wastes resources and requires a large chip area for the duplicate motion estimation circuits and motion compensation circuits.
SUMMARY OF THE INVENTION
The present invention therefore provides a video processing apparatus that shares a storage interface and the operation of a motion estimation module and a motion compensation module between frame rate conversion and video coding (or data compression) operations. The apparatus may perform both frame rate conversion and video coding at the same time, or perform the operations by turns, while requiring fewer resources and a smaller chip area than conventional methods.
The sharing concept proposed in the present invention may further be implemented to combine other functionalities, such as de-interlacing, encoding, video NR, super resolution, and functions that require motion information generated by the motion estimation, in order to reduce the system resource requirement.
According to one exemplary embodiment of the present invention, a video processing apparatus for performing a video coding operation and a frame rate conversion operation on an input video stream is disclosed. The video processing apparatus comprises a storage module for storing the input video stream comprising a plurality of frames, a video coding module, a motion compensation module, and a motion estimation module. The video coding module encodes the input video stream and generates a compressed bit-stream according to a plurality of motion vectors. The motion compensation module is coupled to the storage module and the video coding module; it performs motion judder cancellation (MJC) on the input video stream to generate an output video stream according to the input video stream and the motion vectors when in a conversion mode, and generates a reconstructed frame according to the input video stream and the motion vectors and stores the reconstructed frame into the storage module when in a coding mode. The motion estimation module is coupled to the storage module, the video coding module, and the motion compensation module; it extracts the input video stream from the storage module and generates the motion vectors according to the input video stream.
According to another exemplary embodiment of the present invention, a video processing method of performing video coding and frame rate conversion on an input video stream is disclosed. The method comprises storing the input video stream comprising a plurality of frames in a storage module; extracting the input video stream from the storage module, and generating a plurality of motion vectors according to the input video stream; encoding the input video stream to generate a compressed bit-stream according to the motion vectors; performing motion judder cancellation (MJC) on the input video stream to generate an output video stream according to the input video stream and the motion vectors; and generating a reconstructed frame according to the input video stream and the motion vectors, and storing the reconstructed frame into the storage module.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a conventional frame rate conversion circuit.
FIG. 2 is a block diagram of a conventional video encoder.
FIG. 3 is a block diagram of a video processing apparatus according to an exemplary embodiment of the present invention.
FIG. 4 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
FIG. 5 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
FIG. 6 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
FIG. 7 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
FIG. 8 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
FIG. 9 is a block diagram of a video processing apparatus according to another exemplary embodiment of the present invention.
DETAILED DESCRIPTION
Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "include, but not limited to ...". Also, the term "couple" is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.

Please refer to FIG. 3, which is a block diagram of a video processing apparatus 300 according to an exemplary embodiment of the present invention. Unlike the conventional systems shown in FIG. 1 and FIG. 2, each of which requires its own motion estimation module, motion compensation module and storage module to perform the frame rate conversion and the video coding respectively, the video processing apparatus 300 provides both the frame rate conversion function and the video coding function while requiring only one motion estimation module 310, one motion compensation module 320, one video coding module 335 and one storage module 330.
The storage module 330 can be a DRAM and stores an input video stream comprising a plurality of frames. In one embodiment, the motion estimation module 310 includes only one motion estimation unit, the motion compensation
module 320 includes only one motion compensation unit, and the video processing apparatus 300 has two modes, a conversion mode and a coding mode.
When the video processing apparatus 300 is switched to the conversion mode, the video coding module 335 is disabled, and the motion estimation module 310 and the motion compensation module 320 are configured to generate an output video stream having a frame rate different from that of the input video stream. For example, the motion estimation module 310 extracts a target frame and a reference frame from the storage module 330, and generates a motion vector according to the target frame and the reference frame. The motion vector is sent to the motion compensation module 320, which also extracts the target frame and the reference frame from the storage module 330 and generates one or more interpolated frames according to the target frame, the reference frame and the motion vector. The output video stream is generated after the motion compensation module 320 interpolates these frames into the input video stream.
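The conversion-mode data path described above can be sketched as follows. The per-pixel averaging along the motion trajectory and all function names are illustrative assumptions; the disclosure does not specify a particular interpolation algorithm, and frames are modelled as one-dimensional pixel lists for brevity.

```python
def motion_compensated_interpolation(reference, target, mv, phase=0.5):
    """Build an intermediate frame between `reference` and `target`.

    A pixel that moves by `mv` positions between the two frames is assumed
    to sit at `phase * mv` along that trajectory in the interpolated frame.
    """
    width = len(reference)
    out = []
    for x in range(width):
        # Sample the reference behind the motion and the target ahead of it,
        # clamping at the frame borders.
        xr = min(max(x - round(mv * phase), 0), width - 1)
        xt = min(max(x + round(mv * (1 - phase)), 0), width - 1)
        out.append((reference[xr] + target[xt]) / 2)
    return out

def double_frame_rate(frames, motion_vectors):
    """Insert one interpolated frame between each pair of input frames."""
    out = [frames[0]]
    for prev, cur, mv in zip(frames, frames[1:], motion_vectors):
        out.append(motion_compensated_interpolation(prev, cur, mv))
        out.append(cur)
    return out
```

Feeding two frames and one motion vector through `double_frame_rate` yields three frames, i.e. a doubled frame rate, with a moving object placed halfway along its trajectory in the inserted frame.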
When the video processing apparatus 300 is in the coding mode, however, the video coding module 335 is enabled, and the motion estimation module 310 and the motion compensation module 320 are configured to perform the data compression procedure. The motion estimation module 310 extracts a target frame and a reference frame from the storage module 330, and outputs a motion vector and a residue to the video coding module 335 and the motion compensation module 320 according to the target frame and the reference frame. A block coding circuit 340 in the video coding module 335 then encodes the residue to generate an encoded residue, and transmits the encoded residue to a bit-stream generator 360 and a block decoding circuit 350 in the video coding module 335. The bit-stream generator 360 generates the output compressed bit-stream according to the motion vectors and the encoded residue. Additionally, after the block decoding circuit 350 decodes the encoded residue, the processed residue, the motion vectors and the reference frame are processed together to generate a reconstructed frame, which is stored back into the storage module 330 by the motion compensation module 320.
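A minimal sketch of this coding-mode loop follows. The plain quantization used as a stand-in for the block coding circuit 340 and block decoding circuit 350, the one-dimensional frames, and all names are assumptions made for illustration, not the disclosed implementation.

```python
QSTEP = 4  # hypothetical quantization step for the block coding stand-in

def predict(reference, mv):
    """Motion-compensated prediction: shift the reference by `mv` pixels."""
    w = len(reference)
    return [reference[min(max(x + mv, 0), w - 1)] for x in range(w)]

def encode_block(target, reference, mv):
    """Form the residue against the prediction and 'code' it by quantizing."""
    prediction = predict(reference, mv)
    residue = [t - p for t, p in zip(target, prediction)]
    return [round(r / QSTEP) for r in residue]       # block coding circuit

def reconstruct(encoded, reference, mv):
    """Decode the residue and add it back to the prediction; the result is
    what a decoder would see, and is stored back as the next reference."""
    prediction = predict(reference, mv)
    decoded = [e * QSTEP for e in encoded]           # block decoding circuit
    return [p + d for p, d in zip(prediction, decoded)]
```

Because the reconstruction uses the decoded (quantized) residue rather than the original one, the stored reference matches the decoder's reference, which is the point of keeping the decoding circuit inside the encoder.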
The video processing apparatus 300 in this embodiment performs the frame rate conversion and the video coding at different times (it can operate in only one mode at a time) because the motion estimation module 310 and the motion compensation module 320 include only one motion estimation unit and one motion compensation unit, respectively, controlled by a control signal that selects either the conversion mode or the coding mode. However, the motion estimating methodology of the motion estimation module 310 can differ between the conversion mode and the coding mode in order to obtain the best result in each: a first motion estimating methodology such as 3D Recursive Search (3DRS) may be adopted for frame rate conversion, and a second motion estimating methodology such as a full search may be adopted for video coding.

FIG. 4 shows a block diagram of a video processing apparatus 400 that can activate the frame rate conversion function and the video coding function at the same time according to one exemplary embodiment of the present invention. Compared with the above embodiment, the motion compensation module 420 of the video processing apparatus 400 is provided with two motion compensation units 422 and 424, each of which is in charge of one function. For example, the first motion compensation unit 422 is in charge of the frame rate conversion function; it extracts the target frame and the reference frame from the storage module 430, receives the motion vectors generated by the motion estimation module 410, and generates the output video stream, having a frame rate different from that of the input video stream, according to the frames and the motion vectors. Meanwhile, the second motion compensation unit 424 is in charge of video coding; it generates the reconstructed frame according to the processed residue received from the coding module 435, the reference frame and the motion vectors received from the motion estimation module 410, and stores the reconstructed frame in the storage module 430.
As the frame rate conversion function and the video coding function are each handled by a dedicated motion compensation unit, the video processing apparatus 400 does not require two modes, and therefore needs no switching between them: the two functions can be performed simultaneously, sharing the motion estimation module and the storage interface. The bandwidth of the storage module 430 (e.g. a DRAM) can be significantly reduced compared to a conventional system that needs two motion estimation units and two motion compensation units to fulfill the frame rate conversion and the video coding. Moreover, in this embodiment, the motion estimation module 410 generates the motion vectors according to a single motion estimating methodology regardless of whether the motion vectors are for frame rate conversion or for video coding, since the two functions may take place at the same time. For example, the motion estimation module 410 may adopt the 3DRS methodology, which is well suited to frame rate conversion.
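The saving can be illustrated with a toy orchestration in which a single estimation pass per frame pair feeds both the interpolation path and the coding path, halving the number of (expensive) estimation calls versus two separate systems. All callables and names here are hypothetical placeholders.

```python
def run_shared(frames, estimate, interpolate, reconstruct):
    """Drive both functions from one shared motion estimation pass.

    `estimate`, `interpolate` and `reconstruct` stand in for the motion
    estimation module and the two motion compensation units, respectively.
    """
    calls = 0
    interpolated, reconstructed = [], []
    for ref, tgt in zip(frames, frames[1:]):
        mv = estimate(ref, tgt)          # single shared estimation pass
        calls += 1
        interpolated.append(interpolate(ref, tgt, mv))   # FRC path
        reconstructed.append(reconstruct(ref, tgt, mv))  # coding path
    return calls, interpolated, reconstructed
```

For N frames, a two-system design would perform 2(N-1) estimation passes (and the corresponding frame fetches from DRAM); the shared design performs N-1.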
In consideration of additional functionalities, such as instant replay and rewind, that a digital video system having the video processing apparatus 400 implemented therein may provide, a modified video processing apparatus 500 is shown in FIG. 5. The video processing apparatus 500 is able to perform video coding on a current video stream and perform frame rate conversion on a previous video stream, so that a display device, such as a TV in the digital video system, can replay or rewind previously received programs. In this embodiment, the first motion compensation unit 422 and the second motion compensation unit 424 utilize different frames for frame rate conversion and video coding, respectively. Therefore, the motion estimation module 410 does not directly provide the motion vectors to the first motion compensation unit 422, but instead stores the motion vectors into a storage space (in FIG. 5 the storage space is allocated in the storage module 430; however, it can be allocated in another storage device), and the first motion compensation unit 422 then retrieves the proper motion vectors from the storage space. In this way, when the instant replay/rewind function is enabled, the first motion compensation unit 422 can obtain motion vectors of a previous input video stream from the storage space to generate the output video stream carrying the previously received programs, while the second motion compensation unit 424, together with the video coding module 435, still generates the output compressed bit-stream representing the current input video stream.

FIG. 6 shows another embodiment of the video processing apparatus that supports the instant replay and rewind functions. The video processing apparatus 600 further comprises a decoder 670 coupled to the bit-stream generator 460 and the first motion compensation unit 422. The bit-stream generator 460 packs the motion vectors received from the motion estimation module 410 into the output compressed bit-stream, and delivers the output compressed bit-stream to the receiving end (not shown) and to the decoder 670. After the decoder 670 decodes the motion vectors from the output compressed bit-stream, the motion vectors can be utilized for the next motion judder cancellation, which is performed by the first motion compensation unit 422.

The following discloses a video processing apparatus according to another embodiment of the present invention. The video processing apparatus 700 shown in FIG. 7 includes two motion estimation units 712 and 714, and two motion compensation units 722 and 724, wherein the first motion compensation unit 722 is in charge of the motion judder cancellation of frame rate conversion, and the second motion compensation unit 724 is in charge of video coding; these motion compensation units are substantially the same as the motion compensation units 422 and 424 disclosed above.
Therefore, the video processing apparatus 700 can also activate the frame rate conversion and the video compression at the same time.
The first motion estimation unit 712 provides motion vectors to the first motion compensation unit 722, while the second motion estimation unit 714 provides motion vectors to the second motion compensation unit 724 and a residue to the video coding module 735. The two motion estimation units 712 and 714 can, however, share essential information (e.g. motion vectors) with each other, thereby reducing the amount of computation and further improving the motion estimation performance.
For example, one motion estimation unit (for example, the second motion estimation unit 714) receives motion vectors generated by the other motion estimation unit (the first motion estimation unit 712) instead of generating the motion vectors itself. The advantages of reduced computation, faster convergence and improved compression efficiency are therefore achieved. After receiving the motion vectors from the first motion estimation unit 712, the second motion estimation unit 714 can refine the motion vectors according to a motion estimating methodology that is different from that used in the first motion estimation unit 712 in order to improve the efficiency and performance. For example, the first motion estimation unit 712 generates primary motion vectors according to the 3DRS methodology, and the second motion estimation unit 714 further refines the primary motion vectors from the first motion estimation unit 712 according to the full search methodology with a smaller search range, thereby reducing computation.
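The two-stage estimation described here can be sketched as follows, with the primary vector simply passed in as a stand-in for a 3DRS result and a sum-of-absolute-differences (SAD) cost driving the full search; the one-dimensional frames and all function names are illustrative assumptions.

```python
def sad(frame, ref, x, mv, block):
    """Sum of absolute differences between a block at `x` in `frame`
    and its candidate match shifted by `mv` in `ref`."""
    return sum(abs(frame[x + i] - ref[x + i + mv]) for i in range(block))

def refine_mv(frame, ref, x, primary_mv, search_range=2, block=4):
    """Full search restricted to [primary_mv - r, primary_mv + r].

    Seeding the search with the primary vector keeps the window small,
    which is the computation saving described in the text.
    """
    best_mv, best_cost = primary_mv, float("inf")
    for mv in range(primary_mv - search_range, primary_mv + search_range + 1):
        if 0 <= x + mv and x + mv + block <= len(ref):
            cost = sad(frame, ref, x, mv, block)
            if cost < best_cost:
                best_mv, best_cost = mv, cost
    return best_mv
```

A full search over the whole frame would evaluate SAD at every candidate offset; seeded with a primary vector, only 2r+1 candidates per block are tested.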
Note that the information shared between the first and second motion estimation units 712 and 714 is not limited to motion vectors, and the primary motion vectors can be generated by the second motion estimation unit 714 and refined by the first motion estimation unit 712.
Similarly, the video processing apparatus 700 can be modified to support functions such as instant replay and rewind. Please refer to FIG. 8 and FIG. 9, which show diagrams of modified video processing apparatuses 800 and 900, respectively. The video processing apparatus 800 stores the motion vectors generated by the motion estimation module 710 (for example, by the first motion estimation unit 712) into a storage space or into the storage module 730, and the first motion compensation unit 722 then retrieves the proper motion vectors from the storage space or the storage module 730. In FIG. 9, a decoder 970 is added to decode the motion vectors included in the output compressed bit-stream and provide them to the first motion compensation unit 722. As these arrangements have already been detailed above, further description is omitted here for brevity.
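The motion-vector storage space used by these replay/rewind embodiments can be modelled as a small bounded per-frame store: vectors are written under their frame index as they are estimated, and the frame-rate-conversion unit reads back the index it is replaying. The structure below is a hypothetical illustration, not the disclosed implementation.

```python
from collections import OrderedDict

class MotionVectorStore:
    """Bounded store of per-frame motion vectors (oldest frames evicted),
    standing in for the storage space allocated in the storage module."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self._mvs = OrderedDict()

    def put(self, frame_index, mvs):
        """Record the vectors estimated for one frame."""
        self._mvs[frame_index] = list(mvs)
        while len(self._mvs) > self.capacity:
            self._mvs.popitem(last=False)   # evict the oldest frame

    def get(self, frame_index):
        """Fetch the vectors for a (possibly past) frame, or None if evicted."""
        return self._mvs.get(frame_index)
```

The capacity bounds how far back the instant replay/rewind function can reach; frames older than the window must fall back to decoding vectors from the compressed bit-stream, as in the FIG. 9 arrangement.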
In summary, in the above-mentioned embodiments, the motion estimation module may share information, such as motion vectors, between the frame rate conversion and the video coding operations, or share hardware such as a data address generator that extracts frames from the storage module, a block matching (SAD calculation) unit, an on-chip SRAM for caching the search range for block matching, a motion vector generator and its storage, or a quarter-pel interpolator that makes motion vectors more precise when a motion vector is not an integer. Similarly, the motion compensation module may share hardware such as a data address generator, an on-chip SRAM for motion compensation, or a quarter-pel interpolator between the frame rate conversion and the video coding. Moreover, sharing an I/O interface and first-in-first-out (FIFO) access of the storage module, such as a DRAM, will also benefit the video processing apparatus.
When the video processing apparatus mentioned above is implemented in a TV product, it may support frame rate conversion, instant replay and time shift applications at low cost and with reduced DRAM bandwidth by storing the motion vectors into the storage module or another storage device. The data rate of the motion vectors is only about 1% of that of the video stream, and therefore will not cause interference or performance degradation. When the TV set is in the normal mode, the first motion compensation unit performs the motion judder cancellation with the current motion vectors; when the TV set is in the delayed playback mode, however, the first motion compensation unit is controlled to retrieve the stored motion vectors.
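The roughly 1% figure quoted above is plausible under typical parameters that the disclosure itself does not state; the arithmetic below assumes 16x16 blocks, one 4-byte vector per block, and 8-bit 4:2:0 video.

```python
# Assumed parameters (not taken from the disclosure):
width, height = 1920, 1080
block = 16                 # one motion vector per 16x16 block
bytes_per_mv = 4           # e.g. two 16-bit vector components

mv_bytes = (width // block) * (height // block) * bytes_per_mv
frame_bytes = width * height * 3 // 2   # luma + 4:2:0 chroma, 8 bits/sample

ratio = mv_bytes / frame_bytes
print(f"{ratio:.1%}")      # prints 1.0%
```

Since both quantities scale with the frame rate, the same ratio holds for data rates, so buffering motion vectors adds only a marginal DRAM load compared with buffering frames.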
Furthermore, the sharing concept proposed in the present invention can be further extended to combine other functionalities, such as de-interlacing, encoding, video noise reduction (NR), super resolution, and other functions that need motion information generated by motion estimation and motion compensation. The system resource requirement may therefore be reduced.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention.
Claims
1. A video processing apparatus for performing a video coding and a frame rate conversion on an input video stream, the video processing apparatus comprising: a storage module, for storing the input video stream comprising a plurality of frames; a video coding module, for encoding the input video stream, generating a compressed bit-stream according to a plurality of motion vectors; a motion compensation module, coupled to the storage module and the video coding module, for performing motion judder cancellation (MJC) on the input video stream to generate an output video stream according to the input video stream and the motion vectors when in a conversion mode; and generating a reconstructed frame according to the input video stream and the motion vectors and storing the reconstructed frame into the storage module when in a coding mode; and a motion estimation module, coupled to the storage module, the video coding module, and the motion compensation module, for extracting the input video stream from the storage module, generating the motion vectors according to the input video stream.
2. The video processing apparatus of claim 1, wherein the motion compensation module comprises only one motion compensation unit, and the motion compensation unit is controlled by a control signal to be switched between the conversion mode and the coding mode.
3. The video processing apparatus of claim 1, wherein the motion estimation module generates the motion vectors according to a first motion estimating methodology when the motion compensation module is in the conversion mode, and generates the motion vectors according to a second motion estimating methodology different from the first motion estimating methodology when the motion compensation module is in the coding mode.
4. The video processing apparatus of claim 1, wherein the motion compensation module comprises: a first motion compensation unit, coupled to the storage module, and the motion estimation module, for extracting the input video stream from the storage module, and performing motion judder cancellation (MJC) on the input video stream to generate the output video stream according to the input video stream and the motion vectors; and a second motion compensation unit, coupled to the storage module, the motion estimation module and the video coding module, for extracting the input video stream from the storage module to generate the reconstructed frame according to the input video stream and the motion vectors received from the motion estimation module, and storing the reconstructed frame into the storage module.
5. The video processing apparatus of claim 4, wherein the motion estimation module generates the motion vectors according to a single motion estimating methodology.
6. The video processing apparatus of claim 4, wherein the first motion compensation unit directly receives the motion vectors from the motion estimation module.
7. The video processing apparatus of claim 4, wherein the motion estimation module further stores the motion vectors into a storage space, and the first motion compensation unit further retrieves the motion vectors therefrom.
8. The video processing apparatus of claim 7, wherein the storage space is allocated in the storage module.
9. The video processing apparatus of claim 4, further comprising a decoder, coupled to the video coding module and the first motion compensation unit, for decoding the compressed bit-stream to obtain the motion vectors included therein, and delivering the motion vectors to the first motion compensation unit.
10. The video processing apparatus of claim 1, wherein the motion estimation module comprises: a first motion estimation unit; and a second motion estimation unit for receiving motion vectors generated by the first motion estimation unit and generating motion vectors according to the motion vectors received from the first motion estimation unit and the input video stream; wherein one of the first and second motion estimation units generates and provides the motion vectors to the motion compensation module for MJC when in the conversion mode, and the other of the first and second motion estimation units generates and provides the motion vectors to the motion compensation module for generating the reconstructed frame when in the coding mode.
11. The video processing apparatus of claim 10, wherein the first motion estimation unit generates motion vectors according to a first motion estimating methodology, and the second motion estimation unit generates motion vectors according to a second motion estimating methodology different from the first motion estimating methodology.
12. The video processing apparatus of claim 10, wherein the first motion estimation unit further stores motion vectors into a storage space, and the second motion estimation unit retrieves stored motion vectors from the storage space.
13. The video processing apparatus of claim 12, wherein the storage space is allocated in the storage module.
14. The video processing apparatus of claim 10, wherein the second motion estimation unit is coupled to the storage module and the motion compensation module, the first motion estimation unit is coupled to the storage module, the video coding module and the motion compensation module, and the video processing apparatus further comprises: a decoder, coupled to the video coding module and the second motion estimation unit, for decoding the compressed bit-stream to obtain the motion vectors included therein, and delivering the motion vectors to the second motion estimation unit.
15. A video processing method of performing a video coding and a frame rate conversion on an input video stream, the method comprising:
(a) storing the input video stream comprising a plurality of frames in a storage module;
(b) extracting the input video stream from the storage module, and generating a plurality of motion vectors according to the input video stream;
(c) encoding the input video stream to generate a compressed bit-stream according to the motion vectors;
(d) performing motion judder cancellation (MJC) on the input video stream to generate an output video stream according to the input video stream and the motion vectors; and
(e) generating a reconstructed frame according to the input video stream and the motion vectors, and storing the reconstructed frame into the storage module.
16. The method of claim 15, wherein the step of generating the motion vectors in step (b) further comprises storing the motion vectors into a storage space, and the step (d) further comprises retrieving the motion vectors from the storage space.
17. The method of claim 16, wherein the storage space is allocated in the storage module.
18. The method of claim 15, wherein the step (d) further comprises decoding the compressed bit-stream to obtain the motion vectors included therein for generation of the output video stream.
19. The method of claim 15, wherein the step (d) further comprises generating motion vectors for generation of the compressed bit-stream according to the input video stream and the motion vectors generated in step (b).
20. The method of claim 19, wherein the step (b) generates the motion vectors according to a first motion estimating methodology, and the step (d) generates motion vectors according to a second motion estimating methodology different from the first motion estimating methodology.
21. The method of claim 15, wherein the step (e) further comprises generating motion vectors for generation of the reconstructed frame according to the input video stream and the motion vectors generated in step (b).
22. The method of claim 21, wherein the step (b) generates the motion vectors according to a first motion estimating methodology, and the step (e) generates motion vectors according to a second motion estimating methodology different from the first motion estimating methodology.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP08874758.9A EP2304931B1 (en) | 2008-06-23 | 2008-10-07 | Joint system for frame rate conversion and video compression |
CN200880130028XA CN102067583B (en) | 2008-06-23 | 2008-10-07 | Joint system for frame rate conversion and video compression |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/143,854 US8284839B2 (en) | 2008-06-23 | 2008-06-23 | Joint system for frame rate conversion and video compression |
US12/143,854 | 2008-06-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2009155759A1 (en) | 2009-12-30 |
Family
ID=41431261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2008/072609 WO2009155759A1 (en) | 2008-06-23 | 2008-10-07 | Joint system for frame rate conversion and video compression |
Country Status (5)
Country | Link |
---|---|
US (1) | US8284839B2 (en) |
EP (1) | EP2304931B1 (en) |
CN (1) | CN102067583B (en) |
TW (1) | TWI519171B (en) |
WO (1) | WO2009155759A1 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8494058B2 (en) | 2008-06-23 | 2013-07-23 | Mediatek Inc. | Video/image processing apparatus with motion estimation sharing, and related method and machine readable medium |
US8284839B2 (en) | 2008-06-23 | 2012-10-09 | Mediatek Inc. | Joint system for frame rate conversion and video compression |
US20120044327A1 (en) * | 2009-05-07 | 2012-02-23 | Shinichi Horita | Device for acquiring stereo image |
US8990693B2 (en) * | 2010-06-22 | 2015-03-24 | Newblue, Inc. | System and method for distributed media personalization |
CN102647559B (en) * | 2012-04-26 | 2016-04-13 | 广州盈可视电子科技有限公司 | A kind of The Cloud Terrace follows the tracks of the method and apparatus recorded |
US8629937B1 (en) * | 2012-07-25 | 2014-01-14 | Vixs Systems, Inc | Motion adaptive filter and deinterlacer and methods for use therewith |
RU2656785C1 (en) * | 2017-08-03 | 2018-06-06 | Самсунг Электроникс Ко., Лтд. | Motion estimation through three-dimensional recursive search (3drs) in real time for frame conversion (frc) |
US10523961B2 (en) | 2017-08-03 | 2019-12-31 | Samsung Electronics Co., Ltd. | Motion estimation method and apparatus for plurality of frames |
US11722635B2 (en) | 2021-06-22 | 2023-08-08 | Samsung Electronics Co., Ltd. | Processing device, electronic device, and method of outputting video |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1175157A (en) * | 1996-05-08 | 1998-03-04 | 德国汤姆逊-布朗特公司 | Method and circuit device of memory optimization processing for composite video frequency band signal |
CN1330493A (en) * | 2000-06-28 | 2002-01-09 | 三星电子株式会社 | Decoder with digital image stability function and image stability method |
US20060002465A1 (en) | 2004-07-01 | 2006-01-05 | Qualcomm Incorporated | Method and apparatus for using frame rate up conversion techniques in scalable video coding |
WO2008027508A2 (en) * | 2006-08-30 | 2008-03-06 | Broadcom Corporation | Framebuffer sharing for video processing |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5247355A (en) * | 1992-06-11 | 1993-09-21 | Northwest Starscan Limited Partnership | Gridlocked method and system for video motion compensation |
US6519287B1 (en) * | 1998-07-13 | 2003-02-11 | Motorola, Inc. | Method and apparatus for encoding and decoding video signals by using storage and retrieval of motion vectors |
WO2000070879A1 (en) * | 1999-05-13 | 2000-11-23 | Stmicroelectronics Asia Pacific Pte Ltd. | Adaptive motion estimator |
JP4724351B2 (en) * | 2002-07-15 | 2011-07-13 | 三菱電機株式会社 | Image encoding apparatus, image encoding method, image decoding apparatus, image decoding method, and communication apparatus |
US20040179599A1 (en) * | 2003-03-13 | 2004-09-16 | Motorola, Inc. | Programmable video motion accelerator method and apparatus |
JP2007525703A (en) * | 2004-01-27 | 2007-09-06 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | Apparatus and method for compensating for image motion |
EP1772017A2 (en) * | 2004-07-20 | 2007-04-11 | Qualcomm Incorporated | Method and apparatus for encoder assisted-frame rate up conversion (ea-fruc) for video compression |
KR100644629B1 (en) * | 2004-09-18 | 2006-11-10 | 삼성전자주식회사 | Method for estimating motion based on hybrid search block matching algorithm and frame-rate converter using thereof |
US7778476B2 (en) * | 2005-10-21 | 2010-08-17 | Maxim Integrated Products, Inc. | System and method for transform coding randomization |
US7705885B2 (en) * | 2006-06-15 | 2010-04-27 | Freescale Semiconductor, Inc. | Image and video motion stabilization system |
WO2009066284A2 (en) * | 2007-11-20 | 2009-05-28 | Ubstream Ltd. | A method and system for compressing digital video streams |
US8284839B2 (en) | 2008-06-23 | 2012-10-09 | Mediatek Inc. | Joint system for frame rate conversion and video compression |
- 2008-06-23 US US12/143,854 patent/US8284839B2/en active Active
- 2008-10-07 CN CN200880130028XA patent/CN102067583B/en not_active Expired - Fee Related
- 2008-10-07 WO PCT/CN2008/072609 patent/WO2009155759A1/en active Application Filing
- 2008-10-07 EP EP08874758.9A patent/EP2304931B1/en not_active Not-in-force
- 2008-11-20 TW TW097144884A patent/TWI519171B/en not_active IP Right Cessation
Non-Patent Citations (1)
Title |
---|
See also references of EP2304931A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP2304931A4 (en) | 2013-01-02 |
US8284839B2 (en) | 2012-10-09 |
TW201002075A (en) | 2010-01-01 |
US20090316785A1 (en) | 2009-12-24 |
EP2304931A1 (en) | 2011-04-06 |
TWI519171B (en) | 2016-01-21 |
CN102067583B (en) | 2012-10-17 |
EP2304931B1 (en) | 2015-10-28 |
CN102067583A (en) | 2011-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8284839B2 (en) | Joint system for frame rate conversion and video compression | |
US8494058B2 (en) | Video/image processing apparatus with motion estimation sharing, and related method and machine readable medium | |
EP0793389B1 (en) | Memory reduction in the MPEG-2 main profile main level decoder | |
US8239766B2 (en) | Multimedia coding techniques for transitional effects | |
US20150023428A1 (en) | Method and device for encoding/decoding video signals using base layer | |
US20120294376A1 (en) | Image decoding device and image encoding device, methods therefor, programs thereof, integrated circuit, and transcoding device | |
US20050105621A1 (en) | Apparatus capable of performing both block-matching motion compensation and global motion compensation and method thereof | |
KR20060105409A (en) | Method for scalably encoding and decoding video signal | |
JPH09247679A (en) | Video encoder in compliance with scalable mpeg2 | |
US7573529B1 (en) | System and method for performing interlaced-to-progressive conversion using interframe motion data | |
US20140177726A1 (en) | Video decoding apparatus, video decoding method, and integrated circuit | |
US7113644B2 (en) | Image coding apparatus and image coding method | |
KR100883604B1 (en) | Method for scalably encoding and decoding video signal | |
KR20080013879A (en) | Method for scalably encoding and decoding video signal | |
KR20080013880A (en) | Method for scalably encoding and decoding video signal | |
JP4326028B2 (en) | Motion prediction method | |
KR100878825B1 (en) | Method for scalably encoding and decoding video signal | |
JPH10136381A (en) | Moving image encoding/decoding device and method therefor | |
JP2005513927A (en) | Method and apparatus for motion compensated temporal interpolation of video sequences | |
US7391469B2 (en) | Method and apparatus for video decoding and de-interlacing | |
KR100327202B1 (en) | Image device and method using memory efficiently | |
KR20060043120A (en) | Method for encoding and decoding video signal | |
US20190075312A1 (en) | Method and apparatus for decoding multi-level video bitstream | |
KR0152780B1 (en) | Image processing apparatus for mpeg | |
JPH0738899A (en) | Image encoding device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200880130028.X Country of ref document: CN |
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 08874758 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWE | Wipo information: entry into national phase |
Ref document number: 2008874758 Country of ref document: EP |