WO2005086981A2 - Methods and apparatuses for compressing digital image data with motion prediction - Google Patents

Methods and apparatuses for compressing digital image data with motion prediction

Info

Publication number
WO2005086981A2
WO2005086981A2, PCT/US2005/008391 (US2005008391W)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
sub
bit stream
motion
block
Prior art date
Application number
PCT/US2005/008391
Other languages
French (fr)
Other versions
WO2005086981A3 (en)
Inventor
Jayaram Ramasastry
Partho Choudhury
Ramesh Prasad
Original Assignee
Sindhara Supermedia, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/076,746 external-priority patent/US20050207495A1/en
Priority claimed from US11/077,106 external-priority patent/US7522774B2/en
Application filed by Sindhara Supermedia, Inc.
Priority to JP2007503104A priority Critical patent/JP2007529184A/en
Priority to EP05725507A priority patent/EP1730846A4/en
Publication of WO2005086981A2 publication Critical patent/WO2005086981A2/en
Publication of WO2005086981A3 publication Critical patent/WO2005086981A3/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/129Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/523Motion estimation or motion compensation with sub-pixel accuracy
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/53Multi-resolution motion estimation; Hierarchical motion estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/533Motion estimation using multistep search, e.g. 2D-log search or one-at-a-time search [OTS]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/57Motion estimation characterised by a search window with variable size or shape
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/619Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding the transform being operated outside the prediction loop
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/64Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/64Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • H04N19/647Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission using significance based coding, e.g. Embedded Zerotrees of Wavelets [EZW] or Set Partitioning in Hierarchical Trees [SPIHT]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/87Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Definitions

  • the present invention relates generally to multimedia applications. More particularly, this invention relates to compressing digital image data with motion prediction.
  • a motion prediction is performed between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component.
  • the motion prediction information of the luminance component is then applied to the chrominance maps.
  • the wavelet coefficients of each frame and the motion prediction information are encoded into a bit stream based on a target transmission rate, where the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm.
  • FIG. 1 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment.
  • Figure 2 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment.
  • Figure 3 is a block diagram illustrating an exemplary network stack according to one embodiment.
  • Figures 4A and 4B are block diagrams illustrating exemplary encoding and decoding systems according to certain embodiments.
  • Figure 5 is a flow diagram illustrating an exemplary encoding process according to one embodiment.
  • Figures 6 and 7 are block diagrams illustrating exemplary pixel maps according to certain embodiments.
  • Figure 8 is a flow diagram illustrating an exemplary encoding process according to an alternative embodiment.
  • Figures 9A-9B and 10A-10B are block diagrams illustrating exemplary encoding and decoding systems with motion prediction according to certain embodiments.
  • Figure 11 is a flow diagram illustrating an exemplary encoding process with motion prediction according to one embodiment.
  • Figures 12-15 are block diagrams illustrating exemplary pixel maps according to certain embodiments.
  • Embodiments of the system are suited for wireless streaming solutions, due to the seamless progressive transmission capability (e.g., across various bandwidths), which helps in the graceful degradation of video quality in the event of a sudden shortfall in channel bandwidth. Moreover, the system also allows for comprehensive intra- as well as inter-frame rate control, thereby allowing for the optimal allocation of bits to each frame, and an optimal distribution of the frame bit budget between the luma and chroma maps. As a result, this helps in improving the perceptual quality of frames that have relatively high levels of detail or motion, while maintaining a minimal threshold on the picture quality of uniform-texture and/or slow-motion sequences within a video clip.
  • Embodiments set forth herein include a stable system to compress and decompress digital audio/video data that is implemented on software and/or hardware platforms. Some advantages of the various embodiments of the invention include, but are not limited to, low battery power consumption, low complexity and low processing load, leading to a more efficient implementation of a commercial audio/video compression/decompression and transmission system.
  • some other advantages include, but are not restricted to, a robust error detection and correction routine that exploits the redundancies in the unique data structure used in the source/arithmetic encoder/decoder of the system, and a smaller search space for predicting motion between two consecutive frames, for a more efficient and faster motion prediction routine.
  • FIG. 1 is a block diagram of one embodiment of an exemplary multimedia streaming system.
  • exemplary system 100 includes server component 101 (also referred to herein as a server suite) communicatively coupled to client components 103-104 (also referred to herein as client suites) over a network 102, which may be a wired network, a wireless network, or a combination of both.
  • a server suite is an amalgamation of several services that provide download-and-playback (D&P), streaming broadcast, and/or peer-to-peer communication services. This server suite is designed to communicate with any third party network protocol stack (as shown in Figure 3).
  • these components of the system may be implemented in the central server, though a lightweight version of the encoder may be incorporated into a handheld platform for peer-to-peer video conferencing applications.
  • the decoder may be implemented in a client-side memory.
  • the server component 101 may be implemented as a plug-in application within a server, such as a Web server.
  • each of the client components 103- 104 may be implemented as a plug-in within a client, such as a wireless station (e.g., cellular phone, a personal digital assistant or PDA).
  • server component 101 includes a data acquisition module 105, an encoder 106, and a decoder 107.
  • the data acquisition module 105 includes a video/audio repository, an imaging device to capture video in real-time, and/or a repository of video/audio clips.
  • an encoder 106 reads the data and entropy/arithmetic encodes it into a byte stream. The encoder 106 may be implemented within a server suite.
  • video/audio services are provided to a client engine (e.g., clients 103-104), which is a product suite encapsulating a network stack implementation (as shown in Figure 3) and a proprietary decoder (e.g., 108-109).
  • This suite can accept a digital payload at various data rates and footprint formats, segregate the audio and video streams, decode each byte stream independently, and display the data in a coherent and real-life manner.
  • encoder module 106 reads raw data in a variety of data formats (which include, but are not limited to, RGB x:y:z, YUV x':y':z', and YCrCb x":y":z", where the letter symbols denote sub-sampling ratios, etc.), and converts them into one single standard format for purposes of standardization and simplicity.
  • the digital information is read frame-wise in a non-interleaved raster format.
  • the encoder unit 106 segregates the audio and video streams prior to actual processing. This is useful since the encoding and decoding mechanisms used for audio and video may be different.
  • the frame data is then fed into a temporary buffer, and transformed into the spatial-frequency domain using a unique set of wavelet filtering operations.
  • the ingenuity in this wavelet transformation lies in its preclusion of extra buffering, and the conversion of computationally complex filtering operations into simple addition/subtraction operations. This makes the wavelet module in this codec more memory-efficient.
  • the source encoder/decoder performs compression of the data by reading the wavelet coefficients of every sub-band of the frame obtained from the previous operation in a unique zigzag fashion, known as a Morton scan (similar to the one shown in Figure 7). This allows the system to arrange the data in an order based on the significance of the wavelet coefficients, and code it in that order.
  • the coding alphabet can be classified into significance, sign and refinement classes in a manner well-known in the art (e.g., JPEG 2000, etc.)
  • the significance, sign and bit plane information of the pixel is coded and transmitted into the byte-stream.
  • the first set of coefficients to be coded thus is the coarsest sub-band in the top-left corner of the sub-band map. Once the coarsest sub- band has been exhausted in this fashion, the coefficients in the finer sub-bands are coded in a similar fashion, based on a unique tree-structure relationship between coefficients in spatially homologous sub-bands.
  • to further exploit the redundancy of the bit stream, it is partitioned into independent logical groups of bits, based on their source in the sub-band tree map and the type of information they represent (e.g., significance, sign or refinement), and is arithmetic coded for further compression. This process achieves results similar to, but superior to, the context-based adaptive binary arithmetic coding (CABAC) technique specified in the H.264 and MPEG4 standards.
  • the temporal redundancy between consecutive frames in a video stream is exploited to reduce the bit count even further, by employing a motion prediction scheme.
  • motion is predicted over the four coarsest sub-bands, and, by employing a type of affine transformation, is predicted in the remaining finer sub-bands using a lower-entropy refinement search.
  • the effective search area for predicting motion in the finer sub-bands is smaller than in the coarser sub-bands, leading to a speed-up in the overall performance of the system, along with a lower bit-rate as compared to similar video compression systems in current use.
  • the video decoder (e.g., decoders 108-109) works in a manner similar to the encoder, with the exception that it does not have the motion prediction feedback loop.
  • the decoder performs essentially the inverse of the operations performed in the encoder.
  • the byte stream is read on a bit-by-bit basis, and the coefficients are updated using a non-linear quantization scheme, based on the context decision. Similar logic applies to the wavelet transformation and arithmetic coding blocks.
  • the updated coefficient map is inverse wavelet transformed using a set of arithmetic lifting operations, which may be the reverse of the operations undertaken in the forward wavelet transform block in the encoder, to create the reconstructed frame.
  • the reconstructed frame is then rendered by a set of native API (application programming interface) calls in the decoder client.
  • the codec suite is made compatible with several popular third party multimedia network protocol suites.
  • the exemplary system can be deployed on a variety of operating systems and environments, in both the hand-held and the PC domains. These include, but are not restricted to: Microsoft® Windows® 9x/Me/XP/NT 4.x/2000, Microsoft® Windows® CE, PocketLinux™ (and its various third party flavors), SymbianOS™ and PalmOS™. It is available on a range of third-party development platforms. These include, but are not limited to, Microsoft® PocketPC™ 200X, Sun Microsystems® J2ME™ MIDP® X.0/CLDC® X.0, Texas Instruments® OMAP™ and Qualcomm® BREW™.
  • embodiments of the invention can be provided as a solution on a wide range of platforms including, but not limited to, Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC) and System-on-Chip (SoC) implementations.
  • FIG. 2 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment.
  • exemplary system 200 includes a server or servers 201 communicatively coupled to one or more clients 202-203 over various types of networks, such as wireless network 204 and/or wired networks 205-206, which may be the same network.
  • server 201 may be implemented as server 101 of Figure 1.
  • Clients 202-203 may be implemented as clients 103-104 of Figure 1.
  • the server platform 201 includes three units, labeled A, B and C. However, it is not so limited; these units may be implemented as a single unit or module. These units can communicate with one another, as well as with external units, to provide all relevant communication and video/audio processing capabilities.
  • Unit C may be an application server, which provides download services for client-side components such as decoder/encoder APIs to facilitate third party support, browser plug-ins, drivers and plug-and-play components.
  • Unit B may be a web services platform. This addresses component reusability and scalability issues by providing COM™, COM+™, EJB™, CORBA™, XML and other related web and/or MMS related services. These components are discrete and encapsulate the data. They minimize system dependencies and reduce interaction to a set of inputs and desired outputs. To use a component, a developer may call its interface. The functionality, once developed, can be used in various applications, hence making the component reusable.
  • Unit A may be an actual network services platform.
  • Unit A provides network services required to transmit encoded data over the wireless network, either in a D&P (Download and Play) or a streaming profile.
  • Unit A also provides support for peer-to-peer (P2P) communications in mobile video conferencing applications, as well as communicates with the wireless service provider to expedite billing and other ancillary issues.
  • a user 203 with unrestricted mobility (such as a person driving a car downtown) is able to access his or her wireless multimedia services using the nearest wireless base station (BS) 209 of the service provider to which he or she subscribes.
  • the connection could be established using a wide range of technologies including, but not limited to, WCDMA [UMTS], IS-95A/B, CDMA 1X/EVDO/EVDV, IS-2000 [CDMA2000], GSM-GPRS-EDGE, AMPS, iDEN/WiDEN, and Wi-MAX.
  • the BS 209 communicates with the mobile telephone switching office (MTSO) 210 of the service provider over a TCP/IP or UDP/IP connection on the wireless WAN 204.
  • the MTSO 210 handles hand-off, call dropping, roaming and other user profile issues.
  • the payload and profile data is sent to the wireless ISP server for processing.
  • the user 202 has limited mobility, for example, within a home or office building (e.g., a LAN controlled by access point/gateway 211).
  • a user sends in a request for a particular service over a short-range wireless connection, which includes, but is not restricted to, a Bluetooth™, Wi-Fi™ (IEEE™ 802.11x), HomeRF, HiperLAN/1 or HiperLAN/2 connection, via an access point (AP) and the corporate gateway 211, to the web gateway of his or her service provider.
  • the ISP communicates with the MTSO 210, to forward the request to the server suite 201. All communications are over a TCP/IP or UDP/IP connection 206.
  • peer-to-peer (P2P) communication is enabled by bypassing the server 201 altogether.
  • all communications, payload transfer and audio/video processing are routed or delegated through the wireless ISP server (e.g., server 207) without any significant load on the server, other than performing the functions of control, assignment, and monitoring.
  • the system capabilities may be classified based on the nature of the services and modes of payload transfer.
  • the user waits for the entire payload (e.g., video/audio clip) to be downloaded onto his or her wireless mobile unit or handset before playing it.
  • Such a service has a large latency period, but can be transported over secure and reliable TCP/IP connections.
  • the payload routing is the same as before, with the exception that it is now transported over a streaming protocol stack (e.g., RTSP/RTP, RTCP, SDP) over a UDP/IP network (e.g., networks 205-206).
  • This ensures that the payload packets are transmitted quickly, though there is a chance of data corruption (e.g., packet loss) due to the unreliable nature of the UDP connection.
  • the payload is routed through a UDP/IP connection, to ensure the live video/audio quality needed for video conferencing applications.
  • the decoder as well as the encoder may be available on hardware, software, or a combination of both.
  • the encoder may be stored in the remote server, which provides the required service over an appropriate connection, while a lightweight software decoder may be stored in the memory of the wireless handheld terminal.
  • the decoder APIs can be downloaded from an application server (e.g., unit A) over an HTTP/FTP-over-TCP/IP connection.
  • FIGS 4A and 4B are data flow diagrams illustrating exemplary encoding and decoding processes through an encoding system and a decoding system respectively, according to certain embodiments of the invention.
  • Figure 5 is a flow diagram illustrating an exemplary process for encoding digital image data according to one embodiment.
  • the exemplary process 500 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both.
  • exemplary process 500 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.
  • the codec works on a raw file format 401 having raw YUV color frame data specified by any of several standard file and frame formats including, but not limited to, high definition television (HDTV), standard definition television (SDTV), extended video graphics array (XVGA), standard video graphics array (SVGA), video graphics array (VGA), common interchange format (CIF), quarter common interchange format (QCIF) and sub-quarter interchange format (S-QCIF).
  • the pixel data is stored in a byte format, which is read in serial fashion and stored in a 1-dimensional array.
  • each image, henceforth called the 'frame', includes three maps.
  • Each of these maps may be designated to either store one primary color component, or in a more asymmetric scheme, one map stores the luminance information (also referred to as a luma map or Y map), while the other two maps store the chrominance information (also referred to as chroma maps or Cb/Cr maps).
  • the Y map stores the luma information of the frame, while the chroma information is stored in two quadrature components.
  • the system is designed to work on a wide variety of chrominance sub-sampling formats (which includes, but is not restricted to, the 4:1:1 color format).
  • the dimensions of the chroma maps are an integral fraction of the dimensions of the luma map, along both the cardinal directions.
  • the pixel data is stored in a byte format in the raw payload file, which is read in serial fashion and stored in a set of 3 one-dimensional arrays, one for each map.
  • the 2-dimensional co-ordinate of each pixel in the image map is mapped onto the indexing system of the 1-dimensional array representing the current color map.
  • the actual 1-dimensional index is divided by the width of the frame, to obtain the "row number".
  • a modulo operation on the 1-dimensional index gives a remainder that is the "column number" of the corresponding pixel.
  • the pixel coefficient values are scaled up by shifting the absolute value of each pixel coefficient by a predetermined factor (e.g., factor of 64). This increases the dynamic range of the pixel coefficient values, thereby allowing for a finer approximation of the reconstructed frame during decoding.
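  • As an illustration of the index arithmetic and scale-up step described above, the following minimal sketch (with illustrative names; the factor of 64 corresponds to a left shift of six bits, and sign handling is simplified) shows both operations:

```python
def index_to_coordinates(index, frame_width):
    """Map a 1-D array index to a (row, column) position in the pixel map."""
    row = index // frame_width      # integer division yields the "row number"
    column = index % frame_width    # the remainder is the "column number"
    return row, column

def scale_up(coefficients, shift=6):
    """Widen the dynamic range by shifting each value up (2**6 = 64)."""
    return [c << shift for c in coefficients]

# Example: index 650 in a 176-pixel-wide QCIF luma map -> row 3, column 122.
print(index_to_coordinates(650, 176))
```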
  • the next operation in the encoding process is to transform the payload from the spatial domain into the multi-resolution domain.
  • a set of forward and backward wavelet filter 402 coefficients with an integral number of taps is used for the low pass and high pass filtering operations (e.g., operation 501).
  • the filter coefficients may be modified in such a way that all operations can be done in-place, without a need for buffering the pixel values in a separate area in memory. This saves valuable volatile-memory space and processing time.
  • the wavelet filtering operations on each image pixel are performed in-place, and the resultant coefficients maintain their relative position in the sub-band map.
  • the entire wavelet decomposition process is split into its horizontal and vertical components, with no particular preference to the order in which the cardinal orientation of filtering may be chosen. Due to the unique lifting nature of the filtering process, the complex mathematical computations involved in the filtering process are reduced to a set of fast, low-complexity addition and/or subtraction operations.
  • a row or column (also referred to as a row vector or a column vector) is chosen, depending on the direction of the current filtering process.
  • a low pass filtering operation is performed on every pixel that has an even index relative to the first pixel in the current vector
  • a high pass filtering operation is performed on every pixel that has an odd index relative to the first pixel of the same vector.
  • the pixel whose wavelet coefficient is to be determined, along with a set of pixels symmetrically arranged around it in its neighborhood along the current orientation of filtering, in the current vector is chosen.
  • wavelet filters with four vanishing moments are applied to the pixels.
  • four tap high pass and low pass filters are used for the transformation.
  • the high pass filter combines the four neighboring even pixels, weighted and normalized as shown below, to filter an odd pixel: [9*(X[k-1] + X[k+1]) - (X[k-3] + X[k+3]) + 16] / 32
  • the low pass filter combines the four neighboring odd pixels, weighted and normalized as shown below, to filter an even pixel: [9*(X[k-1] + X[k+1]) - (X[k-3] + X[k+3]) + 8] / 16, where X[k] is the pixel at position k.
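  • The two filter taps can be written as a short sketch (an assumed rendering of the formulas above; the symmetric edge extension for indices that fall outside the vector is an assumption, since the text does not specify boundary handling at this point):

```python
def _mirror(x, i):
    """Symmetric edge extension: reflect out-of-range indices back into x."""
    n = len(x)
    if i < 0:
        i = -i
    if i >= n:
        i = 2 * n - 2 - i
    return x[i]

def high_pass(x, k):
    """Odd-index pixel from its four even neighbours:
    [9*(X[k-1]+X[k+1]) - (X[k-3]+X[k+3]) + 16] / 32."""
    return (9 * (_mirror(x, k - 1) + _mirror(x, k + 1))
            - (_mirror(x, k - 3) + _mirror(x, k + 3)) + 16) // 32

def low_pass(x, k):
    """Even-index pixel from its four odd neighbours:
    [9*(X[k-1]+X[k+1]) - (X[k-3]+X[k+3]) + 8] / 16."""
    return (9 * (_mirror(x, k - 1) + _mirror(x, k + 1))
            - (_mirror(x, k - 3) + _mirror(x, k + 3)) + 8) // 16
```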
  • the wavelet filtering operation is viewed as a dyadic hierarchical filtering process, meaning that the end-result of a single iteration of the filtering process on the image is to decimate it into four sub-bands, or sub-images, each with half the dimensions, in both directions, of the original image.
  • the four sub-bands, or sub-images, are labeled HHk, HLk, LHk and LLk (where k is the level of decomposition, beginning with one for the finest level), depending on their spatial orientation relative to the original image.
  • the entire filtering process is repeated on only the LLk sub-image obtained in the previous pass, to obtain four sub-images called HHk-1, LHk-1, HLk-1 and LLk-1, which have half the dimensions of LLk, as explained above.
  • This process is repeated for as many levels of decomposition as is desired, or until the LL sub-band has been reduced to a block which is one pixel across, in which case, further decimation is no longer possible.
  • the filtering is split into horizontal and vertical filtering operations. For the vertical filtering mode, each column (e.g., vertical vector) in the three maps is processed one at a time.
  • the current vector is copied into a temporary vector, which is split into two halves. Pixels located in the even-numbered memory locations (such as 0, 2, 4, ...) of the temporary vector are low pass filtered using the low pass filter (LPF) coefficients, while the pixels in the odd-numbered memory locations (such as 1, 3, 5, ...) of the temporary vector are high pass filtered using the high pass filter (HPF) coefficients.
  • the result of each filtering operation (high-pass or low-pass) is stored in the current vector, such that all the results of the low-pass filtering operations are stored in the upper half of the vector (e.g., the top half of a vertical vector, or the left half of a horizontal vector, depending on the current orientation of filtering), while the results from the high-pass filtering operations are stored in the lower half of the column (e.g., the bottom half of a vertical vector, or the right half of a horizontal vector).
  • the pixel data is decimated in a single iteration.
  • the entire process is repeated for all the columns and rows in the current map and frame.
  • the entire process is repeated for all three maps for the current frame, to obtain the wavelet transformed image.
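  • A sketch of one complete decomposition iteration, following the arrangement described above (low-pass results in the upper half of each vector, high-pass results in the lower half); it reuses the low_pass/high_pass helpers from the previous sketch and, for clarity, operates on a copy rather than strictly in place:

```python
def filter_vector(vec):
    """One filtering pass over a single row or column vector."""
    half = len(vec) // 2
    out = [0] * len(vec)
    for k in range(len(vec)):
        if k % 2 == 0:
            out[k // 2] = low_pass(vec, k)          # upper half: approximation
        else:
            out[half + k // 2] = high_pass(vec, k)  # lower half: detail
    return out

def decompose_once(frame):
    """One dyadic level: filter every row, then every column, of a 2-D map
    (list of lists), yielding the LL/HL/LH/HH quadrant arrangement."""
    rows = [filter_vector(r) for r in frame]
    cols = list(map(list, zip(*rows)))   # transpose to filter columns
    cols = [filter_vector(c) for c in cols]
    return list(map(list, zip(*cols)))   # transpose back
```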
  • the bootstrapped source entropy and arithmetic coding process 403 of the wavelet map is also referred to as channel coding (e.g., operation 502).
  • the arithmetic coding exploits the intimate relationships between spatially homologous blocks within the sub-band tree structure generated in the wavelet transformation 402 described above.
  • the data in the wavelet map is encoded by representing the significance (e.g., with respect to a variable-size quantization threshold), sign and bit plane information of the pixels using a single bit alphabet.
  • the bit stream is encoded in an embedded form, meaning that all the relevant information of a single pixel at a particular quantization threshold is transmitted as a continuous stream of bits.
  • the quantization threshold depends on the number of bits used to represent the wavelet coefficients. In this embodiment, sixteen bits are used for representing the coefficients. Hence, for the first pass, the quantization threshold is set, for example, at 0x8000. After a single pass, the threshold is lowered, and the pixels are encoded in the same or similar order as before, until substantially all the pixels have been processed. This ensures that all pixels are progressively coded and transmitted in the bit stream.
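  • A minimal sketch (hypothetical driver code) of this pass structure, with 16-bit coefficients and the first threshold at 0x8000:

```python
def bit_plane_passes(coefficients, code_pass):
    """Run the progressive passes: halve the threshold after each pass
    until every bit plane of the 16-bit coefficients has been coded."""
    threshold = 0x8000                      # most significant bit plane
    while threshold >= 1:
        code_pass(coefficients, threshold)  # significance/sign/refinement coding
        threshold >>= 1                     # lower the quantization threshold
```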
  • the entropy coded bit stream is further compressed by passing the outputted bit through a context based adaptive arithmetic encoder 404 (also referred to as a channel encoder), as shown as operation 503.
  • This context based adaptive binary arithmetic coder encodes the bit information depending on the probability of occurrence of a predetermined set of bits immediately preceding the current bit.
  • the context in which the current bit is encoded depends on the nature of the information represented by the bit (significance, sign or bit plane information) and the location of the coefficient being coded in the hierarchical tree structure.
  • the concept of a CABAC is similar in principle to the one specified in the ITU-T SG16 WP3 Q.6 (VCEG) Rec. H.264 and ISO/IEC JTC 1/SC 29/WG 11 (MPEG) Rec. 14496-10 (MPEG4 part 10). The difference lies in the context modeling, estimation and adaptation of probabilities.
  • the coefficients in this embodiment have different statistical characteristics.
  • the CABAC-type entropy coder, as specified in the embodiment, is designed to exploit these characteristics to the maximum.
  • the context is an n-bit data structure with a dynamic range of 0 to 2^n. With every new bit coded, the context variable assigned to the current bit is updated, based on a probability estimation table (PET).
  • the system uses (9 x m) context variables for each frame - for three bit classes over three spatial orientation trees, and all sub-bands over m levels of decomposition.
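  • One plausible packing of these (9 x m) context variables is sketched below; the index order is an assumption, since the text specifies only the total count (three bit classes, three spatial orientation trees, m decomposition levels):

```python
SIGNIFICANCE, SIGN, REFINEMENT = 0, 1, 2   # the three bit classes
HL_TREE, LH_TREE, HH_TREE = 0, 1, 2        # the three spatial orientation trees

def context_index(bit_class, tree, level, m):
    """Return a unique context index in [0, 9*m) for the current bit."""
    return (bit_class * 3 + tree) * m + level
```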
  • the decoder which may reside in the client, may be implemented similar to the exemplary encoder 400 of Figure 4A, but in a reversed order as shown in Figure 4B.
  • FIG. 6 is a diagram illustrating an exemplary pixel map for encoding processing according to one embodiment.
  • the root of the tree structure may be made up of the set of all the pixels in the coarsest sub-band, LL, and this set is labeled H.
  • the pixels in set H are grouped in sets of 2x2, or quads.
  • each quad in set H (e.g., block 601) has four pixels, with all but the top-left member 602 of every quad having four descendants (e.g., blocks 603-605) in the spatially homologous next finer level of decomposition.
  • the top-right pixel in a quad has four descendant pixels 604 (in a 2x2 format) in the next finer sub-band with the same spatial orientation (HLk-1 in this case).
  • the relative location of the descendants is related to the spatial orientation of the tree root.
  • the first generation descendants (henceforth labeled as offspring) of the top-right pixel in the top-left quad of set H are the top-left 2x2 quad in HLk-1 (e.g., block 604).
  • the offspring of the bottom-right pixel in any quad of set H lie in spatially homologous positions in the HHk-1 sub-band, while the descendants of the bottom-left pixel in any quad of set H lie in spatially homologous positions in the LHk-1 sub-band (e.g., block 603).
  • Descendants beyond the first generation of pixels, and sets (including quads) thereof, are generally labeled as grandchildren coefficients, for example, blocks 606-611 as shown in Figure 6.
  • a unique data structure records the order in which the coefficients are encoded.
  • Three dynamically linked data structures, or queues, are maintained for this purpose, labeled as insignificant pixel queue (IPQ), insignificant set queue (ISQ) and significant pixel queue (SPQ).
  • each queue is implemented as a dynamic data structure, which includes, but is not restricted to, a doubly linked list or a stack array structure, where each node stores information about the pixel such as coordinates, bit plane number when the pixel becomes significant and type of ISQ list.
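  • A sketch of such a node and the three queues (field names are illustrative; the text specifies coordinates, the bit plane at which the pixel became significant, and the ISQ entry type):

```python
from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class QueueNode:
    row: int
    col: int
    significant_plane: Optional[int] = None  # bit plane where pixel became significant
    isq_type: Optional[str] = None           # 'alpha' for D(T), 'beta' for L(T)

ipq: deque = deque()  # insignificant pixel queue (IPQ)
isq: deque = deque()  # insignificant set queue (ISQ)
spq: deque = deque()  # significant pixel queue (SPQ)
```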
  • three types of sets of transform coefficients are defined to partition the pixels and their descendant trees. However, more or fewer sets may be implemented.
  • the set D(T) is the set of all descendants of a pixel, or an arbitrary set, T, thereof. This includes direct descendants (e.g., offspring such as blocks 603-605) as well as grandchildren coefficients (e.g., blocks 606-608).
  • the set O(T) is defined as the set of all first generation, or direct, descendants of a pixel, or an arbitrary set, T, thereof (e.g., blocks 603-605).
  • two types of ISQ entries may be defined. ISQ entries of type α represent the set D(T); ISQ entries of type β represent the set L(T), where L(T) = D(T) minus O(T) (i.e., the descendants beyond the first generation).
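  • The generic dyadic parent-child rule behind these sets can be sketched as follows (the root-quad exceptions in set H follow the description above; this helper shows only the general case):

```python
def offspring(r, c):
    """O(T) for a single coefficient: its four direct descendants, which
    occupy the 2x2 quad at (2r, 2c) one decomposition level finer."""
    return [(2 * r, 2 * c), (2 * r, 2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]

def descendants(r, c, levels):
    """D(T): offspring plus all further generations, descending 'levels'
    more decomposition levels. L(T) is then D(T) minus O(T)."""
    result, frontier = [], [(r, c)]
    for _ in range(levels):
        frontier = [q for p in frontier for q in offspring(*p)]
        result.extend(frontier)
    return result
```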
  • a binary metric used extensively in the encoding process is the significance function, S n (T).
  • the significance function gives an output of one if the largest wavelet coefficient in the set T is larger than the current quantization threshold level (e.g., the quantization threshold in the current iteration), and gives an output of zero otherwise.
  • the significance function may be defined as follows: Sn(T) = 1 if the magnitude of the largest wavelet coefficient in the set T is greater than or equal to 2^n, and Sn(T) = 0 otherwise.
  • T is the set of pixels whose significance is to be measured against the current threshold, n.
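  • As a sketch, the significance test reads:

```python
def significance(coeff_map, T, n):
    """S_n(T): 1 if any coefficient in the set T of (row, col) coordinates
    meets or exceeds the current quantization threshold 2**n, else 0."""
    return 1 if max(abs(coeff_map[r][c]) for (r, c) in T) >= 2 ** n else 0
```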
  • FIG. 8 is a flow diagram illustrating an exemplary encoding process according to one embodiment.
  • the exemplary process 800 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both.
  • exemplary process 800 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.
  • the first phase in the encoding process of the encoder (also referred to as the initialization pass) is the determination and transmission (as a sequence of 8 bits, in binary format) of the number of passes the encoder has to iterate through (block 801).
  • the number of iterations is less than or equal to the number of bit-planes of the largest wavelet coefficient in the current map.
  • the number of iterations to code all the bits in a single map is determined by the number of quantization levels. In one embodiment, this is determined using a formula that may be defined as follows: ni = ceil(log2(|wmax|))
  • wmax is the largest wavelet coefficient in the current map. This number is transmitted (without context) into the byte stream. The coding process is then iterated ni times over the entire map.
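  • As a sketch, the pass count can be computed as follows (assuming the formula as reconstructed above; note that int.bit_length(wmax) would give the same result except when wmax is an exact power of two):

```python
import math

def num_passes(wavelet_map):
    """ni = ceil(log2(|wmax|)): the number of coding iterations."""
    w_max = max(abs(c) for row in wavelet_map for c in row)
    return max(1, math.ceil(math.log2(w_max))) if w_max > 0 else 1
```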
  • the IPQ is populated with all the pixels in set H.
  • the ISQ is populated with all the pixels in set H that have descendants (i.e., in set H, all the pixels in every quad except the top-left one).
  • the SPQ is kept empty and is filled gradually as pixels become significant against the current quantization threshold.
  • all the pixels in the IPQ are sorted to determine which ones have become significant with respect to a current quantization threshold.
  • the value of the significance function for the current pixel is determined, and the value is sent out as the output in the form of a single bit.
  • if the value of the significance function in the previous operation was one, the sign bit of the pixel entry is sent as the output in the form of a single bit.
  • the output of the sign is 1 if the entry is positive and 0 if the entry is negative.
  • the significance of the set D(T) is transmitted as a single bit. If this output bit is one (e.g., the entry has one or more significant descendants), a similar test is performed for the direct (e.g., first generation) descendants, or offspring of the entry. For all four offspring of the entry (defined by set O(T)), according to one embodiment, two operations are performed. First, the significance of the offspring pixel is determined and transmitted. As a second operation, if the offspring pixel is significant, the sign of the ISQ entry is transmitted.
  • type ⁇ e.g., the class of entries that represents all the descendants of the pixel across all generations
  • a value of one is transmitted if the entry is positive, or a value of zero is transmitted if the entry is negative.
  • the entry is then deleted from the ISQ and appended to the SPQ. If, however, the offspring pixel is insignificant, the offspring pixel is removed from the ISQ, and appended to the IPQ.
  • the current ISQ entry is retained depending on the depth of the descendant hierarchy. If the entry has no descendants beyond the immediate offspring (i.e., L(T) is empty), the entry is purged from the ISQ. If, however, descendants for the current set exist beyond the first generation, the entry is removed from its current position in the ISQ, and appended to the end of the ISQ as an entry of type β (block 805).
  • the significance test is performed on the set L(T). For every entry in the ISQ of type β, the significance of the set L(T) is tested (e.g., using the significance function) and transmitted as a single bit. If there exist one or more significant pixels in the set L(T), all four offspring of the current ISQ entry are appended to the ISQ as type α entries at block 806, to be processed in future passes. The current entry in the ISQ is then purged from the queue at block 807.
  • the final phase in the coding process is referred to as the refinement pass.
  • at the end of the sorting pass, all the pixels (or sets thereof) that have become significant against the current quantization threshold level up to the current iteration are removed from the IPQ and appended to the SPQ.
  • the iteration number "n" when the entry was appended to the queue (and the corresponding coefficient became significant against the current quantization threshold level), is recorded along with the co-ordinate information.
  • the n-th most significant bit is transmitted.
  • the output of the entropy coder may be passed through a CABAC-type processor.
  • the embedded output stream of the entropy coder has been designed in a way, such that the compression is optimized by segregating the bit stream based on the context in which the particular bit has been coded.
  • the bit stream includes the bits representing the binary decisions made during the coding. The bits corresponding to the same decisions are segregated and coded separately.
  • since the wavelet transformed coefficients are arranged such that coefficients with identical characteristics are grouped together, the decisions made on the coefficients in a group are expected to be similar or identical. Hence, the resulting bit stream has longer runs of identical bits, making it more suitable for compression and achieving a more optimal level of compression.
  • the wavelet coefficients "w" have a unique spatial correlation with one another, depending on which sub-band and tree it may belong to. Particularly, such a close correlation exists between the pixels of a single sub-band at a particular level, though the level of correlation weakens across pixels of different sub-bands in the same or different trees and levels. Also note that there is a run- length based correlation between bits that have the similar syntactic relationship.
  • some bits in the embedded stream represent sign information for a particular pixel, while others represent significance information. For example, a value of one in this case denotes that the pixel currently being processed is significant with respect to the current quantization threshold, while a zero value denotes otherwise.
  • a third and final class of bits represent refinement bits, which encode the actual quantization error information.
  • each bit in the output stream may be classified based on the nature of the information it represents (3 types) or its location in the sub-band tree map (3*nl + 1 possible locations, where nl is the number of levels of decomposition). This gives rise to 3 x (3*nl + 1) possible contexts in which a bit can exist, and a unique context is used to code an arbitrary bit.
  • context variables act as an interface between the output of the entropy coder and the binary arithmetic coder.
  • each context variable is an 8-bit memory location, which updates its value one bit at a time, as additional coded bits are outputted.
  • the wavelet map may be split into blocks of size 32 x 32 (in pixels), and each such block is source coded independent of all other blocks in the wavelet map.
  • for each wavelet map, if the dimensions of the map are not a multiple of 32 in either direction, a set of columns and/or rows is padded with zeros such that the new dimensions of the map are a multiple of 32 in both directions.
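  • For example, the padded dimensions can be computed as follows:

```python
def pad_to_block_multiple(width, height, block=32):
    """Grow each dimension to the next multiple of 'block' so the map
    splits evenly into block x block tiles."""
    return width + (-width) % block, height + (-height) % block

print(pad_to_block_multiple(176, 144))  # QCIF -> (192, 160)
```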
  • the coefficients in each such block are arranged in the hierarchical Mallat format.
  • the number of levels of decomposition may be arbitrary. In one embodiment, the number is five, so that the coarsest sub-band in Mallat format of each block is one pixel across.
  • the coarsest band is constructed by amalgamating the six coarsest sub-bands in the Mallat format.
  • the bands are numbered in a zigzag manner, similar to the sequence shown in Figure 7.
  • the coarsest band is labeled as band 0, while the next three bands (HL, LH and HH orientations, in that order) are labeled as bands 1, 2 and 3 respectively, and so on.
  • an additional data structure, known as a stripe, may be used to represent a set of 4 x 4 coefficients.
  • each of bands 0, 1, 2 and 3 is made up of one such stripe.
  • Bands in the second and third level of decomposition are made of four and sixteen stripes each.
  • quantization thresholds are assigned to all coefficients in band 0 (coarsest), as well as all finer bands. There exists a linear progressive relationship between the thresholds assigned to the various coefficients and bands in the wavelet map. The values of the thresholds are arbitrary, arrived at through conjecture and rigorous experimentation.
  • the top-left (coarsest) sub-band (which is a part of the band 0) is assigned a particular threshold value (labeled x), while the top-right and bottom-left sub-bands of the same level of decomposition (also part of band 0) are assigned a threshold of 2x, and the threshold for the bottom-right sub-band is 4x.
  • the threshold for the top-right and bottom-left sub-bands is the same as the threshold value of the bottom-right sub-band of the previous (coarser) level, while the bottom- right sub-band of the current (finer) level has a threshold that is double that value.
  • this process is applied to the assignment of threshold values for all consecutive bands, numbered 0 through 9, in the current block.
  • the initial thresholds for the four coarsest pixels in the top-left corner of band 0 are set at 4000h, 8000h, 8000h and 10000h (h denotes a number in hexadecimal notation).
  • the four-pixel quartet in the top-right corner of band 0 is assigned a threshold of 10000h.
  • the quartets in the bottom-left and bottom-right corners of band 0 are assigned thresholds of 10000h and 20000h respectively.
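  • A sketch of the doubling rule, reproducing the example values above (the per-level layout is an assumed simplification; 0x4000 = 4000h):

```python
def band_thresholds(levels, x=0x4000):
    """(LL, HL, LH, HH) thresholds per level: HL/LH inherit the coarser
    level's HH threshold, and each finer HH threshold doubles it."""
    thresholds = [(x, 2 * x, 2 * x, 4 * x)]   # coarsest level
    hh = 4 * x
    for _ in range(1, levels):
        hl = lh = hh
        hh = 2 * hh
        thresholds.append((None, hl, lh, hh))  # LL exists only at the coarsest level
    return thresholds

for level, t in enumerate(band_thresholds(3)):
    print(level, [hex(v) if v is not None else None for v in t])
# 0 ['0x4000', '0x8000', '0x8000', '0x10000']
# 1 [None, '0x10000', '0x10000', '0x20000']
# 2 [None, '0x20000', '0x20000', '0x40000']
```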
  • the coding scheme includes four passes, labeled 0 to 3.
  • in pass 0, in one embodiment, the decision on the significance of the current band is considered.
  • the current band is marked as significant. An extra bit (e.g., 1) is transmitted to the output stream to represent a significant band. If the current band has already been marked as significant, then no further action is necessary.
  • each stripe is a set of 4 x 4 pixels, and each set of 2 x 2 pixels in the stripe has a hierarchical parent-child relationship with a homologous pixel in the previous coarser sub-band with the same orientation.
  • each stripe has a parent-child hierarchical relationship with a 2 x 2 quad that is homologous in its spatial orientation in the previous coarser sub-band (see Fig. 11).
  • a stripe is designated as significant if its 2 x 2 quad parent (as explained above) is also significant, or the band within which the stripe resides has been marked as significant (in pass 0).
  • a parent quad is marked as significant if one or more of the coefficients in the quad is above the current threshold level for the band in which the quad resides.
  • the significance information of individual pixels in the current stripe, along with their sign information, is considered.
  • the number of pixels in the current stripe that are significant is recorded. This information is used to determine which context variable is to be used to code the significance information of the pixels in the current stripe (see the discussion on CABAC above).
  • if the current coefficient is significant, a binary 1 is transmitted, followed by a single bit for the sign of that coefficient (1 for a positive coefficient, or a 0 for a negative coefficient). If the current coefficient is insignificant, a 0 is transmitted, and its sign need not be checked. This test is performed on all 16 pixels in the current stripe, and is repeated over all the stripes in the current band, and for all bands in the current block of the wavelet map.
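  • An illustrative sketch of this pass over a single 4 x 4 stripe (the 'emit' callback stands in for the context-coded bit output described above):

```python
def pass2_stripe(stripe, threshold, emit):
    """Significance and sign coding for one 4x4 stripe of coefficients."""
    for row in stripe:                         # 4 rows of 4 coefficients
        for coeff in row:
            if abs(coeff) >= threshold:
                emit(1)                        # significant
                emit(1 if coeff >= 0 else 0)   # sign: 1 positive, 0 negative
            else:
                emit(0)                        # insignificant; no sign bit
```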
  • in pass 3, the refinement information for each pixel in the current block is transmitted. For every band, each pixel is compared against the threshold level for the particular band and stripe. If the absolute value of the coefficient is above the threshold level for the current band and stripe, then a 1 (bit) is transmitted, else a 0 is transmitted.
  • the first three passes are nested within each other for the current block, band and stripe.
  • pass 0 is performed on every band in the current block, with the bands numbered sequentially in a zigzag fashion, and tested in that order.
  • pass 1 is performed on all the stripes in the band in a raster scan fashion.
  • pass 2 is performed on every coefficient of the current stripe, also in raster scan mode.
  • Pass 3 is performed on all the coefficients of the block, without consideration to the sequence of bands or stripes within the bands.
  • a fast and efficient motion prediction scheme is made to take optimal advantage of the temporal redundancy inherent in the video stream.
  • the spatial shift in the wavelet coefficient's location is tracked using an innovative, fast and accurate motion prediction routine, in order to exploit the temporal redundancy between the wavelet coefficients of homologous sub-bands in successive frames in a video clip.
  • every sub-band, or sub-image, in the entire wavelet map for each frame in the video clip represents a sub-sampled and decimated version of the original image.
  • a feedback loop is introduced in the linear signal flow path.
  • FIGS 9A-9B and 10A-10B are block diagrams illustrating exemplary encoding and decoding processes according to certain embodiments of the invention.
  • the overall motion in the original image is tracked by following the motion of homologous blocks of pixels in every sub-band of consecutive frames.
  • motion is tracked only in the luma (Y) map, while the same motion prediction information is used in the two chroma (Cr and Cb) maps. This works relatively well since it can be assumed that chroma information follows changes in the luma map fairly assiduously.
  • a full-fledged search of the entire search space is performed only in the four coarsest sub-bands as shown in Figure 6, while this information is scaled and refined using a set of affine transformations, for example, in the six finer sub-bands. This saves a considerable amount of bandwidth, due to the smaller number of bits that now need to be coded and transmitted to represent the motion information, without any significant loss of fidelity.
  • current frames that do not need to be predictively coded for temporal redundancies are labeled as intra-coded frames (I-frames).
  • Frames that are coded using information from previously coded frames are called predicted frames (P-frames).
  • the luma (Y) map of the current frame may be encoded using the arithmetic coding I/II scheme with a target bit rate.
  • once the bit budget is exhausted (e.g., the number of bits encoded that will be transmitted within a period of time determined by the target bit rate), or all the bit-planes have been coded, the coding is stopped, and a similar reverse procedure (called inverse arithmetic coding I/II) is executed to recover the (lossy) version of the luma (Y) component of the wavelet map of the current frame.
  • the version of arithmetic coding to be used here is similar or the same as the version used in the forward entropy coder described above.
• Figure 11 is a flow diagram illustrating an exemplary process for motion prediction according to one embodiment.
• the exemplary process 1100 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both.
• exemplary process 1100 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.
  • the recovered wavelet map is buffered as the reference frame, for use as a reference for the next frame in the sequence (block 1101).
• the second frame is read and decomposed using the n-level wavelet decomposition filter-bank, to generate a new current frame.
• a unique search-and-match algorithm is performed on the wavelet map to keep track of pixels, and sets thereof, which have changed their location due to general motion in the video sequence.
• the search algorithm is referred to as motion estimation (ME), while the match algorithm is referred to as motion compensation (MC).
  • a lower threshold is set, to determine which coefficient values need to be tracked for motion, and eventually compensated for.
  • most coefficients in the finer sub-bands are automatically quantized to zero, while most of the coefficients in the coarsest sub-bands are typically not quantized to zero.
• it makes sense to determine the largest coefficient in the intermediate (level 2) sub-bands during the encoding process, which is quantized down to zero during the lossy reconstruction process, and use that as a lower threshold (also referred to as loThres), as sketched below.
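A minimal sketch of how loThres might be derived, assuming the level 2 sub-bands are numpy arrays and that a simple deadzone quantizer with step qstep stands in for the codec's actual lossy reconstruction (both assumptions; the quantizer is not spelled out here):

```python
import numpy as np

def lower_threshold(level2_bands, qstep):
    """Return loThres: the largest level 2 coefficient magnitude that the
    stand-in deadzone quantizer (step `qstep`) would quantize down to zero."""
    lo = 0.0
    for band in level2_bands:            # each band is a 2-D numpy array
        mags = np.abs(band)
        zeroed = mags[mags < qstep]      # these reconstruct as zero
        if zeroed.size:
            lo = max(lo, float(zeroed.max()))
    return lo
```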
• a traditional search-and-match is performed on the four coarsest sub-bands of the wavelet maps of the reference and current frames (block 1102).
• the motion prediction routine performed on these sub-bands involves a simple block search-and-match algorithm on homologous blocks of pixels. This operation identifies the blocks where motion has occurred. The amount of motion is determined and compensated for. This reduces the amount of information that is required to be transmitted, hence leading to better compression.
• a block neighborhood is defined around the block of pixels in the reference map, whose motion is to be estimated (which is called the reference block), as shown in Figure 12.
• the depth of the neighborhood around the pixel block is usually set equal to k and l respectively, though a slightly lower value (e.g., k-1 and l-1) performs equally well.
  • the neighborhood region spills over outside the sub-band.
  • an edge extension zone is used to create the block neighborhood.
• a mirroring scheme is used to create the edge extension zone.
  • columns of pixels in the neighborhood zone are filled with pixels in the same column along the horizontal edge of the block, in a reverse order.
  • the pixel in the neighborhood zone closest to the edge is filled with the value of the pixel directly abutting it in the same column, and inside the block.
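The mirrored edge extension described in the last three items can be written compactly with numpy, whose 'symmetric' padding mode reproduces exactly this fill (the abutting pixel first, then moving inwards in reverse order). A sketch, assuming 2-D numpy sub-bands:

```python
import numpy as np

def block_neighborhood(sub_band, top, left, h, w, depth):
    """Cut the (h + 2*depth) x (w + 2*depth) neighborhood around the block at
    (top, left); where the neighborhood spills over the sub-band edge, the
    edge extension zone is created by mirroring rows/columns of the sub-band
    back into the zone."""
    padded = np.pad(sub_band, depth, mode="symmetric")
    # Inside the padded map the block's top-left corner shifts by `depth`,
    # so the neighborhood starts at the original (top, left).
    return padded[top:top + h + 2 * depth, left:left + w + 2 * depth]
```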
• the block of pixels in the current as well as the reference frames that are in the same relative position in homologous sub-bands are used for the ME routine.
• the block of pixels in the current frame which is to be matched is called the current block.
• the region encompassed by the block neighborhood around the reference block can be viewed as being made up of several blocks having the same dimensions as the reference (or current) block.
• the metric used to measure the objective numerical difference between the current and any block of the same size in the neighborhood of the reference block is the popular L1, or mean absolute error (MAE), metric.
• a block of pixels with the same dimensions as the current block is identified within the neighborhood zone.
• the difference between the absolute values of corresponding pixels in the two blocks is computed and summed. This process is repeated for all such possible blocks of pixels within the neighborhood region, including the reference block itself (block 1104).
• One important aspect of the search technique is the order in which the search takes place. Rather than using a traditional raster scan, according to one embodiment, an innovative outward coil technique is used.
• the first block in the current neighborhood in the current sub-band of the reference frame to be matched with the current block (of the homologous sub-band in the current frame) is the reference block itself.
• once the reference block has been tested, all the blocks which are at a one-pixel offset on all sides of the reference block are tested. After the first iteration, all blocks that are at a two-pixel offset from the reference block are tested. In this fashion, the search space progressively moves outwards, until all the blocks in the current neighborhood have been tested.
• the particular block within the neighborhood region that possesses the minimum MAE is of special interest to the current system (also referred to as a matching block). This is the block of pixels in the reference (previous) frame, which is closest in terms of absolute difference to the current block of pixels in the current frame. A sketch of this search follows this item.
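A sketch of the outward-coil search with the MAE metric, assuming the neighborhood was produced by the edge-extension helper above, and that ties are resolved in favor of the earlier (inner) candidate so the reference position is preferred when costs are equal (an assumption):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error (L1) between two equal-sized pixel blocks."""
    return float(np.mean(np.abs(a.astype(np.int64) - b.astype(np.int64))))

def coil_offsets(depth):
    """Offsets in outward-coil order: the reference position first, then all
    positions one pixel out on every side, then two pixels out, and so on."""
    yield (0, 0)
    for r in range(1, depth + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if max(abs(dy), abs(dx)) == r:  # ring at Chebyshev distance r
                    yield (dy, dx)

def find_matching_block(current_block, neighborhood, depth):
    """Return the motion vector (dy, dx) of the minimum-MAE matching block."""
    h, w = current_block.shape
    best_mv, best_cost = (0, 0), float("inf")
    for dy, dx in coil_offsets(depth):
        cand = neighborhood[depth + dy:depth + dy + h,
                            depth + dx:depth + dx + w]
        cost = mae(current_block, cand)
        if cost < best_cost:
            best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```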
• a unique data structure, also referred to as a motion vector (MV), is utilized.
• the MV of the block being tested contains information on the relative displacement between the reference block (e.g., a block of a previous or future frame) and the matching block (in the reference frame).
• the top-left corner of each block is chosen as the point of reference to track matching blocks.
• the relative shift between the coordinates of the top-left corner of the reference block and that of the matching block is stored in the motion vector data structure.
• the motion vectors in the LL k sub-band are labeled V 1 o, while the motion vectors in the three other coarsest sub-bands are labeled V 2 o, where o is the orientation (HL, LH and HH), as shown in Figure 12.
• the data is transmitted without context through a baseline binary arithmetic compression algorithm (also referred to herein as the 'pass through mode').
  • a hierarchical order is followed while transmitting motion information, especially the motion vector data structure.
• Motion vector information, both absolute (from coarser levels) and refined (from finer levels), according to one embodiment, has a hierarchical structure.
• the motion vectors corresponding to blocks that share a parent-child relationship along the same spatial orientation tree have some degree of correlation, and hence may be transmitted using the same context variable.
• the pixel values of the matching block in the reference frame are subtracted from the homologous block in the current frame, and the result of each operation is used to overwrite the corresponding pixel in the current block.
• This difference block, also referred to as the compensated block, replaces the current block in the current frame.
• This process is referred to as motion compensation (MC).
  • the previously defined lower threshold (loThres) is used to perform motion compensation only on such coefficients.
  • the compensated coefficient may be quantized down to zero. This ensures that only those coefficients that make some significant contribution to the overall fidelity of the reconstructed frame are allowed to contribute to the final bit rate.
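A sketch of this thresholded compensation; the treatment of coefficients at or below loThres (left unchanged here, on the grounds that they quantize to zero downstream) is an assumption:

```python
import numpy as np

def compensate_block(current_block, matching_block, lo_thres):
    """Difference only those coefficients of the current block whose magnitude
    exceeds loThres; the remaining coefficients are passed through untouched
    (an assumption -- the lossy quantizer removes them anyway)."""
    cur = current_block.astype(np.int64)
    diff = cur - matching_block.astype(np.int64)
    return np.where(np.abs(cur) > lo_thres, diff, cur)
```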
• the above ME/MC process is repeated over all 2 x 2 blocks in the four coarsest sub-bands of the current and reference wavelet maps.
• a refinement motion prediction scheme may be implemented using an affine transformation over the motion vectors corresponding to the homologous blocks in the coarsest sub-bands, and applying a regular search routine over a limited area in the region around the displaced reference block as shown in Figure 12.
  • the relative position of the reference block in the finer sub-bands is closely related to the reference blocks in the coarsest sub-bands.
• the descendants of the top-left 2 x 2 block of pixels in the HL 3 sub-band include the 4 x 4 block of pixels in the top-left corner of HL 2, and the 8 x 8 block of pixels in the top-left corner of HL 1, as shown in Figure 6.
  • the size of a reference block along both dimensions is twice that of a homologous reference block in the previously coarser sub-band.
• the size of a motion vector in the finer sub-band may be assumed to be twice that of the motion vector in a homologous coarser sub-band. This provides a very coarse approximation of the spatial shift of the pixels in the reference block in the sub-band. To further refine this approximation and track the motion of pixels in finer sub-bands more accurately, according to one embodiment, a refined-search-and-match routine is performed on reference blocks in the finer sub-bands.
• the dimensions of the reference block depend upon the level of the sub-band where the reference block resides.
• reference blocks in level 2 are of size 4 x 4, while those in level 3 are of size 8 x 8, and so on.
• the size of a reference block along both directions is twice that of the reference block in the immediately preceding (coarser) sub-band.
  • a block with the same or similar dimensions as the reference block in a particular level and shifted by a certain amount along both cardinal directions is identified.
  • the amount of displacement depends on the level where the reference block resides, as shown in Figure 12.
• the approximate displacement is 2 * V k °, where V k ° is the motion vector for a homologous reference block in the coarsest (level 1) sub-band.
  • the new reference block is displaced by 2 * V k ° from the original reference block.
• a search region, which is identical to the neighborhood zone around the reference block defined earlier, is defined around the new reference block, along with edge extension if the block happens to abut the sub-band edge.
  • the depth of the neighborhood zone depends on the level of decomposition. In one embodiment, it has been set at 4 pixels for level 2 sub-bands, and 8 for level 3 sub-bands, and so on.
• the refined-search-and-match routine is implemented in a manner that is similar or identical to the search-and-match routine for the level 1 (coarsest) sub-bands, as described above.
• the (resultant) corrected motion vector, V k °, pointing to the net displacement of the matching block, is given by adding the approximate (scaled) motion vector, 2 * V k-1 °, and the refinement vector, Δ k °.
• the approximate motion vector for the next finer level (to account for the doubling of the dimensions of the reference block) is given by 2 * V k °.
  • a block that is displaced from the original reference block by the approximate motion vector is then used as the new reference block.
• the depth of the neighborhood zone around this block is set at twice that used in the immediately coarser level.
• the new refined motion vector thus obtained is transmitted in a manner similar to that of coarser levels (see Fig. 12). A sketch of the coarse-to-fine chain follows this item.
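The coarse-to-fine chain can be summarized as follows. This sketch treats the refined search at each level as a hypothetical callback (refine_search) wrapping the search-and-match routine with that level's neighborhood depth (4 at level 2, 8 at level 3, and so on); vector arithmetic on (dy, dx) tuples is all the chain itself needs:

```python
def refine_motion_vectors(coarse_mv, refine_search, levels):
    """coarse_mv: corrected vector from the full search in the coarsest level.
    refine_search(level, approx_mv) -> refinement vector (dy, dx) found by a
    limited search around the block displaced by approx_mv (hypothetical).
    Returns the corrected vector at every level, coarsest first."""
    mv = coarse_mv
    vectors = [mv]
    for level in range(2, levels + 1):
        approx = (2 * mv[0], 2 * mv[1])        # scale: block dims double
        delta = refine_search(level, approx)   # small refinement search
        mv = (approx[0] + delta[0], approx[1] + delta[1])  # corrected vector
        vectors.append(mv)
    return vectors
```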
  • the motion compensation (MC) routine for the refined motion prediction algorithm performed on the finer sub-bands is similar or identical to the process outlined for the coarsest sub-bands.
• the matching block, pointed to by the refined motion vector, is subtracted pixel-by-pixel from the current block (in the homologous position in the current frame), and the difference is overwritten in the location occupied by the current block. This block is now called the compensated block (as described above for coarser sub-bands).
  • the new frame is called the compensated frame.
• This compensated (difference) frame is also called the predicted frame.
  • the bit stream is transmitted over the transmission channel (e.g., blocks 403-405 of Figure 4A).
• the source coding and motion compensation feedback loop for predicted frames is similar to the process employed for intra-frames, with some minor modifications. It is well known that the statistical distribution of coefficient values in a predicted frame is different from the one found in intra-coded frames. In the case of intra-coded frames, the wavelet filter ensures superior energy compaction, so that a majority of the energy is concentrated in the four coarsest sub-bands. Throughout, the data retains the non-deterministic statistical properties of real-time visual signals, such as video sequences. But in the case of predicted frames, only the spatially variant difference values of the pixels are stored, and these coefficients lack the entropy of a real video clip. Hence, the superior energy compaction of the predicted wavelet map cannot be taken for granted.
  • the coarsest sub-band has the largest mean and variance of coefficient values, and these statistics decrease along a logarithmic curve towards finer levels.
  • Such a "downhill” contour maintains the high level of energy compaction in the wavelet map.
  • This "top-heavy" distribution helps in the high coding efficiency and gain of the source coder.
  • the first and second statistical moments of these sub-bands are not so intimately related in predicted wavelet maps.
  • the wavelet coefficients of the finer sub-bands in a predicted map may be scaled down from their original values.
  • this process of scaling is reversed in the decoding process.
• In one embodiment, scaling factors of 8, 16 and 32 are used for the finest sub-bands (other than the LL k sub-band) along a particular tree orientation for a three-level decomposition, as sketched below.
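A sketch of this scaling, assuming integer numpy sub-bands and that the factors 8, 16 and 32 are assigned finest level first (the exact pairing of factor to level is an assumption); the decoder multiplies the same factors back:

```python
import numpy as np

def scale_predicted_map(bands_by_level, factors=(8, 16, 32)):
    """bands_by_level[i]: the three oriented sub-bands (HL, LH, HH) of one
    decomposition level of a predicted map, finest level first. Each band is
    scaled down in place by its level's factor (floor division, a
    simplification of whatever rounding the real codec uses)."""
    for level_bands, factor in zip(bands_by_level, factors):
        for band in level_bands:
            np.floor_divide(band, factor, out=band)
    return bands_by_level
```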
  • a group-of-frames is defined as a temporally contiguous set of frames, beginning with an intra-coded frame, and succeeded by predicted (B or P or otherwise) frames.
  • an intra-coded frame signals the beginning of a new GOF.
  • An important facet of rate control is to ensure that intra-coded frames are introduced only when it is needed, due to their inherently higher coding rates.
• the two events that warrant the introduction of intra-coded frames are a fall in the average frame PSNR below acceptable levels and/or a change in scene in a video clip. Due to the accurate motion prediction routine used by the system, the average PSNR of the frame is less likely to fall below a previously accepted threshold (thereby ensuring good subjective quality throughout the entire video sequence).
  • the absolute difference of homologous pixels in the LL k sub-band is computed and compared with respect to a threshold.
• This threshold is determined through experimentation on a wide range of video clips. In a particular embodiment, a value of 500 is suitable for most purposes. This absolute differencing operation is performed on all coefficients of the coarsest sub-band, and a counter keeps track of the number of cases where the value of the absolute difference exceeds the threshold.
• If the number of pixels for which the absolute difference exceeds the threshold is greater than or equal to a predetermined level, it can be assumed that there has been such a drastic change in the scenery in the video frame as to warrant the introduction of an intra-coded frame, thereby marking the end of the current GOF, and the beginning of a new one.
  • the numeric level hereby labeled as the scene change factor (SCF) that determines a scene change is a matter of experimentation.
  • a value of 50 is suitable for most cases.
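Putting the two experimentally determined constants together, a sketch of the scene-change test (the per-pixel threshold of 500 and the SCF of 50 are the values quoted above):

```python
import numpy as np

def is_scene_change(ll_ref, ll_cur, pixel_thres=500, scf=50):
    """Count coefficients of the coarsest (LLk) sub-band whose absolute
    difference between the reference and current frames exceeds pixel_thres;
    a count at or above the scene change factor (SCF) ends the current GOF
    and starts a new one with an intra-coded frame."""
    diff = np.abs(ll_cur.astype(np.int64) - ll_ref.astype(np.int64))
    return int(np.count_nonzero(diff > pixel_thres)) >= scf
```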
  • a technique is employed to ensure that only those matching blocks (within a sub-band) that satisfy certain minimum and maximum threshold requirements are compensated and coded. This technique is called adaptive threshold.
• the first block to be compared with the current block is the reference block.
• the MAE of each candidate block is compared with the MAE of the reference block against a threshold. If the difference in the values of the MAE of these two blocks is less than a threshold value, the match is discarded, and the reference block continues to be regarded as the best match.
• the threshold value may be determined by experimentation, and is different for different levels of the wavelet tree structure. At the coarser levels (higher sub-bands) the coefficients are average values, while at the finer levels (lower sub-bands) the coefficients are difference values. Average values are larger than difference values.
• for the coarser sub-bands, the threshold value is therefore higher than for other sub-bands. All the sub-bands at a given decomposition level have the same quantization value, and the value reduces as we go down the decomposition levels.
• two energies are then compared: the energy of the current block in the current frame, and the energy of the compensated block obtained by differencing homologous pixels of the current block in the current frame and the matching block in the reference frame.
  • the energy in this case is a simple first order metric obtained by summing the coefficient values of the particular compensated block.
• if the energy of the compensated block is not lower, the compensated block is discarded and the current block is used in its place in the compensated (residual) frame.
• the value of the current threshold level may be determined through extensive experimentation, and is different for the various levels of the wavelet pyramid. A sketch of this energy test follows this item.
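A sketch of the energy test; the first-order energy is taken here as the sum of coefficient magnitudes, which is one plausible reading of "summing the coefficient values" above:

```python
import numpy as np

def keep_compensated(current_block, compensated_block):
    """Accept the compensated (difference) block only when its first-order
    energy is lower than that of the original current block; otherwise the
    current block itself goes into the compensated (residual) frame."""
    e_cur = float(np.abs(current_block).sum())
    e_comp = float(np.abs(compensated_block).sum())
    return e_comp < e_cur
```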
• the motion prediction routine used in certain embodiments is referred to herein as bi-directional multi-resolution motion prediction (B-MRMP).
  • motion is estimated from a previous as well as succeeding frame.
• the temporal offset between past, current and future frames used for motion prediction is a matter of conjecture.
  • a temporal offset of one is usually applied for best results.
  • frames are read and wavelet transformed in pairs. In such a scenario, three popular sequence modes are possible.
• the first frame in the pair is the bi-directionally predicted frame, where each block in each sub-band of this frame, which undergoes the motion prediction routine, is tested against a homologous block in both a previously coded (reference) frame, as well as a future (P or otherwise) frame.
  • the frame data is read and wavelet transformed in the natural order.
  • the (succeeding) P frame is motion predicted before the B frame.
  • the P frame is predicted by applying the motion prediction routine using the second frame of the last pair of frames (e.g., the reference frame).
  • the frame is then reconstructed and compensated using the motion prediction techniques, to recover a lossy version of the frame.
• Each block in the B frame is now motion predicted with homologous blocks from both the (past) reference frame as well as the (future) P frame. If estimation/compensation with the reference block from the reference frame gives a lower-energy compensated block, the particular block is compensated using the reference block of the (past) reference frame; otherwise, compensation is carried out using the reference block of the (future) P frame.
  • the decision to use one of the two frames (past reference or future P) for compensation is based on the frame used for this purpose in the parent blocks in the four coarsest sub-bands.
• While recording and transmitting the motion information of the B frame, an array stores the identity of the frame (past reference or future P) used in the compensation process, using a 2-bit alphabet. This information for all blocks in the frame is transmitted with context over the channel prior to other motion information.
• the advantage of using B frames is that they do not need compensation and reconstruction in the motion prediction feedback loop, since they are less likely to be used as reference frames to predict future frames. Thus, this routine passes through the feedback reconstruction loop in the encoding process for only half of the non-intra-coded frames, compared to other systems, thereby saving a considerable amount of processing time.
  • the first frame in the pair is predictive coded using the second frame of the previous pair of frames as reference.
  • the intra-coded frame in the latter part of this pair is used as reference for the next pair of the frames.
  • the first frame is an intra-coded frame and is used as reference for the (unidirectional) motion prediction of the second frame in the pair.
  • the second (P) frame is reassigned as the new reference frame for the next pair of frames.
• the motion prediction is performed using a single predicted frame, also referred to as uni-directional multi-resolution motion prediction (U-MRMP mode).
  • the motion compensation (MC) scheme may be replaced with a motion block superposition (MBS).
  • the motion estimation is performed as described above.
• the arithmetic encoding scheme is highly inefficient in coding predicted (error) maps (B and P). Due to the skewed probability distribution of coefficients in B and P frames, they do not satisfy the top-heavy tree structure assumptions made in the case of arithmetic coding. This results in several of the large coefficients being interspersed in the finer sub-bands, causing the iterative mechanism of arithmetic coding to loop through several bit planes before these isolated coefficients have been coded for higher fidelity.
• one way to resolve this problem is to avoid working on error maps altogether.
  • the arbitrary GOF size is replaced by a GOF of fixed size.
  • the number of frames in the GOF may be equal to the number of frames per second (e.g., a new GOF every second).
  • a new GOF is defined from this new I frame.
• the coefficient values of the current block are replaced with the homologous pixels of the matching block in the reference frame. This saves time by not computing the difference of the two blocks, and also maintains the general statistics of an intra-coded frame. In effect, this results in the blocks of the first intra-coded frame in the current GOF being moved around within a limited region, like a jig-saw puzzle, with the motion being represented using only the corresponding motion vectors, as sketched below.
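A sketch of motion block superposition on 2-D numpy maps; bounds checking is left to the caller, and (dy, dx) is the block's motion vector:

```python
def superpose_block(current, reference, mv, top, left, h, w):
    """Motion block superposition (MBS): instead of differencing, overwrite
    the current block with the matching block from the reference frame, so
    the change is described by motion vectors alone ('jig-saw puzzle')."""
    dy, dx = mv
    current[top:top + h, left:left + w] = \
        reference[top + dy:top + dy + h, left + dx:left + dx + w]
```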
• a threshold, also referred to as the motion information factor (MIF), may be used to decide on the mode in which the current and future frames are to be temporally coded.
  • two independent thresholds are used to compute the MIF.
  • Coefficients in the sub-bands in the wavelet map may be used for this purpose.
• the decision tree to classify blocks based on the average amount of motion is based on the segregation of the coefficients into three categories. For blocks whose total energy after compensation is greater than the energy of the original current block itself, the corresponding motion vector co-ordinates are set to a predetermined value, such as, for example, a value of 127.
  • the other two categories of blocks have motion vectors with both coordinates equal to a value other than the predetermined value.
  • these blocks are labeled as NC (non-compensated), Z (zero) and NZ (non-zero) respectively.
  • the first threshold is set for the four coarsest sub-bands in the wavelet map.
• if α is less than 10% of the value of β, then the particular frame is repeated. Otherwise, motion prediction (B-MRMP) is performed.
• a similar test with the same test parameters (α and β) is performed on the remaining finer sub-bands. In one embodiment, if α is less than 10% of β, motion block substitution (MBS) is performed. Otherwise, motion prediction (B-MRMP) is performed.
• the threshold factor and the number of sub-bands to be used in either test is a matter of conjecture and diligent experimentation. In one embodiment, 4 (out of a possible 10) sub-bands are used for the first test and the remaining 6 are used for the second test, with a threshold factor of 10% in either case (see the sketch below).
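A sketch of the resulting decision, assuming α and β arrive as already-computed tallies from the NC/Z/NZ classification (their precise definition is not spelled out above, so they are passed in directly), and assuming the coarse test gates the fine test:

```python
def select_temporal_mode(coarse, fine, factor=0.10):
    """coarse, fine: (alpha, beta) pairs for the four coarsest and the six
    finer sub-bands respectively. Returns the temporal coding mode."""
    a, b = coarse
    if a < factor * b:
        return "REPEAT_FRAME"      # negligible motion: repeat the frame
    a, b = fine
    if a < factor * b:
        return "MBS"               # mild motion: motion block substitution
    return "B-MRMP"                # otherwise: full motion prediction
```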
  • a full search routine of the pixel (spatial) map is introduced prior to the wavelet transformation block, in order to predict and track motion in the spatial domain and thereby exploit the temporal redundancy between consecutive frames in the video sequence, as shown in Figures 9B and 10B.
• a 16 x 16 block size is best suited for tracking real-world global and local motion. This includes, but is not limited to, rotational, translational, camera-pan, and zoom motion. Hence, blocks of this size are referred to as standard macroblocks.
  • a unidirectional motion prediction (U-MP) is employed to predict motion between consecutive frames using a full search technique.
  • the frame is divided into blocks with height and width of the standard macroblock size (16 x 16).
  • frame dimensions are edge extended to be a multiple of 16.
  • a standard and uniform technique is applied across all frames.
• the edge extended zone can be filled with the pixel values along the edge of the actual image, for instance, or may be padded with zeros throughout. A variety of techniques may be utilized dependent upon the specific configurations.
  • the U-MP routine is applied to all such blocks in a raster scan sequence.
  • a neighborhood zone is defined around the edges of the macroblock, as shown in Figure 13.
  • the depth of the neighborhood zone is chosen to be equal to 15 pixels in every direction.
  • each macroblock to be processed using U-MP is padded with a 15 pixel neighborhood zone around it from all directions.
  • the neighborhood zone may extend over to the region outside the image map.
  • the neighborhood zone for the macroblock uses pixels from the edge extended zone.
  • the U-MP routine may be split into five basic operations.
  • a threshold is set to determine which pixels, or sets thereof, need to be compensated in the U-MP process.
• each pixel in the reference frame is subtracted from the homologous pixel in the current frame, thereby generating a difference map.
  • Each pixel in the difference map is then compared against a pre-determined threshold. The value of the threshold is a matter of conjecture and rigorous experimentation.
• if the difference value is above the threshold, the pixel is marked as active; else it is marked as inactive.
• a count of the number of such active pixels in each 16 x 16 block of pixels in the reference frame is recorded. If the number of active pixels in the macroblock is above a pre-determined threshold, the macroblock is marked as active; else it is marked as inactive (see the sketch below).
  • the value of the threshold is a matter of conjecture and rigorous experimentation.
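A sketch of the activity test, assuming frames are numpy luma maps already edge-extended to multiples of 16; both thresholds are left as parameters since the text pins neither down:

```python
import numpy as np

def active_macroblocks(reference, current, pix_thres, count_thres):
    """Mark pixels whose frame difference exceeds pix_thres as active, then
    mark each 16 x 16 macroblock active when its active-pixel count exceeds
    count_thres. Returns a boolean map with one entry per macroblock."""
    diff = np.abs(current.astype(np.int64) - reference.astype(np.int64))
    active = diff > pix_thres
    h, w = active.shape
    per_block = active.reshape(h // 16, 16, w // 16, 16).sum(axis=(1, 3))
    return per_block > count_thres
```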
  • the second operation in the U-MP process is the unidirectional motion prediction (U-MP) operation.
  • a modification of the traditional half-pel motion prediction algorithm is performed.
  • each frame is interpolated by a factor of two, leading to a search area that is four times the original image map.
• the previous frame (known as the reference frame) is used as one basis of comparison.
• the current frame (known simply as the current frame) is used as the other basis of comparison.
  • the homologous blocks in these two frames that are compared are called the reference block and the cunent block respectively, as shown in Figure 13.
• the non-integer-pel motion interpolation scheme may be further modified to perform a form of quarter-pel motion prediction as shown in Figure 14.
• the luma maps of the current and reference frames are interpolated by a factor of four along both the cardinal directions, such that the effective search area in the search-and-match routine is increased by a factor of sixteen.
• the choice of the interpolation mechanism is a matter of conjecture and rigorous experimentation, which includes, but is not restricted to, bi-linear, quadratic and cubic-spline interpolation schemes. The tradeoff between accurate prediction of the interpolated coefficients and speed of computation is a major deciding factor for the choice of scheme.
• each macroblock in the current frame is subtracted pixel-by-pixel from the homologous macroblock in the reference frame. This generates the non-displaced compensated block.
• an integer search is performed on every 16 x 16 macroblock of the current frame. In this routine, the pixels of the current macroblock are superimposed over every set of pixels of the same size as the current block in the neighborhood zone around the reference block.
• the metric employed for comparing these two sets of pixels is the L1 (sum of absolute differences, or SAD) metric.
  • the SAD is computed for all 16 x 16 blocks in the neighborhood zone of the reference block, and the position of the block with the lowest value of SAD is labeled as a matching block.
• the relative position between the matching block and the reference block is recorded using a unique data structure known as the motion vector for the current reference block.
  • a half-pel search is performed on every 16 x 16 macroblock of the cunent frame (see Fig. 13).
  • the motion vector obtained for a particular macroblock in the integer search mode is doubled, and a refined search is performed.
  • the depth of the refined search area is one pixel across in all directions. This operation helps in detecting motion which is less than or equal to half a pixel in all directions.
• the resultant motion vector is obtained by summing the scaled motion vector obtained in the integer search and the refined search modes. This and the corresponding SAD value are recorded for future mode selection (see Fig. 13). A sketch of the integer and half-pel searches follows this item.
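A sketch of the integer search plus half-pel refinement. The interpolated-by-two maps are assumed to be prepared elsewhere (e.g., by one of the interpolation schemes mentioned above); the 15-pixel integer depth and the one-pixel refinement ring come from the description:

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences (L1) between equal-sized blocks."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def integer_search(cur_block, ref, top, left, depth=15):
    """Full search over every same-sized block within `depth` pixels of the
    reference position; returns the motion vector and its SAD."""
    h, w = cur_block.shape
    best_mv, best = (0, 0), None
    for dy in range(-depth, depth + 1):
        for dx in range(-depth, depth + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                cost = sad(cur_block, ref[y:y + h, x:x + w])
                if best is None or cost < best:
                    best, best_mv = cost, (dy, dx)
    return best_mv, best

def half_pel_refine(cur_block_2x, ref_2x, top, left, int_mv):
    """On maps interpolated by two, double the integer vector and re-search a
    one-pixel ring around it; the resultant vector (in half-pel units) is the
    scaled vector plus the refinement."""
    base = (2 * int_mv[0], 2 * int_mv[1])
    delta, cost = integer_search(cur_block_2x, ref_2x,
                                 2 * top + base[0], 2 * left + base[1],
                                 depth=1)
    return (base[0] + delta[0], base[1] + delta[1]), cost
```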
  • each macroblock is split into four blocks of size 8 x 8, and half-pel search is performed on each of the four blocks (see Fig. 14).
• the set of four resultant motion vectors, obtained by summing the scaled motion vector obtained in the integer search and refined search modes, and their corresponding SAD values are recorded for mode selection later on (see Fig. 14).
• each block of 8x8 within the current macroblock is further split into four blocks of 4x4, and the technique of scaling and refined search outlined above may be repeated for all possible search areas of dimensions 4x4, 4x8 and 8x4 pixels.
  • the SAD values obtained from the refined motion estimation routines outlined in this paragraph are also tabulated for future mode selection.
  • the weights are imposed by comparing the SAD values against some predetermined threshold.
  • the value of the threshold in each of the three cases outlined above, is a matter of conjecture and rigorous experimentation. This is done to ensure that a mode with higher rate is chosen for a particular macroblock, only when the advantage so obtained, in terms of higher fidelity (and lower SAD), is fairly substantial.
• the choice of the matching block in the overlapped block matching/compensation (OBMC) routine is a function of the motion vectors of the reference block currently being tested, as well as its abutting neighbors, as shown in Figure 15.
• the motion vectors from all three blocks are translated to any one corner of the reference block being tested (with no preference being given to any particular corner, though this choice should be consistent throughout the compensation procedure for that block), and the corresponding matching blocks are determined.
  • the dimensions of all three matching blocks should be equal to the dimensions of the reference block (see Fig. 15).
• homologous pixels from all the matching blocks so determined are summed with different weights, and then differenced with the homologous pixel in the current block (in the current frame). The difference values are overwritten on the corresponding pixel positions in the current block. This difference block is labeled as the compensated block.
  • the matching block is of size 8 x 8.
  • each of the four 8 x 8 blocks carved out of the original 16 x 16 reference block is used to perform OBMC. If the block directly abutting any one of the 8 x 8 blocks is of mode 1MV, its single motion vector is used in the OBMC process. If the abutting block is of mode 4MV, only that 8 x 8 block of such an abutting block, which shares an entire line of pixels as the border with the 8 x 8 block in question (in the reference block being tested) is used (see Fig. 15).
• the weighting function applied to the pixels or sets thereof in the reference block currently being tested, as well as the function applied to the pixels or sets thereof in the blocks abutting the reference block, can be determined using a process of rigorous experimentation. A sketch of this weighted compensation follows this item.
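A sketch of the weighted overlapped compensation; the weights (assumed here to sum to one) and the translation of the neighbours' vectors to the chosen corner are exactly the experimentally determined parts, so both arrive as inputs:

```python
import numpy as np

def obmc_compensate(cur_block, ref, corner, mvs, weights):
    """cur_block: block from the current frame; corner: its (y, x) top-left
    position; mvs: motion vectors of the tested block and its abutting
    neighbours, already translated to that corner; weights: one weight per
    vector. The weighted sum of matching blocks is differenced against the
    current block to give the compensated block."""
    h, w = cur_block.shape
    y0, x0 = corner
    pred = np.zeros((h, w), dtype=np.float64)
    for (dy, dx), wt in zip(mvs, weights):
        pred += wt * ref[y0 + dy:y0 + dy + h, x0 + dx:x0 + dx + w]
    return cur_block.astype(np.float64) - pred   # compensated block
```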
• a residual frame is generated, as a direct outcome of the OBMC operation described above. Using the displaced frame difference (DFD) routine, each block (8 x 8 or 16 x 16) is differenced, and the pixel values are overwritten onto the corresponding pixel positions in the current block, thereby generating the residual block. Once all the blocks in the current frame have been tested, the resultant frame is labeled as the residual frame.
• the SAD may be compared against a predetermined threshold. If the SAD is below the predetermined threshold, the particular macroblock is marked as a non-compensated macroblock (NCMB). If four such NCMBs are found adjacent to each other in a 2 x 2 grid array arrangement, this set of four blocks is jointly labeled as a non-coded block (NCB).
• the decoder, which decodes the encoded bit stream, has the reverse signal flow of the encoder.
• the relative order of the various signal processing operations is reversed (for example, the wavelet reconstruction block, or I-DWT, comes after the source/entropy decoder, inverse arithmetic coding).
• the motion vector information for a particular block of pixels (of any arbitrary sub-band at any arbitrary level of resolution) is used to mark the current block under consideration, and the residual frame is updated (or 'compensated') by simply adding the values of the homologous pixels from the residual block to the current block as shown in Figures 9B and 10B.
  • Embodiments of the present invention also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
• Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer).
  • a machine-readable medium includes read only memory ("ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.

Abstract

Methods and apparatuses for compressing digital image data with motion prediction are described herein. In one embodiment (Figures 9A and 9B), for each two consecutive frames of an image sequence, a motion prediction is performed between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component (ME/MC of Fig. 9A). The motion prediction information of the luminance component is then applied to the chrominance maps. In response to the motion prediction, the wavelet coefficients of each frame and the motion prediction information are encoded into a bit stream based on a target transmission rate, where the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm. Other methods and apparatuses are also described.

Description

METHODS AND APPARATUSES FOR COMPRESSING DIGITAL IMAGE DATA WITH MOTION PREDICTION
This application claims the benefit of U.S. Provisional Application No. 60/552,153, filed March 10, 2004, U.S. Provisional Application No. 60/552,356, filed March 10, 2004, and U.S. Provisional Application No. 60/552,270, filed March 10, 2004. The above-identified applications are hereby incorporated by reference.
FIELD OF THE INVENTION
[0001] The present invention relates generally to multimedia applications. More particularly, this invention relates to compressing digital image data with motion prediction.
BACKGROUND OF THE INVENTION
[0002] A variety of systems have been developed for the encoding and decoding of audio/video data for transmission over wireline and/or wireless communication systems over the past decade. Most systems in this category employ standard compression/transmission techniques, such as, for example, the ITU-T Rec. H.264 (also referred to as H.264) and ISO/IEC Rec. 14496-10 AVC (also referred to as MPEG-4) standards. However, due to their inherent generality, they lack the specific qualities needed for seamless implementation on low power, low complexity systems (such as handheld devices including, but not restricted to, personal digital assistants and smart phones) over noisy, low bit rate wireless channels. [0003] Due to the likely business models rapidly emerging in the wireless market, in which cost incurred by the consumer is directly proportional to the actual volume of transmitted data, and also due to the limited bandwidth, processing capability, storage capacity and battery power, efficiency and speed in compression of audio/video data to be transmitted is a major factor in the eventual success of any such multimedia content delivery system. Most systems in use today are retrofitted versions of identical systems used on higher end desktop workstations. Unlike desktop systems, where error control is not a critical issue due to the inherent reliability of cable LAN/WAN data transmission, and bandwidth may be assumed to be almost unlimited, transmission over limited capacity wireless networks requires integration of such systems that may leverage suitable processing and error-control technologies to achieve the level of fidelity expected of a commercially viable multimedia compression and transmission system.
[0004] Conventional video compression engines, or codecs, can be broadly classified into two categories. One class of coding strategies, known as a download-and-play (D&P) profile, not only requires the entire file to be downloaded onto the local memory before playback, leading to a large latency time (depending on the available bandwidth and the actual file size), but also makes stringent demands on the amount of buffer memory to be made available for the downloaded payload. Even with the more sophisticated streaming profile, the current physical limitations on current generation transmission equipment at the physical layer force service providers to incorporate a pseudo-streaming capability, which requires an initial period of latency (at the beginning of transmission), and continuous buffering henceforth, which imposes a strain on the limited processing capabilities of the handheld processor. Most commercial compression solutions in the market today do not possess a progressive transmission capability, which means that transmission is possible only until the last integral frame, packet or bit before bandwidth drops below the minimum threshold. In case of video codecs, if the connection breaks before the transmission of the current frame, this frame is lost forever. [0005] Another drawback in conventional video compression codecs is the introduction of blocking artifacts due to the block-based coding schemes used in most codecs. Apart from the degradation in subjective visual quality, such systems suffer from poor performance due to bottlenecks introduced by the additional deblocking filters. Yet another drawback is that, due to the limitations in the word size of the computing platform, the coded coefficients are truncated to an approximate value. This is especially prominent along object boundaries, where Gibbs' phenomenon leads to the generation of a visual phenomenon known as mosquito noise. Due to this, the blurring along the object boundaries becomes more prominent, leading to degradation in overall frame quality. [0006] Additionally, the local nature of motion prediction in some codecs introduces motion-induced artifacts, which cannot be easily smoothed by a simple filtering operation. Such problems arise especially in cases of fast motion clips and systems where the frame rate is below that of natural video (e.g., 25 or 30 fps noninterlaced video). In either case, the temporal redundancy between two consecutive frames is extremely low (since much of the motion is lost in between the frames itself), leading to poorer tracking of the motion across frames. This effect is cumulative in nature, especially for a longer group of frames (GoF). [0007] Furthermore, mobile end-user devices are constrained by low processing power and storage capacity. Due to the limitations on the silicon footprint, most mobile and hand-held systems in the market have to time-share the resources of the central processing unit (microcontroller or RISC/CISC processor) to perform all its DSP, control and communication tasks, with little or no provisions for a dedicated processor to take the video/audio processing load off the central processor. Moreover, most general-purpose central processors lack the unique architecture needed for optimal DSP performance. Therefore, a mobile video-codec design must have minimal client-end complexity while maintaining consistency on the efficiency and robustness front. SUMMARY OF THE INVENTION
[0008] Methods and apparatuses for compressing digital image data with motion prediction are described herein. In one embodiment, for each two consecutive frames of an image sequence, a motion prediction is performed between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component. The motion prediction information of the luminance component is then applied to the chrominance maps. In response to the motion prediction, the wavelet coefficients of each frame and the motion prediction information are encoded into a bit stream based on a target transmission rate, where the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm.
[0009] Other features of the present invention will be apparent from the accompanying drawings and from the detailed description which follows.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
[0011] Figure 1 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment.
[0012] Figure 2 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment.
[0013] Figure 3 is a block diagram illustrating an exemplary network stack according to one embodiment.
[0014] Figures 4A and 4B are block diagrams illustrating exemplary encoding and decoding systems according to certain embodiments.
[0015] Figure 5 is a flow diagram illustrating an exemplary encoding process according to one embodiment.
[0016] Figures 6 and 7 are block diagrams illustrating exemplary pixel maps according to certain embodiments.
[0017] Figure 8 is a flow diagram illustrating an exemplary encoding process according to an alternative embodiment.
[0018] Figures 9A-9B and 10A-10B are block diagrams illustrating exemplary encoding and decoding systems with motion prediction according to certain embodiments. [0019] Figure 11 is a flow diagram illustrating an exemplary encoding process with motion prediction according to one embodiment.
[0020] Figures 12-15 are block diagrams illustrating exemplary pixel maps according to certain embodiments.
DETAILED DESCRIPTION
[0021] Methods and apparatuses for compressing digital image data with motion prediction are described herein. Concerns addressed by embodiments in this application are the speed of processing data using a processor having limited processing power, storage memory capacity and/or battery life, to achieve transmission data rates which would reproduce high-fidelity multimedia data (e.g., audio/video), the optimal compression of the payload data by exploiting any form of redundancy (e.g., spatial, temporal or run-length) present therein to achieve a target transmission rate as specified by the channel capacity, and the unique packetization of the data for optimal progressive transmission over the channel. [0022] Embodiments of the system are suited for wireless streaming solutions, due to the seamless progressive transmission capability (e.g., various bandwidths), which helps in graceful degradation of video quality in the event of a sudden shortfall in channel bandwidth. Moreover, it also allows for comprehensive intra- as well as inter-frame rate control, thereby allowing for the optimal allocation of bits to each frame, and an optimal distribution of the frame bit budget between the luma and chroma maps. As a result, this helps in improving the perceptual quality of frames that have relatively high levels of detail or motion, while maintaining a minimal threshold on the picture quality of uniform texture and/or slow motion sequences within a video clip.
[0023] In the following description, numerous details are set forth to provide a more thorough explanation of the present invention. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.
[0024] Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification do not necessarily all refer to the same embodiment.
Overview
[0025] Embodiments set forth herein include a stable system to compress and decompress digital audio/video data that is implemented on software and/or hardware platforms. Some advantages of the various embodiments of the invention include, but are not limited to, low battery power consumption, low complexity and low processing load, leading to a more efficient implementation of a commercial audio/video compression/decompression and transmission system.
[0026] According to certain embodiments, some other advantages include, but are not restricted to, a robust error detection and correction routine that exploits the redundancies in the unique data structure used in the source/arithmetic encoder/decoder of the system, and a smaller search space for predicting motion between two consecutive frames, for a more efficient and faster motion prediction routine.
[0027] Figure 1 is a block diagram of one embodiment of an exemplary multimedia streaming system. Referring to Figure 1, exemplary system 100 includes server component 101 (also referred to herein as a server suite) communicatively coupled to client components 103-104 (also referred to herein as client suites) over a network 102, which may be a wired network, a wireless network, or a combination of both. A server suite is an amalgamation of several services that provide download-and-playback (D&P), streaming broadcast, and/or peer-to-peer communication services. This server suite is designed to communicate with any third party network protocol stack (as shown in Figure 3). In one embodiment, these components of the system may be implemented in the central server, though a lightweight version of the encoder may be incorporated into a handheld platform for peer-to-peer video conferencing applications. The decoder may be implemented in a client-side memory. The server component 101 may be implemented as a plug-in application within a server, such as a Web server. Similarly, each of the client components 103-104 may be implemented as a plug-in within a client, such as a wireless station (e.g., cellular phone, a personal digital assistant or PDA).
[0028] In one embodiment, server component 101 includes a data acquisition module 105, an encoder 106, and a decoder 107. The data acquisition module 105 includes a video/audio repository, an imaging device to capture video in real-time, and/or a repository of video/audio clips. In one embodiment, an encoder 106 reads the data and entropy/arithmetic encodes it into a byte stream. The encoder 106 may be implemented within a server suite.
[0029] In one embodiment, video/audio services are provided to a client engine (e.g., clients 103-104), which is a product suite encapsulating a network stack implementation (as shown in Figure 3) and a proprietary decoder (e.g., 108-109). This suite can accept a digital payload at various data rates and footprint formats, segregate the audio and video streams, decode each byte stream independently, and display the data in a coherent and real-life manner.
[0030] In one embodiment, encoder module 106 reads raw data in a variety of data formats (which includes, but is not limited to, RGB x:y:z, YUV x':y':z', YCrCb x":y":z", where the letter symbols denote sub-sampling ratios, etc.), and converts them into one single standard format for purposes of standardization and simplicity. According to one embodiment, irrespective of the mode of data acquisition, the digital information is read frame-wise in a non-interleaved raster format. In case of pre-recorded video compression, the encoder unit 106 segregates the audio and video streams prior to actual processing. This is useful since the encoding and decoding mechanisms used for audio and video may be different.
[0031] In one embodiment, the frame data is then fed into a temporary buffer, and transformed into the spatial-frequency domain using a unique set of wavelet filtering operations. The ingenuity in this wavelet transformation lies in its preclusion of extra buffering, and the conversion of computationally complex filtering operations into simple addition/subtraction operations. This makes the wavelet module in this codec more memory-efficient.
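As an illustration of filtering reduced to additions and subtractions, here is one 1-D lifting step using the integer 5/3 scheme as a stand-in (the text does not name the actual filter, and the periodic boundary handling via np.roll is a simplification of a real codec's symmetric extension):

```python
import numpy as np

def lifting_53_forward(row):
    """One forward lifting step on a 1-D integer array of even length;
    returns (approximation, detail) coefficients."""
    even, odd = row[0::2].copy(), row[1::2].copy()
    # Predict: detail = odd sample minus the average of its even neighbours.
    odd -= (even + np.roll(even, -1)) // 2
    # Update: approximation = even sample plus a correction from the details.
    even += (odd + np.roll(odd, 1) + 2) // 4
    return even, odd
```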
[0032] In one embodiment, the source encoder/decoder performs compression of the data by reading the wavelet coefficients of every sub-band of the frame obtained from the previous operation in a unique zigzag fashion, known as a Morton scan (similar to the one shown in Figure 7). This allows the system to arrange the data in an order based on the significance of the wavelet coefficients, and code it in that order. The coding alphabet can be classified into significance, sign and refinement classes in a manner well-known in the art (e.g., JPEG 2000, etc.). [0033] Based on the location of the pixel coefficient being coded in the sub-band map, the significance, sign and bit plane information of the pixel is coded and transmitted into the byte-stream. The first set of coefficients to be coded thus is the coarsest sub-band in the top-left corner of the sub-band map. Once the coarsest sub-band has been exhausted in this fashion, the coefficients in the finer sub-bands are coded in a similar fashion, based on a unique tree-structure relationship between coefficients in spatially homologous sub-bands.
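A sketch of one way to produce such a scan order, using the classic bit-interleaved Morton (Z-curve) key; whether the codec's Morton scan matches this exact curve is an assumption:

```python
def morton_order(n):
    """Visit order for an n x n sub-band (n a power of two): interleave the
    bits of the row and column indices so spatially clustered coefficients
    are visited together."""
    def key(y, x):
        k = 0
        for b in range(n.bit_length()):
            k |= ((y >> b) & 1) << (2 * b + 1) | ((x >> b) & 1) << (2 * b)
        return k
    return sorted(((y, x) for y in range(n) for x in range(n)),
                  key=lambda p: key(*p))
```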
[0034] To further exploit the redundancy of the bit stream, it is partitioned into independent logical groups of bits, based on their source in the sub-band tree map and the type of information it represents (e.g., significance, sign or refinement), and is arithmetic coded for further compression. This process achieves results similar to, but superior to, the context-based adaptive binary arithmetic coding (CABAC) technique specified in the H.264 and MPEG4 standards. [0035] The temporal redundancy between consecutive frames in a video stream is exploited to reduce the bit count even further, by employing a motion prediction scheme. In one embodiment, motion is predicted over the four coarsest sub-bands, and, by employing a type of affine transformation, is predicted in the remaining finer sub-bands using a lower-entropy refinement search. The effective search area for predicting motion in the finer sub-bands is smaller than in the coarser sub-bands, leading to a speed-up in the overall performance of the system, along with a lower bit-rate as compared to similar video compression systems in current use. [0036] In one embodiment, the video decoder (e.g., decoders 108-109) works in a manner similar to the encoder, with the exception that it does not have the motion prediction feedback loop. To compensate for the spatial and temporal redundancies in the data, the decoder performs the relatively opposite operations as in the encoder. In one embodiment, the byte stream is read on a bit-by-bit basis, and the coefficients are updated using a non-linear quantization scheme, based on the context decision. Similar logic applies to the wavelet transformation and arithmetic coding blocks. [0037] Once the bit budget is exhausted, according to one embodiment, the updated coefficient map is inverse wavelet transformed using a set of arithmetic lifting operations, which may be the reverse of the operations undertaken in the forward wavelet transform block in the encoder, to create the reconstructed frame. The reconstructed frame is then rendered by a set of native API (application programming interface) calls in the decoder client.
[0038] Techniques described herein are compatible with peer-to-peer video/audio, text/multimedia messaging, download-and-play and streaming broadcast solutions available from any third party content or service provider. For this purpose, according to one embodiment, the codec suite is made compatible with several popular third party multimedia network protocol suites.
[0039] In one embodiment, the exemplary system can be deployed on a variety of operating systems and environments, both on the hand-held as well as PC domain. These include, but are not restricted to: Microsoft® Windows® 9x/Me/XP/NT 4.x/2000, Microsoft® Windows® CE, PocketLinux™ (and its various third party flavors), SymbianOS™ and PalmOS™. It is available on a range of third-party development platforms. These include, but are not limited to, Microsoft® PocketPC™ 200X, Sun Microsystems® J2ME™ MIDP® X.O/CLDC® X.O, Texas Instruments® OMAP™ and Qualcomm® BREW™. On the hardware front, embodiments of the invention can be provided as a solution on a wide range of platforms including, but not limited to, Field Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC) and System-on-Chip (SoC) implementations.
Exemplary Systems
[0040] A technique to enhance the delivery of audio and video multimedia data over low-bandwidth wireless networks is described herein. The following paragraphs explain the structure of an integrated system that provides services to a user from a single consolidated platform. It addresses transcoding and scalability issues, as well as integration between different third-party network protocol implementations. [0041] Figure 2 is a block diagram illustrating an exemplary multimedia streaming system according to one embodiment. Referring to Figure 2, exemplary system 200 includes a server or servers 201 communicatively coupled to one or more clients 202-203 over various types of networks, such as wireless network 204 and/or wired networks 205-206, which may be the same network. For example, server 201 may be implemented as server 101 of Figure 1. Clients 202-203 may be implemented as clients 103-104 of Figure 1. In one embodiment, the server platform 201 includes, but is not limited to, three units labeled A, B and C. These units may be implemented as a single unit or module. These units can communicate with one another, as well as with external units, to provide all relevant communication and video/audio processing capabilities.
[0042] Unit C may be an application server, which provides download services for client-side components such as decoder/encoder APIs to facilitate third-party support, browser plug-ins, drivers and plug-and-play components. Unit B may be a web services platform. This addresses component reusability and scalability issues by providing COM™, COM+™, EJB™, CORBA™, XML and other related web and/or MMS related services. These components are discrete and encapsulate the data. They minimize system dependencies and reduce interaction to a set of inputs and desired outputs. To use a component, a developer may call its interface. The functionality, once developed, can be used in various applications, hence making the component reusable. The components are decoupled from each other, hence different parts can be scaled without the need to change the entire application. Because of these features, the applications can be customized to provide differentiating services and can be scaled to handle more customers as the demand grows. Unit A may be an actual network services platform. Unit A provides the network services required to transmit encoded data over the wireless network, either in a D&P (Download and Play) or a streaming profile. Unit A also provides support for peer-to-peer (P2P) communications in mobile video conferencing applications, as well as communicating with the wireless service provider to expedite billing and other ancillary issues.

[0043] According to one embodiment, for the purposes of illustration, three main types of scenarios are described herein. However, other types of scenarios may be applied. In one instance, a user 203 with unrestricted mobility (such as a person driving a car downtown) is able to access his or her wireless multimedia services using the nearest wireless base station (BS) 209 of the service provider to which he or she subscribes. The connection could be established using a wide range of technologies including, but not limited to, WCDMA (UMTS), IS-95A/B, CDMA 1X/EV-DO/EV-DV, IS-2000 (CDMA2000), GSM-GPRS-EDGE, AMPS, iDEN/WiDEN, and WiMAX. The BS 209 communicates with the mobile telephone switching office (MTSO) 210 of the service provider over a TCP/IP or UDP/IP connection on the wireless WAN 204. The MTSO 210 handles hand-off, call dropping, roaming and other user profile issues. The payload and profile data is sent to the wireless ISP server for processing.
[0044] In another type of scenario, the user 202 has limited mobility, for example, within a home or office building (e.g., a LAN controlled by access point/gateway 211). Such a user sends in a request for a particular service over a short-range wireless connection, which includes, but is not restricted to, a Bluetooth™, Wi-Fi™ (IEEE™ 802.11x), HomeRF, HiperLAN/1 or HiperLAN/2 connection, via an access point (AP) and the corporate gateway 211, to the web gateway of his or her service provider. The ISP communicates with the MTSO 210 to forward the request to the server suite 201. All communications are over a TCP/IP or UDP/IP connection 206. Once the required service has been processed by the server 201, the payload is transmitted over substantially the same route in the reverse direction back to the user.
[0045] In a third type of scenario, peer-to-peer (P2P) communication is enabled by bypassing the server 201 altogether. In this case, all communications, payload transfer and audio/video processing are routed or delegated through the wireless ISP server (e.g., server 207) without any significant load on that server, other than performing the functions of control, assignment, and monitoring.

[0046] The system capabilities may be classified based on the nature of the services and modes of payload transfer. In a D&P service, the user waits for the entire payload (e.g., a video/audio clip) to be downloaded onto his or her wireless mobile unit or handset before playing it. Such a service has a large latency period, but can be transported over secure and reliable TCP/IP connections.

[0047] In a streaming service, the payload routing is the same as before, with the exception that it is now transported over a streaming protocol stack (e.g., RTSP/RTP, RTCP, SDP) over a UDP/IP network (e.g., networks 205-206). This ensures that the payload packets are transmitted quickly, though there is a chance of data corruption (e.g., packet loss) due to the unreliable nature of the UDP connection. In P2P services, the payload is routed through a UDP/IP connection, to ensure the live video/audio quality needed for video conferencing applications.

[0048] The decoder as well as the encoder may be available in hardware, software, or a combination of both. For download-and-play and streaming services, the encoder may be stored in the remote server, which provides the required service over an appropriate connection, while a lightweight software decoder may be stored in the memory of the wireless handheld terminal. The decoder APIs can be downloaded from an application server (e.g., unit C) over an HTTP/FTP-over-TCP/IP connection. For services that require more interaction, such as MMS and P2P video conferencing services, both the encoder (e.g., on a stand-alone hardware platform riding piggy-back on the handset chipset) and the decoder (e.g., an application layer software) are installed on the handheld terminal, for example, integrated within a network protocol stack as shown in Figure 3.
Exemplary Data Compression Processes
[0049] Figures 4A and 4B are data flow diagrams illustrating exemplary encoding and decoding processes through an encoding system and a decoding system respectively, according to certain embodiments of the invention. Figure 5 is a flow diagram illustrating an exemplary process for encoding digital image data according to one embodiment. The exemplary process 500 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both. For example, exemplary process 500 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.
[0050] Referring to Figures 4A and 5, the codec works on a raw file format 401 having raw YUV color frame data specified by any of several standard file and frame formats including, but not limited to, high definition television (HDTV), standard definition television (SDTV), extended video graphics array (XVGA), standard video graphics array (SVGA), video graphics array (VGA), common interchange format (CIF), quarter common interchange format (QCIF) and sub-quarter interchange format (S-QCIF). In one embodiment, the pixel data is stored in a byte format, which is read in serial fashion and stored in a 1-dimensional array.

[0051] In one embodiment, each image, henceforth called the 'frame', includes three maps. Each of these maps may be designated to either store one primary color component, or, in a more asymmetric scheme, one map stores the luminance information (also referred to as a luma map or Y map), while the other two maps store the chrominance information (also referred to as chroma maps or Cb/Cr maps). The Y map stores the luma information of the frame, while the chroma information is stored in two quadrature components. The system is designed to work on a wide variety of chrominance sub-sampling formats (which includes, but is not restricted to, the 4:1:1 color format). In this case, the dimensions of the chroma maps are an integral fraction of the dimensions of the luma map, along both cardinal directions. The pixel data is stored in a byte format in the raw payload file, which is read in serial fashion and stored in a set of three one-dimensional arrays, one for each map.

[0052] Since the dimensions of the frame are previously known, the 2-dimensional co-ordinate of each pixel in the image map is mapped onto the indexing system of the 1-dimensional array representing the current color map. The actual 1-dimensional index is divided by the width of the frame to obtain the "row number". A modulo operation on the 1-dimensional index gives a remainder that is the "column number" of the corresponding pixel.
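As an illustration of the index arithmetic in paragraph [0052], consider the following minimal sketch in Python (the frame width and index values here are hypothetical examples, not taken from the specification):

# Map a 1-dimensional pixel index to 2-dimensional frame coordinates.
# Assumes row-major (raster scan) storage, as described above.
def index_to_coords(index, width):
    row = index // width  # integer division by the frame width gives the row number
    col = index % width   # the remainder of the modulo operation gives the column number
    return row, col

# Example: pixel 805 in a 176-pixel-wide QCIF luma map maps to row 4, column 101.
row, col = index_to_coords(805, 176)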
[0053] In one embodiment, as one of the more significant pre-processing operations, the pixel coefficient values are scaled up by shifting the absolute value of each pixel coefficient by a predetermined factor (e.g., a factor of 64, equivalent to a left shift by six bits). This increases the dynamic range of the pixel coefficient values, thereby allowing for a finer approximation of the reconstructed frame during decoding.

[0054] The next operation in the encoding process is to transform the payload from the spatial domain into the multi-resolution domain. In one embodiment, a set of forward and backward wavelet filter 402 coefficients with an integral number of taps is used for the low pass and high pass filtering operations (e.g., operation 501). In one embodiment, the filter coefficients may be modified in such a way that all operations can be done in-place, without a need for buffering the pixel values in a separate area in memory. This saves valuable volatile-memory space and processing time.
Exemplary Wavelet Filtering Operations
[0055] In one embodiment, the wavelet filtering operations on each image pixel are performed in-place, and the resultant coefficients maintain their relative position in the sub-band map. In one embodiment, the entire wavelet decomposition process is split into its horizontal and vertical components, with no particular preference to the order in which the cardinal orientation of filtering may be chosen. Due to the unique lifting nature of the filtering process, the complex mathematical computations involved in the filtering process are reduced to a set of fast, low-complexity addition and/or subtraction operations.
[0056] In every pass, according to one embodiment, a row or column (also referred to as a row vector or a column vector) is chosen, depending on the direction of the current filtering process. In a particular embodiment, a low pass filtering operation is performed on every pixel that has an even index relative to the first pixel in the current vector, and a high pass filtering operation is performed on every pixel that has an odd index relative to the first pixel of the same vector. For each filtering operation, the pixel whose wavelet coefficient is to be determined is chosen from the current vector, along with a set of pixels symmetrically arranged around it in its neighborhood along the current orientation of filtering.
[0057] Wavelet filters with a vanishing moment of four are applied to the pixels. In one embodiment, four-tap high pass and low pass filters are used for the transformation. The high pass filter combines the four neighboring even pixels, weighted and normalized, as shown below, for filtering an odd pixel:

[9*(X_{k-1} + X_{k+1}) - (X_{k-3} + X_{k+3}) + 16] / 32

The low pass filter combines the four neighboring odd pixels, weighted and normalized, as shown below, for filtering an even pixel:

[9*(X_{k-1} + X_{k+1}) - (X_{k-3} + X_{k+3}) + 8] / 16

where X_k is the pixel at position k.
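The filter formulas above can be transcribed directly into code. The following is a minimal sketch in Python, assuming integer arithmetic with the rounding offsets as given, and ignoring boundary handling near the vector ends (an assumption made for brevity):

# Direct transcription of the four-tap filters given above.
# X is the current row or column vector; k indexes the pixel being filtered.
def high_pass(X, k):
    # Filters an odd-indexed pixel from its four even-indexed neighbors.
    return (9 * (X[k - 1] + X[k + 1]) - (X[k - 3] + X[k + 3]) + 16) // 32

def low_pass(X, k):
    # Filters an even-indexed pixel from its four odd-indexed neighbors.
    return (9 * (X[k - 1] + X[k + 1]) - (X[k - 3] + X[k + 3]) + 8) // 16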
[0058] Different normalized weights are applied to each pixel (with a weight of 1.0 being applied to the (central) pixel whose transformed value is to be determined), and a series of addition, subtraction and multiplication operations generates the wavelet coefficient in the same pixel position relative to the beginning of the current vector. It can be deduced that all pixel positions with even indices within a particular vector (row or column) (0, 2, 4, ...) represent low pass coefficients, while all pixel positions with odd indices within a particular vector (row or column) (1, 3, 5, ...) represent high pass coefficients. Each such iteration, or pass, creates four sets of coefficients, each constituting a sub-band or sub-image.
[0059] These four sub-bands are intimately inter-meshed with one another. In the first pass, for instance, coefficients belonging to the same sub-image are at an offset of 1 pixel position from one another. In the next pass, all the low pass coefficients (which constitute the LLk sub-image) are selected, and the entire process is repeated. This operation likewise results in four sub-images, with each sub-image having pixels that are at an offset of 2 pixel positions from each other. Due to the iterative decimation, every pass involves computations on exactly one-quarter the number of pixels as compared to the previous pass, with the offset between pixels in the same sub-image at the same level doubling with every iteration.
Alternative Wavelet Filtering Operations
[0060] In another embodiment, the wavelet filtering operation is viewed as a dyadic hierarchical filtering process, meaning that the end-result of a single iteration of the filtering process on the image is to decimate it into four sub-bands, or sub-images, each with half the dimensions of the original image in both directions. The four sub-bands, or sub-images, are labeled as HHk, HLk, LHk and LLk (where k is the level of decomposition, beginning with one for the finest level), depending on their spatial orientation relative to the original image. In the next iteration, the entire filtering process is repeated on only the LLk sub-image obtained in the previous pass, to obtain four sub-images called HHk-1, LHk-1, HLk-1 and LLk-1, which have half the dimensions of LLk, as explained above. This process is repeated for as many levels of decomposition as is desired, or until the LL sub-band has been reduced to a block which is one pixel across, in which case further decimation is no longer possible.

[0061] In one embodiment, the filtering is split into horizontal and vertical filtering operations. For the vertical filtering mode, each column (e.g., vertical vector) in the three maps is processed one at a time. Initially, all the pixels in the column are copied into a temporary vector, which has as many locations reserved as there are pixels in the vector. For the actual filtering operation, the temporary vector is split into two halves. Pixels located in the even numbered memory locations (such as 0, 2, 4, ...) of the temporary vector are low pass filtered using the low pass filter (LPF) coefficients, while the pixels in the odd numbered memory locations (such as 1, 3, 5, ...) of the temporary vector are high pass filtered using the high pass filter (HPF) coefficients.
[0062] The result of each filtering operation (high-pass or low-pass) is stored in the current vector, such that all the results of the low-pass filtering operations are stored in the upper half of the vector (e.g., the top half of a vertical vector, or the left half of a horizontal vector, depending on the current orientation of filtering), while the results from the high-pass filtering operations are stored in the lower half of the vector (e.g., the bottom half of a vertical vector, or the right half of a horizontal vector). In this way, the pixel data is decimated in a single iteration. The entire process is repeated for all the columns and rows in the current map and frame, and then for all three maps of the current frame, to obtain the wavelet transformed image.
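A minimal sketch of one such filtering pass on a single vector, in Python, assuming an even-length vector, the four-tap filters given earlier, and simple index clamping at the vector boundaries (an assumption made for brevity; the specification does not detail boundary handling for this embodiment):

# One decimation pass over a row or column vector.
# Low-pass results land in the upper (first) half of the output,
# high-pass results in the lower (second) half, as described above.
def filter_vector_one_pass(vec):
    n = len(vec)
    tmp = list(vec)  # temporary vector holding a copy of the pixels

    def px(i):
        # Clamp out-of-range indices (simplified boundary handling).
        return tmp[min(max(i, 0), n - 1)]

    out = [0] * n
    for k in range(0, n, 2):   # even locations: low pass filtered
        out[k // 2] = (9 * (px(k - 1) + px(k + 1)) - (px(k - 3) + px(k + 3)) + 8) // 16
    for k in range(1, n, 2):   # odd locations: high pass filtered
        out[n // 2 + k // 2] = (9 * (px(k - 1) + px(k + 1)) - (px(k - 3) + px(k + 3)) + 16) // 32
    return out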
[0063] Referring back to Figure 4A, the bootstrapped source entropy and arithmetic coding process 403 of the wavelet map is also referred to as channel coding (e.g., operation 502). The arithmetic coding exploits the intimate relationships between spatially homologous blocks within the sub-band tree structure generated in the wavelet transformation 402 described above. The data in the wavelet map is encoded by representing the significance (e.g., with respect to a variable-size quantization threshold), sign and bit plane information of the pixels using a single-bit alphabet. The bit stream is encoded in an embedded form, meaning that all the relevant information of a single pixel at a particular quantization threshold is transmitted as a continuous stream of bits. The quantization threshold depends on the number of bits used to represent the wavelet coefficients. In this embodiment, sixteen bits are used for representing the coefficients. Hence, for the first pass the quantization threshold is set, for example, at 0x8000. After a single pass, the threshold is lowered, and the pixels are encoded in the same or similar order as before, until substantially all the pixels have been processed. This ensures that all pixels are progressively coded and transmitted in the bit stream.

[0064] According to one embodiment, the entropy coded bit stream is further compressed by passing the outputted bits through a context-based adaptive arithmetic encoder 404 (also referred to as a channel encoder), shown as operation 503. This context-based adaptive binary arithmetic coder (CABAC) encodes the bit information depending on the probability of occurrence of a predetermined set of bits immediately preceding the current bit. The context in which the current bit is encoded depends on the nature of the information represented by the bit (significance, sign or bit plane information) and the location of the coefficient being coded in the hierarchical tree structure. The concept of a CABAC is similar in principle to the one specified in ITU-T SG16 WP3 Q.6 (VCEG) Rec. H.264 and ISO/IEC JTC 1/SC 29/WG 11 (MPEG) Rec. 14496-10 (MPEG4 part 10). The difference lies in the context modeling, estimation and adaptation of probabilities. Since the transform and source coding technique of the embodiment is different from ITU-T SG16 WP3 Q.6 (VCEG) Rec. H.264 and ISO/IEC JTC 1/SC 29/WG 11 (MPEG) Rec. 14496-10 (MPEG4 part 10), the coefficients of the embodiment have different statistical characteristics. The CABAC-type entropy coder, as specified in the embodiment, is designed to exploit these characteristics to the maximum.

[0065] In one embodiment, the context is an n-bit data structure with a dynamic range of 0 to 2^n. With every new bit coded, the context variable assigned to the current bit is updated, based on a probability estimation table (PET). Once n bits have been coded, the contents of the context variable are packed into a compact data structure as compressed file 405 and transmitted onward, and the variable is refreshed for the next batch of n bits (e.g., operation 504). In one embodiment, the system uses (9 x m) context variables for each frame: three bit classes over three spatial orientation trees, and all sub-bands over m levels of decomposition.

[0066] According to one embodiment, the decoder, which may reside in the client, may be implemented similarly to the exemplary encoder 400 of Figure 4A, but in a reversed order as shown in Figure 4B.
Exemplary Encoding Hierarchy
[0067] The hierarchical tree structure relating spatially homologous pixels and their descendants can be explained using a parent-child hierarchy. Figure 6 is a diagram illustrating an exemplary pixel map for encoding processing according to one embodiment. Referring to Figure 6, in one embodiment, the root of the tree structure may be made up of the set of all the pixels in the coarsest sub-band, LLk, and the set is labeled as H. In one embodiment, the pixels in set H are grouped in sets of 2x2, or quads. Referring to Figure 6, each quad in set H (e.g., block 601) has four pixels, with all but the top-left member 602 of every quad having four descendants (e.g., blocks 603-605) in the spatially homologous next finer level of decomposition. Thus, for instance, the top-right pixel in a quad has four descendant pixels 604 (in a 2x2 format) in the next finer sub-band with the same spatial orientation (HLk-1 in this case). The relative location of the descendants, too, is related to the spatial orientation of the tree root. The first generation descendants (henceforth labeled as offspring) of the top-right pixel in the top-left quad of set H are the top-left 2x2 quad in HLk-1 (e.g., block 604). Similarly, the offspring of the bottom-right pixel in any quad of set H lie in spatially homologous positions in the HHk-1 sub-band, while the descendants of the bottom-left pixel in any quad of set H lie in spatially homologous positions in the LHk-1 sub-band (e.g., block 603). Descendants beyond the first generation of pixels, and sets (including quads) thereof, are generally labeled as grandchildren coefficients, for example, blocks 606-611 as shown in Figure 6.
[0068] For the encoding process, in one embodiment, a unique data structure records the order in which the coefficients are encoded. Three dynamically linked data structures, or queues, are maintained for this purpose, labeled the insignificant pixel queue (IPQ), the insignificant set queue (ISQ) and the significant pixel queue (SPQ). In one embodiment, each queue is implemented as a dynamic data structure, which includes, but is not restricted to, a doubly linked list or a stack array structure, where each node stores information about the pixel, such as its coordinates, the bit plane number at which the pixel becomes significant, and the type of ISQ entry.

[0069] In one embodiment, three types of sets of transform coefficients are defined to partition the pixels and their descendant trees. However, more or fewer sets may be implemented. The set D(T) is the set of all descendants of a pixel, or an arbitrary set, T, thereof. This includes direct descendants (e.g., offspring such as blocks 603-605) as well as grandchildren coefficients (e.g., blocks 606-608). The set O(T) is defined as the set of all first generation, or direct, descendants of a pixel, or an arbitrary set, T, thereof (e.g., blocks 603-605). Finally, the set L(T) is defined as the set of descendants other than the offspring of a pixel, or an arbitrary set, T, thereof, i.e., L(T) = D(T) - O(T) (e.g., blocks 609-611). In one embodiment, two types of ISQ entries may be defined. ISQ entries of type α represent the set D(T). ISQ entries of type β represent the set L(T).
[0070] In one embodiment, a binary metric used extensively in the encoding process is the significance function, Sn(T). In one embodiment, the significance function gives an output of one if the largest wavelet coefficient in the set T is larger than the current quantization threshold level (e.g., the quantization threshold in the current iteration), or else gives an output of zero. In one embodiment, the significance function may be defined as follows:

Sn(T) = 1, if max over (i,j) in T of |c(i,j)| > 2^n
Sn(T) = 0, otherwise

where T is the set of pixels whose significance is to be measured against the current threshold, 2^n, and c(i,j) is the wavelet coefficient at coordinates (i, j).
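A minimal sketch of this test in Python, assuming the coefficient map is a 2-dimensional array of integers and T is a collection of (row, column) coordinates:

# S_n(T): 1 if any coefficient in the set T exceeds the current
# quantization threshold 2^n, else 0.
def significance(coeffs, T, n):
    threshold = 1 << n
    return 1 if any(abs(coeffs[i][j]) > threshold for (i, j) in T) else 0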
[0071] Figure 8 is a flow diagram illustrating an exemplary encoding process according to one embodiment. The exemplary process 800 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both. For example, exemplary process 800 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.
[0072] Referring to Figure 8, the first phase in the encoding process of the encoder (also referred to as the initialization pass) is the determination and transmission (as a sequence of 8 bits, in binary format) of the number of passes the encoder has to iterate through (block 801). The number of iterations, according to one embodiment, is less than or equal to the number of bit-planes of the largest wavelet coefficient in the current map. The number of iterations needed to code all the bits in a single map is determined by the number of quantization levels. In one embodiment, this is determined using a formula that may be defined as follows:

n1 = ceil(log2(|wmax|))

where wmax is the largest wavelet coefficient in the current map. This number is transmitted (without context) into the byte stream. The coding process is then iterated n1 times over the entire map.
[0073] Initially, at block 802, in one embodiment, all pixels are marked as insignificant against the current threshold level, which may be defined as T = 2^n1. Hence, the IPQ is populated with all the pixels in set H, while the ISQ is populated with all the pixels in set H that have descendants (i.e., in set H, all the pixels in every quad except the top-left one). The SPQ is kept empty and is filled gradually as pixels become significant against the current quantization threshold.

[0074] As a next phase, at block 803, also referred to as a sorting pass, in one embodiment, all the pixels in the IPQ are sorted to determine which ones have become significant with respect to the current quantization threshold. For every entry in the IPQ, the value of the significance function for the current pixel (or entry in the IPQ) is determined, and the value is sent out as the output in the form of a single bit. In one embodiment, as a next operation, if the value of the significance function in the previous operation was one, the sign bit of the pixel entry is sent as the output in the form of a single bit. The output of the sign is 1 if the entry is positive and 0 if the entry is negative. Once all the significant pixels in the set H have been segregated from the insignificant ones, a more complex version of the same sorting pass is performed on the entries of the ISQ.
[0075] In one embodiment, at block 804, if the current entry of the ISQ is of type α (e.g., the class of entries that represents all the descendants of the pixel across all generations), the significance of the set D(T) is transmitted as a single bit. If this output bit is one (e.g., the entry has one or more significant descendants), a similar test is performed for the direct (e.g., first generation) descendants, or offspring, of the entry. For all four offspring of the entry (defined by set O(T)), according to one embodiment, two operations are performed. First, the significance of the offspring pixel is determined and transmitted. As a second operation, if the offspring pixel is significant, the sign of the offspring pixel is transmitted. That is, a value of one is transmitted if it is positive, or a value of zero is transmitted if it is negative. The offspring pixel is then appended to the SPQ. If, however, the offspring pixel is insignificant, it is appended to the IPQ instead.
[0076] Once all the offspring pixels have been tested, the current ISQ entry is retained depending on the depth of the descendant hierarchy. If the entry has no descendants beyond the immediate offspring (L(T) = ∅), the entry is purged from the ISQ. If, however, descendants for the current set exist beyond the first generation, the entry is removed from its current position in the ISQ and appended to the end of the ISQ as an entry of type β (block 805).
[0077] In one embodiment, if the current entry in the ISQ is of type β, the significance test is performed on the set L(T). For every entry in the ISQ of type β, the significance of the set L(T) is tested (e.g., using the significance function) and transmitted as a single bit. If there exist one or more significant pixels in the set L(T), all four offspring of the current ISQ entry are appended to the ISQ as type α entries at block 806, to be processed in future passes. The current entry in the ISQ is then purged from the queue at block 807.
[0078] In one embodiment, the final phase in the coding process is referred to as the refinement pass. At the end of the sorting pass, all the pixels (or sets thereof) that have become significant against the current quantization threshold level up to the current iteration are removed from the IPQ and appended to the SPQ. For every such entry, the iteration number "n", when the entry was appended to the queue (and the corresponding coefficient became significant against the current quantization threshold level), is recorded along with the co-ordinate information. For every entry in the SPQ that was appended to it in a previous iteration (n1, n1 - 1, ..., n + 1), the n-th most significant bit is transmitted. As a final pass, in one embodiment, the value of n is decremented by one, so that the quantization threshold T is reduced by half, and the entire process is repeated for all the entries currently listed in the three queues.

[0079] To achieve additional compression, according to one embodiment, the output of the entropy coder may be passed through a CABAC-type processor. The embedded output stream of the entropy coder has been designed in such a way that the compression is optimized by segregating the bit stream based on the context in which the particular bit has been coded. The bit stream includes the bits representing the binary decisions made during the coding. The bits corresponding to the same decisions are segregated and coded separately. Since the wavelet transformed coefficients are arranged such that the coefficients with identical characteristics are grouped together, the decisions made on the coefficients in a group are expected to be similar or identical. Hence the bit stream generated as a result has longer runs of identical bits, making it more suitable for compression, and achieving a more optimal level of compression.

[0080] Note that the wavelet coefficients "w" have a unique spatial correlation with one another, depending on the sub-band and tree to which they belong. Particularly, such a close correlation exists between the pixels of a single sub-band at a particular level, though the level of correlation weakens across pixels of different sub-bands in the same or different trees and levels. Also note that there is a run-length based correlation between bits that have a similar syntactic relationship. For example, some of the bits in the embedded stream represent sign information for a particular pixel, while others represent significance information. For example, a value of one in this case denotes that the pixel currently being processed is significant with respect to the current quantization threshold, while a zero value denotes otherwise. A third and final class of bits represents refinement bits, which encode the actual quantization error information.
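A condensed sketch of the refinement pass described in paragraph [0078], in Python, assuming each SPQ entry records its pixel coordinates and the iteration number at which it became significant (the entry attributes and the emit_bit callback are illustrative, not from the specification):

# Transmit the n-th bit of every coefficient that became significant
# in an earlier iteration (entries added in the current pass are skipped).
def refinement_pass(spq, coeffs, n, emit_bit):
    for entry in spq:
        if entry.iteration_added > n:  # appended in a previous iteration
            bit = (abs(coeffs[entry.row][entry.col]) >> n) & 1
            emit_bit(bit)              # the n-th most significant bit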
[0081] In one embodiment, each bit in the output stream may be classified based on the nature of the information it represents (3 types) or its location in the sub-band tree map (3*n1 + 1 possible locations, where n1 is the number of levels of decomposition). This gives rise to 3 x (3*n1 + 1) possible contexts in which a bit can exist, and a unique context is used to code an arbitrary bit.
[0082] In order to implement the partitioning of the stream based on contexts, 3 x (3*n1 + 1) context variables act as an interface between the output of the entropy coder and the binary arithmetic coder. Each context variable is an 8-bit memory location, which updates its value one bit at a time, as additional coded bits are outputted. Once eight bits have been outputted, the contents of the memory location represented by the context variable in question are processed using a probability estimation table (PET). After transmitting the arithmetic encoded bits, the contents of the context variable are flushed out, for the next batch of 8 bits.
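One possible way to index these contexts, sketched in Python under the assumption that each bit is characterized by its information class (significance, sign or refinement) and its location in the sub-band tree map (the indexing convention here is illustrative):

# Map a (bit class, tree location) pair to one of the 3 x (3*n1 + 1) contexts.
# bit_class: 0, 1 or 2 (significance, sign, refinement).
# location: 0 .. 3*n_levels in the sub-band tree map.
def context_index(bit_class, location, n_levels):
    n_locations = 3 * n_levels + 1
    return bit_class * n_locations + location

# Example: for a 3-level decomposition there are 3 * (3*3 + 1) = 30 contexts.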
Alternative Source Encoding Schemes
[0083] In another embodiment, which is referred to as arithmetic coding II herein, the wavelet map may be split into blocks of size 32 x 32 (in pixels), and each such block is source coded independently of all other blocks in the wavelet map. For each wavelet map, if the dimensions of the map are not a multiple of 32 in either direction, a set of columns and/or rows is padded with zeros such that the new dimensions of the map are a multiple of 32 in both directions. In one embodiment, the coefficients in each such block are arranged in the hierarchical Mallat format. The number of levels of decomposition may be arbitrary. In one embodiment, the number is five, so that the coarsest sub-band in the Mallat format of each block is one pixel across. Under such a scheme, there are ten partitions, also referred to as bands, with the nine finest sub-bands in the Mallat representation coinciding with the nine finest bands. The coarsest band is constructed by amalgamating the seven coarsest sub-bands in the Mallat format.
[0084] In one embodiment, the bands are numbered in a zigzag manner, similar to the sequence shown in Figure 7. The coarsest band is labeled as band 0, while the next three bands (HL, LH and HH orientations, in that order) are labeled as bands 1, 2 and 3 respectively, and so on. An additional data structure, known as a stripe, may be used to represent a set of 4 x 4 coefficients. Thus, each of bands 0, 1, 2 and 3 is made up of one such stripe. Bands in the second and third levels of decomposition are made up of four and sixteen stripes each, respectively.
[0085] Initially, according to one embodiment, quantization thresholds are assigned to all coefficients in band 0 (the coarsest), as well as all finer bands. There exists a linear progressive relationship between the thresholds assigned to the various coefficients and bands in the wavelet map. The values of the thresholds are arbitrary, and a matter of conjecture and rigorous experimentation. In one embodiment, for a 5-level decomposition, the top-left (coarsest) sub-band (which is a part of band 0) is assigned a particular threshold value (labeled x), while the top-right and bottom-left sub-bands of the same level of decomposition (also part of band 0) are assigned a threshold of 2x, and the threshold for the bottom-right sub-band is 4x.

[0086] Upon graduating to the next (finer) level of the decomposition tree, the threshold for the top-right and bottom-left sub-bands is the same as the threshold value of the bottom-right sub-band of the previous (coarser) level, while the bottom-right sub-band of the current (finer) level has a threshold that is double that value. This process is applied to the assignment of threshold values for all consecutive bands numbered 0 through 9 in the current block. For example, the initial thresholds for the four coarsest pixels in the top-left corner of band 0 are set at 4000h, 8000h, 8000h and 10000h (h denotes a number in hexadecimal notation). Similarly, the four-pixel quartet in the top-right corner of band 0 is assigned a threshold of 10000h, while the quartets in the bottom-left and bottom-right corners of band 0 are assigned thresholds of 10000h and 20000h respectively. With each successive iteration of the source coding scheme, the threshold for each sub-band and band explained above is reduced by half.
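A sketch of this progressive assignment in Python, assuming a base threshold x for the coarsest LL sub-band and the doubling rule described above (the function name and the dictionary layout are illustrative):

# Per-level thresholds: at the coarsest level the LL quad gets x, the
# HL/LH quads 2x and the HH quad 4x; at each finer level, HL/LH inherit
# the previous level's HH threshold and HH doubles it.
def band_thresholds(x, n_levels):
    levels = [{"LL": x, "HL": 2 * x, "LH": 2 * x, "HH": 4 * x}]
    for _ in range(1, n_levels):
        prev_hh = levels[-1]["HH"]
        levels.append({"HL": prev_hh, "LH": prev_hh, "HH": 2 * prev_hh})
    return levels

# Example: band_thresholds(0x4000, 2) yields 0x4000, 0x8000, 0x8000 and
# 0x10000 for the coarsest quads, and 0x10000, 0x10000, 0x20000 at the
# next finer level, matching the values given above.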
[0087] In one embodiment, the coding scheme includes four passes, labeled 0 to 3. In pass 0, in one embodiment, the decision on the significance of the current band is considered. In one embodiment, it may be assumed that the coarsest band (band 0) is always significant, since all the coefficients in the band are likely above the current threshold. For all other bands, if any one coefficient is above the current threshold for the particular band/sub-band, the current band is marked as significant. An extra bit (e.g., 1) is transmitted to the output stream to represent a significant band. If the current band has already been marked as significant, then no further action is necessary.
[0088] In pass 1, in one embodiment, the decision on the significance of the stripes is considered. Each stripe is a set of 4 x 4 pixels, and each set of 2 x 2 pixels in the stripe has a hierarchical parent-child relationship with a homologous pixel in the previous coarser sub-band with the same orientation. Thus, each stripe has a parent-child hierarchical relationship with a 2 x 2 quad that is homologous in its spatial orientation in the previous coarser sub-band (see Fig. 11).

[0089] A stripe is designated as significant if its 2 x 2 quad parent (as explained above) is significant, or if the band within which the stripe resides has been marked as significant (in pass 0). A parent quad is marked as significant if one or more of the coefficients in the quad is above the current threshold level for the band in which the quad resides.
[0090] In pass 2, in one embodiment, the significance information of the individual pixels in the current stripe, along with their sign information, is considered. As an initial operation, the number of pixels in the current stripe that are significant is recorded. This information is used to determine which context variable is to be used to code the significance information of the pixels in the current stripe (see the discussion on CABAC above). For every pixel whose absolute value is above the current threshold for the current band and stripe, a binary 1 is transmitted, followed by a single bit for the sign of that coefficient (1 for a positive coefficient, or 0 for a negative coefficient). If the current coefficient is insignificant, a 0 is transmitted, and its sign need not be checked. This test is performed on all 16 pixels in the current stripe, and is repeated over all the stripes in the current band, and for all bands in the current block of the wavelet map.
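A minimal sketch of pass 2 on a single 4 x 4 stripe in Python, assuming the stripe is given as a flat list of 16 coefficients and emit_bit writes one bit to the output stream (both assumptions are illustrative):

# Significance and sign coding for the 16 coefficients of one stripe.
def pass2_stripe(stripe, threshold, emit_bit):
    # The count of significant pixels selects the context variable
    # used for coding (see the CABAC discussion above).
    n_significant = sum(1 for c in stripe if abs(c) > threshold)
    for c in stripe:
        if abs(c) > threshold:
            emit_bit(1)                   # significant coefficient
            emit_bit(1 if c >= 0 else 0)  # sign: 1 positive, 0 negative
        else:
            emit_bit(0)                   # insignificant; no sign bit
    return n_significant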
[0091] In pass 3, in one embodiment, the refinement information for each pixel in the current block is transmitted. For every band, each pixel is compared against the threshold level for the particular band and stripe. If the absolute value of the coefficient is above the threshold level for the current band and stripe, then a 1 (bit) is transmitted, else a 0 is transmitted.
[0092] In one embodiment, the first three passes (pass 0 to 2) are nested within each other for the current block, band and stripe. Thus, pass 0 is performed on every band in the current block, with the bands numbered sequentially in a zigzag fashion, and tested in that order. For each band, pass 1 is performed on all the stripes in the band in a raster scan fashion. Within the stripe, pass 2 is performed on every coefficient of the current stripe, also in raster scan mode. Pass 3 is performed on all the coefficients of the block, without consideration of the sequence of bands, or of stripes within the bands.
[0093] At the end of each iteration of the four-pass routine explained above, the process is repeated for all blocks in the wavelet map. Once all the blocks in the wavelet map have been tested in this fashion, the process is repeated from the first block onwards, with the threshold levels for all bands in the block reduced to half their previous values. This continues until all the pixels have been coded down to their least significant bit plane or the bit budget is completely exhausted.
Exemplary Motion Prediction Schemes
[0094] In one embodiment, a fast and efficient motion prediction scheme is made to take optimal advantage of the temporal redundancy inherent in the video stream. In one embodiment of the scheme, the spatial shift in a wavelet coefficient's location is tracked using an innovative, fast and accurate motion prediction routine, in order to exploit the temporal redundancy between the wavelet coefficients of homologous sub-bands in successive frames of a video clip.

[0095] It is well known that in multi-resolution representations, every sub-band, or sub-image, in the entire wavelet map for each frame in the video clip represents a sub-sampled and decimated version of the original image. To track motion between consecutive frames, a feedback loop is introduced in the linear signal flow path.

[0096] Figures 9A-9B and 10A-10B are block diagrams illustrating exemplary encoding and decoding processes according to certain embodiments of the invention. The overall motion in the original image is tracked by following the motion of homologous blocks of pixels in every sub-band of consecutive frames. In order to speed up the process, according to one embodiment, motion is tracked only in the luma (Y) map, while the same motion prediction information is used in the two chroma (Cr and Cb) maps. This works relatively well, since it can be assumed that chroma information follows changes in the luma map fairly assiduously. Within the luma (Y) map, in one embodiment, a full-fledged search of the entire search space is performed only in the four coarsest sub-bands as shown in Figure 6, while this information is scaled and refined using a set of affine transformations, for example, in the six finer sub-bands. This saves a considerable amount of bandwidth, due to the smaller number of bits that now need to be coded and transmitted to represent the motion information, without any significant loss of fidelity.

[0097] In one embodiment, three categories of frames are decided upon, for labeling purposes, depending on the type of prediction the frame shall undergo to remove any inherent temporal redundancies using any of the motion prediction routines outlined herein. Current frames that do not need to be predictively coded for temporal redundancies are labeled as intra-coded frames (I-frames). Frames that are coded using information from previously coded frames are called predicted frames (P-frames). Frames that need previously coded frames, as well as frames that come after the current frame, are called bi-directional frames (B-frames).

[0098] In one embodiment, referring to Figures 9A and 9B, the luma (Y) map of the current frame may be encoded using the arithmetic coding I/II scheme with a target bit-rate. Once the bit budget is exhausted (e.g., a number of bits encoded that will be transmitted within a period of time determined by the target bit rate), or all the bit-planes have been coded, the coding is stopped, and the similar reverse procedure (called inverse arithmetic coding I/II) is executed to recover the (lossy) version of the luma (Y) component of the wavelet map of the current frame. The version of arithmetic coding to be used here is similar to or the same as the version used in the forward entropy coder described above.
[0099] Figure 11 is a flow diagram illustrating an exemplary process for motion prediction according to one embodiment. The exemplary process 1100 may be performed by processing logic that may include hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system, a server, or a dedicated machine), or a combination of both. For example, exemplary process 1100 may be performed by a server component (e.g., server suite), such as, for example, server 101 of Figure 1 or server 201 of Figure 2.

[00100] Referring to Figure 11, for the first frame (and any successive frames which have been classified as I-frames in the video sequence), the recovered wavelet map is buffered as the reference frame, for use as a reference for the next frame in the sequence (block 1101). In one embodiment, the second frame is read and decomposed using the n-level wavelet decomposition filter-bank, to generate a new current frame. A unique search-and-match algorithm is performed on the wavelet map to keep track of pixels, and sets thereof, which have changed their location due to general motion in the video sequence. In one embodiment, the search algorithm is referred to as motion estimation (ME), while the match algorithm is referred to as motion compensation (MC).
[00101] As a first operation of the ME/MC routine, in one embodiment, a lower threshold is set, to determine which coefficient values need to be tracked for motion, and eventually compensated for. In the predicted frames (P or B or otherwise), most coefficients in the finer sub-bands are automatically quantized to zero, while most of the coefficients in the coarsest sub-bands are typically not quantized to zero. Hence, it makes sense to determine the largest coefficient in the intermediate (level 2) sub-bands during the encoding process which is quantized down to zero during the lossy reconstruction process, and use that as a lower threshold (also referred to as loThres).
Exemplary Coarse Motion Prediction
[00102] As the next operation, a traditional search-and-match is performed on the four coarsest sub-bands of the wavelet maps of the reference and current frames (block 1102). The motion prediction routine performed on these sub-bands involves a simple block search-and-match algorithm on homologous blocks of pixels. This operation identifies the blocks where motion has occurred. The amount of motion is determined and compensated for. This reduces the amount of information that is required to be transmitted, hence leading to better compression.

[00103] To estimate the motion of a block of pixels between the current and the reference wavelet maps, at block 1103, a block neighborhood is defined around the block of pixels in the reference map whose motion is to be estimated (which is called the reference block), as shown in Figure 12.
[00104] For the four coarsest sub-bands, the dimensions of the sub-bands are equal to k = (w/2^n) and l = (h/2^n), where w and h are the width and height of the original image with an n-level decomposition. The depth of the neighborhood around the pixel block is usually set equal to k and l respectively, though a slightly lower value (e.g., k-1 and l-1) performs equally well.
[00105] For blocks of pixels along the edge of the sub-band, the neighborhood region spills over outside the sub-band. In one embodiment, for such a block, an edge extension zone is used to create the block neighborhood. In order to smoothen the signal contours and reduce the dynamic range of the wavelet coefficients along the image edges, in one embodiment, a mirroring scheme is used to create the edge extension zone. Along horizontal edges of the block, columns of pixels in the neighborhood zone are filled with pixels in the same column along the horizontal edge of the block, in reverse order. Thus, the pixel in the neighborhood zone closest to the edge is filled with the value of the pixel directly abutting it in the same column, inside the block. This process continues with pixels equidistant from the block boundary, until all the pixels in the current column of the neighborhood zone of the block are filled up. This process is repeated for all columns abutting both horizontal edges of the block. A similar process is repeated with abutting rows along the vertical boundary of the block. For pixels abutting the corner of the block, the value of the pixel in the respective corner of the reference block is replicated.

[00106] Once the block neighborhood has been demarcated around the reference block, and the corresponding edge extension zone populated using the scheme outlined above, the actual search routine is performed.
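A sketch of the mirroring rule for one horizontal edge in Python, assuming the block is a 2-dimensional array (list of rows) and d is the extension depth; corner replication is omitted for brevity:

# Mirror the first d rows of the block above its top edge.
# The extension row closest to the edge copies the row directly abutting
# it inside the block, the next copies the row one further in, and so on.
def extend_top_edge(block, d):
    mirrored = [list(block[r]) for r in range(d - 1, -1, -1)]
    return mirrored + [list(row) for row in block]

# Example: extend_top_edge([[1, 2], [3, 4]], 2)
# returns [[3, 4], [1, 2], [1, 2], [3, 4]].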
[00107] The blocks of pixels in the current as well as the reference frames that are in the same relative position in homologous sub-bands are used for the ME routine. The block of pixels in the current frame which is to be matched is called the current block. The region encompassed by the block neighborhood around the reference block can be viewed as being made up of several blocks having the same dimensions as the reference (or current) block.
[00108] The metric used to measure the objective numerical difference between the current block and any block of the same size in the neighborhood of the reference block is the popular L1, or mean absolute error (MAE), metric. To measure the metric, according to one embodiment, a block of pixels with the same dimensions as the current block is identified within the neighborhood zone. The difference between the absolute values of corresponding pixels in the two blocks is computed and summed. This process is repeated for all such possible blocks of pixels within the neighborhood region, including the reference block itself (block 1104).

[00109] One important aspect of the search technique is the order in which the search takes place. Rather than using a traditional raster scan, according to one embodiment, an innovative outward coil technique is used. In this modification, the first block in the current neighborhood in the current sub-band of the reference frame to be matched with the current block (of the homologous sub-band in the current frame) is the reference block itself. Once the reference block has been tested, all the blocks which are at a one-pixel offset on all sides of the reference block are tested. After the first iteration, all blocks that are at a two-pixel offset from the reference block are tested. In this fashion, the search space progressively moves outwards, until all the blocks in the current neighborhood have been tested.

[00110] The particular block within the neighborhood region that possesses the minimum MAE is of special interest to the current system (also referred to as the matching block). This is the block of pixels in the reference (previous) frame which is closest, in terms of absolute difference, to the current block of pixels in the current frame.
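A compact sketch in Python of the MAE computation and the outward-coil scan order, assuming blocks are given as equal-sized lists of rows (the ring-offset generator is an illustrative rendering of the search order described above):

# Mean absolute error (L1 metric) between two equal-sized blocks.
def mae(block_a, block_b):
    n = len(block_a) * len(block_a[0])
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b)) / n

# Candidate displacements in outward-coil order: the reference position
# first, then all offsets at a 1-pixel ring, then a 2-pixel ring, etc.
def spiral_offsets(radius):
    yield (0, 0)
    for r in range(1, radius + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if max(abs(dy), abs(dx)) == r:
                    yield (dy, dx)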
[00111] To record the relative displacement of the matching block from the current (or reference) block, in one embodiment, a unique data structure, also referred to as a motion vector (MV), is utilized. The MV of the block being tested contains information on the relative displacement between the reference block and the matching block, both within the reference frame (e.g., a previous or future frame). The top-left corner of each block is chosen as the point of reference to track matching blocks. The relative shift between the coordinates of the top-left corner of the reference block and that of the matching block is stored in the motion vector data structure. The motion vectors in the LLk sub-band are labeled V1, while the motion vectors in the three other coarsest sub-bands are labeled V2^o, where o is the orientation (HL, LH or HH), as shown in Figure 12.

[00112] After computing the motion vector, the data is transmitted without context through a baseline binary arithmetic compression algorithm (also referred to herein as the 'pass through mode'). In one embodiment, a hierarchical order is followed while transmitting motion information, especially the motion vector data structure. Motion vector information, both absolute (from coarser levels) and refined (from finer levels), according to one embodiment, has a hierarchical structure. The motion vectors corresponding to blocks that share a parent-child relationship along the same spatial orientation tree have some degree of correlation, and hence may be transmitted using the same context variable.
[00113] After the motion prediction process is performed as explained above, according to one embodiment, the pixel values of the matching block in the reference frame are subtracted from the homologous block in the current frame, and the result of each operation is used to overwrite the corresponding pixel in the current block. This difference block, also referred to as the compensated block, replaces the current block in the current frame. This process is referred to as motion compensation (MC).

[00114] In order to ensure that the compensation process is applied only to coefficients that contribute substantially to the final reconstructed image in the decoder, according to one embodiment, the previously defined lower threshold (loThres) is used to perform motion compensation only on such coefficients. If it is observed that the compensated coefficient value is lower than the loThres value, according to one embodiment, the compensated coefficient may be quantized down to zero. This ensures that only those coefficients that make some significant contribution to the overall fidelity of the reconstructed frame are allowed to contribute to the final bit rate. The above ME/MC process is repeated over all 2 x 2 blocks in the four coarsest sub-bands of the current and reference wavelet maps.
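A minimal sketch of the compensation step with the lower-threshold rule in Python, assuming equal-sized current and matching blocks given as lists of rows (loThres is the threshold set in paragraph [00101]):

# Subtract the matching block from the current block; compensated
# values below the lower threshold are quantized down to zero.
def compensate_block(current, matching, lo_thres):
    compensated = []
    for row_c, row_m in zip(current, matching):
        row = []
        for c, m in zip(row_c, row_m):
            d = c - m
            row.append(0 if abs(d) < lo_thres else d)
        compensated.append(row)
    return compensated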
Exemplary Refined Motion Prediction
[00115] The process is slightly modified for all remaining finer sub-bands in the wavelet maps of the current and reference frames. For all but the four coarsest sub-bands, in one embodiment, a refinement motion prediction scheme may be implemented by applying an affine transformation to the motion vectors corresponding to the homologous blocks in the coarsest sub-bands, and applying a regular search routine over a limited area in the region around the displaced reference block, as shown in Figure 12.
[00116] Due to the parent-child hierarchy between pixels of successively finer sub-bands, the relative position of the reference block in the finer sub-bands is closely related to the reference blocks in the coarsest sub-bands. For example, the descendants of the top-left 2 x 2 block of pixels in the HL3 sub-band include the 4 x 4 block of pixels in the top-left corner of HL2, and the 8 x 8 block of pixels in the top-left corner of HL1, as shown in Figure 6. Note that the size of a reference block along both dimensions is twice that of a homologous reference block in the previous coarser sub-band. Intuitively, the size of a motion vector in the finer sub-band may be assumed to be twice that of the motion vector in a homologous coarser sub-band. This provides a very coarse approximation of the spatial shift of the pixels in the reference block in the sub-band. To further refine this approximation and track the motion of pixels in finer sub-bands more accurately, according to one embodiment, a refined search-and-match routine is performed on reference blocks in the finer sub-bands.
[00117] As a first operation in the refined motion prediction routine, in one embodiment, the dimensions of the reference block are determined; they depend upon the level of the sub-band where the reference block resides. For example, reference blocks in level 2 are of size 4 x 4, while those in level 3 are of size 8 x 8, and so on. In other words, the size of a reference block along both directions is twice that of the reference block in the immediately preceding (coarser) sub-band. In one embodiment, a block with the same or similar dimensions as the reference block in a particular level, shifted by a certain amount along both cardinal directions, is identified.

[00118] The amount of displacement depends on the level where the reference block resides, as shown in Figure 12. For example, referring to Figure 12, in level 2 sub-bands, the approximate displacement is 2 * Vk^o, where Vk^o is the motion vector for a homologous reference block in the coarsest (level 1) sub-band. Thus, the new reference block is displaced by 2 * Vk^o from the original reference block. A search region, which is identical to the neighborhood zone around the reference block defined earlier, is defined around the new reference block, along with edge extension if the block happens to abut the sub-band edge. The depth of the neighborhood zone depends on the level of decomposition. In one embodiment, it has been set at 4 pixels for level 2 sub-bands, 8 for level 3 sub-bands, and so on.

[00119] For sub-bands in any intermediate levels, the refined search-and-match routine is implemented in a manner that is similar or identical to the search-and-match routine for the level 1 (coarsest) sub-bands, as described above. The refined motion vector, labeled Δk^o (where k = 2n - l, n = total levels of decomposition and l = current level of decomposition), is transmitted in a manner similar to the motion vectors of coarser sub-bands (see Fig. 12). The (resultant) corrected motion vector, Vk^o, pointing to the net displacement of the matching block, is given by adding the approximate (scaled) motion vector, 2 * Vk-1^o, and the refinement vector, Δk^o.

[00120] For sub-bands of the finest level, the approximate motion vector (to account for the doubling of the dimensions of the reference block) is given by Vk^o + (2 * Vk-1^o). A block that is displaced from the original reference block by the approximate motion vector is then used as the new reference block. The depth of the neighborhood zone around this block is set at twice the size of that set in the immediately coarser level. The new refined motion vector, Δ2k^o, thus obtained is transmitted in a manner similar to that of coarser levels (see Fig. 12).

[00121] The motion compensation (MC) routine for the refined motion prediction algorithm performed on the finer sub-bands is similar or identical to the process outlined for the coarsest sub-bands. The matching block, pointed to by the refined motion vector, is subtracted pixel-by-pixel from the current block (in the homologous position in the current frame), and the difference is overwritten in the location occupied by the current block. This block is now called the compensated block (as described above for coarser sub-bands).
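A small sketch in Python of the vector relationship in paragraph [00119], treating motion vectors as (dy, dx) pairs (the function name is illustrative):

# Corrected vector for a finer sub-band: the coarser-level vector
# scaled by two, plus the refinement found by the limited search.
def corrected_motion_vector(coarse_mv, delta):
    return (2 * coarse_mv[0] + delta[0], 2 * coarse_mv[1] + delta[1])

# Example: a coarse vector of (1, -2) refined by (0, 1) yields (2, -3).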
[00122] After all the current blocks in the current frame have been compensated, the new frame is called the compensated frame. This compensated (difference) frame (also called the predicted frame) is then source and arithmetic coded using the source entropy/arithmetic coder, and the bit stream is transmitted over the transmission channel (e.g., blocks 403-405 of Figure 4A).
[00123] The source coding and motion compensation feedback loop for predicted frames is similar to the process employed for intra-frames, with some minor modifications. It is well known that the statistical distribution of coefficient values in a predicted frame is different from the one found in intra-coded frames. In the case of intra-coded frames, the energy compaction property of the wavelet filter ensures that a majority of the energy is concentrated in the four coarsest sub-bands, because throughout that process the data retains the non-deterministic statistical properties of real-time visual signals, such as video sequences. In the case of predicted frames, however, only the spatially variant difference values of the pixels are stored, and these coefficients lack the entropy of a real video clip. Hence, the superior energy compaction of the predicted wavelet map cannot be taken for granted.
[00124] In one embodiment, for an intra-coded map, going along a particular orientation tree, the coarsest sub-band has the largest mean and variance of coefficient values, and these statistics decrease along a logarithmic curve towards finer levels. Such a "downhill" contour maintains the high level of energy compaction in the wavelet map. This "top-heavy" distribution contributes to the high coding efficiency and gain of the source coder. However, the first and second statistical moments of these sub-bands are not so intimately related in predicted wavelet maps. To simulate the "nearly-logarithmic curve" relationship in the predicted maps, according to one embodiment, the wavelet coefficients of the finer sub-bands in a predicted map may be scaled down from their original values. This forces the contour plot into a more "logarithmic" fit, thereby simulating the statistical distribution of wavelet coefficients that makes the compression of intra-coded wavelet maps so efficient. According to one embodiment, this scaling is reversed in the decoding process. According to one embodiment, scaling factors of 8, 16 and 32 are applied to the finest sub-bands (other than the LLk sub-band) along a particular tree orientation for a three-level decomposition.
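A minimal sketch of this scaling step follows, assuming the cited factors of 8, 16 and 32 map to the three detail levels from coarser to finer (the exact assignment is an assumption) and that the LL sub-band is excluded:

    import numpy as np

    def scale_predicted_map(detail_bands, factors=(8, 16, 32)):
        """Scale down the detail sub-bands of a predicted (difference) map.

        detail_bands: dict {level: [sub-band arrays]}, level 1 being the
        coarsest detail level of a three-level decomposition; the LL
        sub-band is not scaled.  The factor-to-level mapping is assumed.
        """
        return {lvl: [band.astype(np.float64) / factors[lvl - 1]
                      for band in bands]
                for lvl, bands in detail_bands.items()}

    def unscale_predicted_map(detail_bands, factors=(8, 16, 32)):
        """Inverse scaling, applied at the decoder to undo the fit."""
        return {lvl: [band * factors[lvl - 1] for band in bands]
                for lvl, bands in detail_bands.items()}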
[00125] In one embodiment, a group-of-frames (GOF) is defined as a temporally contiguous set of frames, beginning with an intra-coded frame, and succeeded by predicted (B or P or otherwise) frames. At the end of a GOF, an intra-coded frame signals the beginning of a new GOF.
[00126] An important facet of rate control is to ensure that intra-coded frames are introduced only when needed, due to their inherently higher coding rates. In one embodiment, the two events that warrant the introduction of an intra-coded frame are a fall in the average frame PSNR below acceptable levels and/or a scene change in the video clip. Due to the accurate motion prediction routine used by the system, the average PSNR of the frame is less likely to fall below a previously accepted threshold (thereby ensuring good subjective quality throughout the entire video sequence).

[00127] Since most data processing and heuristic decisions are made in a multi-resolution domain, a change in scenery is also detected based on the distribution of pixel values in the coarsest (LLk) sub-band. As a pre-processing operation prior to motion prediction and source/entropy coding, the coarsest sub-bands of the two frames on which the motion prediction routine is to be performed are compared.
[00128] In one embodiment, the absolute difference of homologous pixels in the LLk sub-band is computed and compared against a threshold. This threshold is determined by experimentation on a wide range of video clips. In a particular embodiment, a value of 500 is suitable for most purposes. This absolute differencing operation is performed on all coefficients of the coarsest sub-band, and a counter keeps track of the number of cases where the absolute difference exceeds the threshold. If the number of pixels for which the absolute difference exceeds the threshold is above or equal to a predetermined level, it can be assumed that there has been such a drastic change in the scenery of the video frame as to warrant the introduction of an intra-coded frame, thereby marking the end of the current GOF and the beginning of a new one. The numeric level, herein labeled the scene change factor (SCF), that determines a scene change is a matter of experimentation. In one embodiment, a value of 50 is suitable for most cases.

[00129] In one embodiment, a technique is employed to ensure that only those matching blocks (within a sub-band) that satisfy certain minimum and maximum threshold requirements are compensated and coded. This technique is called adaptive thresholding. In one embodiment, during the actual block matching routine, the first block to be compared with the current block is the reference block. For every other block in the neighborhood zone, the difference between its MAE and the MAE of the reference block is compared against a threshold. If the difference in the MAE values of the two blocks is less than the threshold value, the match is discarded, and the reference block continues to be regarded as the best match.

[00130] The threshold value may be determined by experimentation, and is different for different levels of the wavelet tree structure. At the coarser levels (higher sub-bands) the coefficients are average values, while at the finer levels (lower sub-bands) the coefficients are difference values. Average values are larger than difference values; hence, for the LL sub-band the threshold value is higher than for other sub-bands. All the sub-bands at a given decomposition level have the same quantization value, and the value decreases going down the decomposition levels. Once a match has been found, according to one embodiment, the energy of the current block (in the current frame) may be compared with the energy of the compensated block (obtained by differencing homologous pixels of the current block in the current frame and the matching block in the reference frame). The energy in this case is a simple first-order metric obtained by summing the coefficient values of the particular compensated block. If this energy value is greater than the corresponding energy value of the current block, the compensated block is discarded and the current block is used in its place in the compensated (residual) frame. Similar to the previous threshold case, according to one embodiment, the value of the current threshold level may be determined through extensive experimentation, and is different for the various levels of the wavelet pyramid.
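The scene-change test of paragraph [00128] reduces to a few lines; the sketch below uses the example values cited in the text (a difference threshold of 500 and an SCF of 50), both of which are experimentally tuned parameters:

    import numpy as np

    def is_scene_change(ll_cur, ll_ref, diff_threshold=500, scf=50):
        """Scene-change test on the coarsest (LL) sub-bands of two frames.

        Counts coefficients whose absolute difference exceeds
        diff_threshold; if at least scf such coefficients exist, a scene
        change is declared and a new intra-coded frame (hence a new GOF)
        should be started.
        """
        diff = np.abs(ll_cur.astype(np.int64) - ll_ref.astype(np.int64))
        return int((diff > diff_threshold).sum()) >= scf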
Exemplary Motion Prediction Modes
[00131] The motion prediction routine used in certain embodiments is referred to herein as bi-directional multi-resolution motion prediction (B-MRMP). In one embodiment, motion is estimated from a previous as well as a succeeding frame. The temporal offset between past, current and future frames used for motion prediction is a matter of conjecture. In one embodiment, a temporal offset of one is usually applied for best results. In one embodiment, frames are read and wavelet transformed in pairs. In such a scenario, three popular sequence modes are possible.

[00132] In the first mode, also referred to as BP mode, the first frame in the pair is the bi-directionally predicted frame, where each block in each sub-band of this frame that undergoes the motion prediction routine is tested against a homologous block in both a previously coded (reference) frame and a future (P or otherwise) frame. In one embodiment, the frame data is read and wavelet transformed in the natural order. However, the (succeeding) P frame is motion predicted before the B frame. The P frame is predicted by applying the motion prediction routine using the second frame of the last pair of frames (e.g., the reference frame). The frame is then reconstructed and compensated using the motion prediction techniques, to recover a lossy version of the frame. Each block in the B frame is now motion predicted with homologous blocks from both the (past) reference frame and the (future) P frame. If estimation/compensation with the reference block from the (past) reference frame gives a lower-energy compensated block, the particular block is compensated using that reference block; otherwise, compensation is carried out using the reference block of the (future) P frame.
[00133] In the finer sub-bands of the B frame, the decision to use one of the two frames (past reference or future P) for compensation is based on the frame used for this purpose in the parent blocks in the four coarsest sub-bands.

[00134] While recording and transmitting the motion information of the B frame, an array stores the identity of the frame (past reference or future P) used in the compensation process, using a 2-bit alphabet. This information for all blocks in the frame is transmitted with context over the channel prior to other motion information. The advantage of using B frames is that they do not need compensation and reconstruction in the motion prediction feedback loop, since they are less likely to be used as reference frames to predict future frames. Thus this routine passes through the feedback reconstruction loop in the encoding process for only half the non-intra-coded frames, compared with other systems, thereby saving a considerable amount of processing time.
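A minimal sketch of the per-block direction decision follows; the sum-of-absolute-values energy measure and the specific 2-bit codes are assumptions for illustration:

    import numpy as np

    def compensate_b_block(cur_block, past_match, future_match):
        """Pick the compensation direction for one block of a B frame.

        The residual against the past reference frame and against the
        future P frame are compared by a simple first-order energy
        measure (sum of absolute values, an assumption); the
        lower-energy residual is kept, and a 2-bit mode code records
        which frame was used.
        """
        res_past = cur_block.astype(np.int32) - past_match.astype(np.int32)
        res_future = cur_block.astype(np.int32) - future_match.astype(np.int32)
        if np.abs(res_past).sum() <= np.abs(res_future).sum():
            return res_past, 0b00  # compensated from the past reference frame
        return res_future, 0b01    # compensated from the future P frame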
[00135] In the second mode, also referred to as PI mode, the first frame in the pair is predictively coded using the second frame of the previous pair of frames as reference. The intra-coded frame in the latter part of this pair is used as the reference for the next pair of frames.
[00136] In the third mode, the first frame is an intra-coded frame and is used as the reference for the (unidirectional) motion prediction of the second frame in the pair. Once the second (P) frame has been compensated, it is reassigned as the new reference frame for the next pair of frames.
[00137] In another embodiment, the motion prediction is performed using a single predicted frame, also referred to as uni-directional multi-resolution motion prediction (U-MRMP mode). In this scheme, all the operations outlined previously for B-MRMP are performed using a single (previous) predicted frame, known simply as the P frame. Since only one frame needs to be read at a time, this technique requires
less latency, though the motion prediction is not as accurate. Since all non-intra-coded frames are predicted from a previous (I or P) frame, there is no need to send a stream of mode bits, as described above.
[00138] In another embodiment, the motion compensation (MC) scheme may be replaced with motion block superposition (MBS). In the MBS scheme, the motion estimation is performed as described above. However, the arithmetic encoding scheme is highly inefficient in coding predicted (error) maps (B and P). Due to the skewed probability distribution of coefficients in B and P frames, they do not satisfy the top-heavy tree structure assumptions made in the case of arithmetic coding. This results in several of the large coefficients being interspersed in the finer sub-bands, causing the iterative mechanism of arithmetic coding to loop through several bit planes before these isolated coefficients have been coded for higher fidelity.

[00139] In one embodiment, one way to resolve this problem is to avoid working on error maps altogether. In this scheme, the arbitrary GOF size is replaced by a GOF of fixed size. In certain embodiments, the number of frames in the GOF may be equal to the number of frames per second (e.g., a new GOF every second). In case of rapid scene changes, where an intra-coded frame may be inserted within the 1-second duration between two intra-coded GOF end-markers (to mark the beginning of a new scene), a new GOF is defined from this new I frame. After the motion estimation routine has determined the spatial location of the matching block relative to the reference block, according to one embodiment, the coefficient values of the current block (in the current frame) are replaced with the homologous pixels of the matching block in the reference frame. This saves time by not computing the difference of the two blocks, and also maintains the general statistics of an intra-coded frame. In effect, this results in the blocks of the first intra-coded frame in the current GOF being moved around within a limited region, like a jigsaw puzzle, with the motion being represented using only the corresponding motion vectors. In this scheme, only the motion vectors need to be transmitted, and the predictively coded maps are no longer source coded, since the blocks in each predictively coded frame are essentially made up of blocks translated and lifted from the first intra-coded frame of the current GOF. These operations are performed on all three maps of each B or P frame.

[00140] Once the superposition from the reference to the current frame is completed (to create the Superimposed Frame, in lieu of the Compensated Frame described above), this acts as the reference frame for the future frames. Since the actual pixel values in the first I frame of the current GOF are simply translated around their positions throughout the entire process, there may be no need to encode the superimposed frame using arithmetic coding and transmit the bits to the decoder. Thus, the new system saves time and bandwidth by not encoding and transmitting bits from the superimposed frames.
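The core of the MBS scheme, copying rather than differencing, can be sketched as follows (bounds checking and edge extension omitted; all names hypothetical):

    def superimpose_block(cur_frame, ref_frame, block_xy, size, mv):
        """Motion block superposition (MBS): copy instead of difference.

        The current block is overwritten with the homologous pixels of
        its matching block in the reference frame, so each predicted
        frame is a jigsaw of translated blocks from the GOF's first I
        frame and only motion vectors need to be transmitted.
        """
        by, bx = block_xy
        my, mx = mv
        cur_frame[by:by + size, bx:bx + size] = \
            ref_frame[by + my:by + my + size, bx + mx:bx + mx + size]
        return cur_frame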
[00141] In another embodiment of MBS, only the four coarsest sub-bands of the luma (and/or chroma) map(s) of the predictively coded frames are source encoded using arithmetic coding. This is done by duplicating these four sub-bands in an n-level sub-band map (similar to the normal maps), and padding all other sub-bands with zeros. Motion estimation (ME) and motion compensation (MC/MC+) are performed, as described above, and the compensated frame is encoded using arithmetic coding. This ensures greater reliability in tracking motion compared to the previous embodiment, but at the cost of higher bit rates.
[00142] As described above, the choice of the modes of motion prediction is a matter of open conjecture, and some diligent experimentation. In one embodiment, a threshold, also referred to as the motion information factor (MIF), may be used to decide on the mode in which the current and future frames are to be temporally coded.
[00143] In one embodiment, two independent thresholds are used to compute the MIF. Coefficients in the sub-bands in the wavelet map may be used for this purpose. The decision tree to classify blocks based on the average amount of motion is based on the segregation of the coefficients into three categories. For blocks whose total energy after compensation is greater than the energy of the original current block itself, the corresponding motion vector co-ordinates are set to a predetermined value, such as, for example, a value of 127. The other two categories of blocks have motion vectors with both coordinates equal to a value other than the predetermined value: those whose motion vector is zero and those whose motion vector is non-zero. For convenience, these blocks are labeled NC (non-compensated), Z (zero) and NZ (non-zero), respectively.
[00144] The first threshold is set for the four coarsest sub-bands in the wavelet map. We denote the total number of NC blocks by the factor α and the total number of NC and Z blocks by the factor β. In one embodiment, if α is less than 10% of the value of β, then the particular frame is repeated. Otherwise, motion prediction (B-MRMP) is performed.

[00145] A similar test with the same test parameters (α and β) is performed on the remaining finer sub-bands. In one embodiment, if α is less than 10% of β, motion block superposition (MBS) is performed. Otherwise, motion prediction (B-MRMP) is performed.
[00146] The threshold factor and the number of sub-bands to be used in either test are a matter of conjecture and diligent experimentation. In one embodiment, 4 (out of a possible 10) sub-bands are used for the first test and the remaining 6 are used for the second test, with a threshold factor of 10% in either case.
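One plausible reading of the two MIF tests is sketched below; how the two tests combine when they disagree is not fully specified in the text and is an assumption here:

    def select_temporal_mode(coarse_labels, fine_labels, factor=0.10):
        """Sketch of the MIF-based temporal coding decision.

        Block labels: 'NC' (non-compensated), 'Z' (zero motion vector),
        'NZ' (non-zero motion vector).  alpha = number of NC blocks,
        beta = number of NC and Z blocks.  The first test runs on the
        four coarsest sub-bands, the second on the remaining finer
        sub-bands, with a 10% threshold factor in either case.
        """
        def alpha_beta(labels):
            alpha = sum(1 for x in labels if x == 'NC')
            beta = sum(1 for x in labels if x in ('NC', 'Z'))
            return alpha, beta

        alpha, beta = alpha_beta(coarse_labels)
        if beta and alpha < factor * beta:
            return 'REPEAT_FRAME'   # almost no real motion: repeat frame
        alpha, beta = alpha_beta(fine_labels)
        if beta and alpha < factor * beta:
            return 'MBS'            # moderate motion: block superposition
        return 'B-MRMP'             # otherwise full motion prediction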
Alternative Motion Prediction Schemes
[00147] In another embodiment, a full search routine of the pixel (spatial) map is introduced prior to the wavelet transformation block, in order to predict and track motion in the spatial domain and thereby exploit the temporal redundancy between consecutive frames in the video sequence, as shown in Figures 9B and 10B. In one embodiment, a 16 x 16 block size is best suited for tracking real-world global and local motion. This includes, but is not limited to, rotational, translational, camera-pan, and zoom motion. Hence, blocks of this size are referred to as standard macroblocks.
[00148] In one embodiment, unidirectional motion prediction (U-MP) is employed to predict motion between consecutive frames using a full search technique. In this embodiment, the frame is divided into blocks with the height and width of the standard macroblock size (16 x 16). To prevent any inconsistencies along the frame boundaries, the frame dimensions are edge extended to be a multiple of 16. To fill the pixels in the edge-extended zones so created, a standard and uniform technique is applied across all frames. The edge-extended zone can be filled with the pixel values along the edge of the actual image, for instance, or may be padded with zeros throughout. A variety of techniques may be utilized depending on the specific configuration.
[00149] Once the frame is divided into blocks of size 16 x 16, the U-MP routine is applied to all such blocks in a raster scan sequence. For each block, a neighborhood zone is defined around the edges of the macroblock, as shown in Figure 13. The depth of the neighborhood zone is chosen to be equal to 15 pixels in every direction. Hence, each macroblock to be processed using U-MP is padded with a 15-pixel neighborhood zone on all sides.
[00150] For macroblocks along the edge of the image map, the neighborhood zone may extend over to the region outside the image map. In such cases, the neighborhood zone for the macroblock uses pixels from the edge-extended zone.

[00151] The U-MP routine may be split into five basic operations. In the first operation, known as selective motion prediction, in one embodiment, a threshold is set to determine which pixels, or sets thereof, need to be compensated in the U-MP process. In one embodiment, each pixel in the reference frame is subtracted from the homologous pixel in the current frame, thereby generating a difference map. Each pixel in the difference map is then compared against a pre-determined threshold. The value of the threshold is a matter of conjecture and rigorous experimentation. If the difference value at the current pixel position is above the threshold, the pixel is marked as active; else it is marked as inactive. After all the pixels in the reference frame are checked, a count of the number of such active pixels in each 16 x 16 macroblock in the reference frame is recorded. If the number of active pixels in the macroblock is above a pre-determined threshold, the macroblock is marked as active; else it is marked as inactive. The value of this threshold, too, is a matter of conjecture and rigorous experimentation.
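A sketch of this selective test follows, assuming the frame has already been edge extended to multiples of 16; both thresholds are left as parameters since the text fixes them only by experimentation:

    import numpy as np

    def mark_active_macroblocks(cur, ref, pixel_thresh, count_thresh, mb=16):
        """Selective motion prediction: flag active 16 x 16 macroblocks.

        A pixel is active when its absolute frame difference exceeds
        pixel_thresh; a macroblock is active when it holds at least
        count_thresh active pixels.  Returns one boolean per macroblock.
        """
        diff = np.abs(cur.astype(np.int32) - ref.astype(np.int32))
        active = diff > pixel_thresh
        h, w = active.shape
        # Block-sum of active pixels per 16 x 16 macroblock.
        counts = active.reshape(h // mb, mb, w // mb, mb).sum(axis=(1, 3))
        return counts >= count_thresh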
[00152] The second operation in the U-MP process is the unidirectional motion prediction (U-MP) operation. In one embodiment, a modification of the traditional half-pel motion prediction algorithm is performed. In this modification, each frame is interpolated by a factor of two, leading to a search area that is four times the original image map. As in the previous embodiment, the previous frame, known as the reference frame, is used as the reference for predicting and tracking motion. The current frame is used as the other basis of comparison. The homologous blocks in these two frames that are compared are called the reference block and the current block, respectively, as shown in Figure 13.
[00153] In another embodiment of the motion prediction routine, the non-integer-pel motion interpolation scheme may be further modified to perform a form of quarter-pel motion prediction, as shown in Figure 14. In this modification, the luma maps of the current and reference frames are interpolated by a factor of four along both cardinal directions, such that the effective search area in the search-and-match routine is increased by a factor of sixteen. In both aforementioned forms of the interpolation scheme, the choice of the interpolation mechanism is a matter of conjecture and rigorous experimentation, and includes, but is not restricted to, bi-linear, quadratic and cubic-spline interpolation schemes. The tradeoff between accurate prediction of the interpolated coefficients and speed of computation is a major deciding factor for the choice of scheme.
[00154] In the second operation, three tests are performed to determine the optimal direction of motion in a particular block of pixels in the current frame. Initially, in the non-displaced motion prediction operation, in one embodiment, each macroblock in the current frame is subtracted pixel-by-pixel from the homologous macroblock in the reference frame. This generates the non-displaced compensated block. In the next operation, an integer search (see Figs. 13 and 14) is performed on every 16 x 16 macroblock of the current frame. In this routine, the pixels of the current macroblock are superimposed over every set of pixels of the same size as the current block in the neighborhood zone around the reference block. The metric employed for comparing these two sets of pixels, as in the previous embodiment, is the L1 (sum of absolute differences, SAD) metric. Starting with the reference block, the SAD is computed for all 16 x 16 blocks in the neighborhood zone of the reference block, and the position of the block with the lowest value of SAD is labeled as a matching block.
[00155] The relative position between the matching block and the reference block is recorded using a unique data structure known as the motion vector for the current reference block. In the next operation, a half-pel search is performed on every 16 x 16 macroblock of the current frame (see Fig. 13). In this mode, the motion vector obtained for a particular macroblock in the integer search mode is doubled, and a refined search is performed. The depth of the refined search area is one pixel across in all directions. This operation helps in detecting motion which is less than or equal to half a pixel in all directions. The resultant motion vector is obtained by summing the scaled motion vector obtained in the integer search and the refined search modes. This and the corresponding SAD value are recorded for future mode selection (see Fig. 13).
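The integer-pel stage of this search can be sketched as below; the half-pel refinement (doubling the vector on a 2x-interpolated map and searching within one pixel) follows the same pattern and is only noted in a comment:

    import numpy as np

    def sad(a, b):
        """L1 matching metric: sum of absolute differences."""
        return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

    def integer_search(cur_block, ref, top_left, depth=15):
        """Integer-pel full search over the neighborhood zone (sketch).

        Scans every candidate within +/- depth pixels of the reference
        block and keeps the lowest-SAD match; the half-pel stage would
        then double this vector on a 2x-interpolated map and refine
        within +/- 1 pixel (not shown).
        """
        by, bx = top_left
        n = cur_block.shape[0]
        h, w = ref.shape
        best = (sad(cur_block, ref[by:by + n, bx:bx + n]), (0, 0))
        for dy in range(-depth, depth + 1):
            for dx in range(-depth, depth + 1):
                y, x = by + dy, bx + dx
                if 0 <= y and 0 <= x and y + n <= h and x + n <= w:
                    cand = (sad(cur_block, ref[y:y + n, x:x + n]), (dy, dx))
                    best = min(best, cand)
        return best  # (SAD, integer motion vector)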
[00156] After recording the motion vectors and the value of the SAD for the corresponding macroblocks in the current frame, each macroblock is split into four blocks of size 8 x 8, and a half-pel search is performed on each of the four blocks (see Fig. 14). The block (of dimension 8 x 8) offset from the current block (being tested for motion prediction) by a distance along both cardinal directions equal to the corresponding components of the motion vector obtained as outlined above is used as a basis for a refined search routine, with a search area of 1 pixel around the block. This is a modification of the refined search technique, with a refined search area of one pixel across, as described above. The set of four resultant motion vectors, obtained by summing the scaled motion vector obtained in the integer search and refined search modes, and their corresponding SAD values are recorded for mode selection later on (see Fig. 14).
[00157] In another embodiment, a more refined scheme is implemented for smaller moving objects in the global motion field. In this scheme, each 8 x 8 block within the current macroblock is further split into four blocks of 4 x 4, and the technique of scaling and refined search outlined above may be repeated for all possible search areas of dimensions 4 x 4, 4 x 8 and 8 x 4 pixels. The SAD values obtained from the refined motion estimation routines outlined in this paragraph are also tabulated for future mode selection.
[00158] In the third operation in U-MP, according to one embodiment, different weights are applied to the SAD values obtained from the three different modes described above. Since the SAD (and the corresponding motion vector) from the non-displaced motion prediction operation contributes the least to any addition to the cumulative transmission rate, the motion vector corresponding to this mode is given the lowest weight (of value zero), and hence the highest priority; the mode of the block is labeled 0MV. Similarly, increasing weights (and lower priorities) are conferred upon the SAD values and motion vectors obtained from the 16 x 16 integer/half-pel search and 8 x 8 half-pel search modes, respectively. These blocks are labeled 1MV (see Fig. 13) and 4MV (see Fig. 14), respectively. The weights are imposed by comparing the SAD values against some predetermined threshold. The value of the threshold, in each of the three cases outlined above, is a matter of conjecture and rigorous experimentation. This is done to ensure that a mode with a higher rate is chosen for a particular macroblock only when the advantage so obtained, in terms of higher fidelity (and lower SAD), is fairly substantial.

[00159] In the fourth operation of U-MP, according to one embodiment, overlapped block matching/compensation (OBMC) is performed on each 16 x 16 macroblock of the reference frame, as shown in Figure 15. In this operation, a displaced frame difference (DFD) algorithm is implemented, using the techniques described above. However, in one embodiment, the choice of the matching block is a function of the motion vectors of the reference block currently being tested, as well as its abutting neighbors, as shown in Figure 15.
[00160] Consider the instance when the reference block currently being tested is of mode 1MV, and the blocks directly above and to its left and abutting it are of mode 1MV each. In such an instance, according to one embodiment, the motion vectors from all three blocks are translated to any one corner of the reference block being tested (with no preference being given to any particular corner, though this choice should be consistent throughout the compensation procedure for that block), and the corresponding matching blocks are determined. The dimensions of all three matching blocks should be equal to the dimensions of the reference block (see Fig. 15). In the current embodiment, homologous pixels from all the matching blocks, so determined, are summed with different weights, and then differenced with the homologous pixel in the current block (in the current frame). The difference values are overwritten on the corresponding pixel positions in the current block. This difference block is labeled the compensated block.
[00161] In the instance when the reference block currently being tested is of mode 4MV, according to one embodiment, the matching block is of size 8 x 8. In this instance, each of the four 8 x 8 blocks carved out of the original 16 x 16 reference block is used to perform OBMC. If the block directly abutting any one of the 8 x 8 blocks is of mode 1MV, its single motion vector is used in the OBMC process. If the abutting block is of mode 4MV, only that 8 x 8 block of such an abutting block which shares an entire line of pixels as the border with the 8 x 8 block in question (in the reference block being tested) is used (see Fig. 15).

[00162] The weighting function applied to the pixels, or sets thereof, in the reference block currently being tested, as well as the function applied to the pixels, or sets thereof, in the blocks abutting the reference block, can be determined using a process of rigorous experimentation.
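A sketch of the OBMC blending step follows; the per-pixel weight maps are assumed to sum to one at every position, since the text leaves the weighting function to experimentation:

    import numpy as np

    def obmc_residual(cur_block, matching_blocks, weight_maps):
        """Overlapped block compensation for one tested block (sketch).

        matching_blocks: equally sized blocks found by translating the
        motion vectors of the tested block and its abutting neighbors
        to a common corner.  weight_maps: per-block pixel weighting
        arrays (experimentally determined, per the text); they are
        assumed to sum to one at every pixel position.
        """
        prediction = np.zeros(cur_block.shape, dtype=np.float64)
        for block, weights in zip(matching_blocks, weight_maps):
            prediction += weights * block.astype(np.float64)
        # The compensated (residual) block overwrites the current block.
        return cur_block.astype(np.float64) - prediction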
[00163] As a final operation in U-MP, a residual frame is generated as a direct outcome of the OBMC operation described above. Using the DFD routine, each block (8 x 8 or 16 x 16) is differenced, and the pixel values are overwritten onto the corresponding pixel positions in the current block, thereby generating the residual block. Once all the blocks in the current frame have been tested, the resultant frame is labeled the residual frame.
[00164] For every macroblock (of size 16 x 16) which has a motion vector of size zero, the SAD may be compared against a predetermined threshold. If the SAD is below the predetermined threshold, the particular macroblock is marked as a non-compensated macroblock (NCMB). If four such NCMBs are found adjacent to each other in a 2 x 2 grid array arrangement, this set of four blocks is jointly labeled a non-coded block (NCB).
[00165] The decoder, which decodes the encoded bit stream, has the reverse signal flow of the encoder. The relative order of the various signal processing operations is reversed (for example, the wavelet reconstruction block, or I-DWT, comes after the source/entropy decoder, which performs inverse arithmetic coding). Within each block, the flow of input data is reversed relative to the encoder, and the actual logical and mathematical operations are also reversed.

[00166] The decoder, however, lacks the complexity of the encoder, in that the motion compensation (MC+) routine in the decoder is a relatively simple addition process, and does not involve the computationally intensive search-and-difference routine of the motion estimation/compensation (ME/MC) operation. The motion vector information for a particular block of pixels (of any arbitrary sub-band at any arbitrary level of resolution) is used to mark the current block under consideration, and the residual frame is updated (or 'compensated') by simply adding the values of the homologous pixels from the residual block to the current block, as shown in Figures 9B and 10B.
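A sketch of this decoder-side addition follows, assuming the matching block has already been located from the decoded motion vector and that signed integer arrays are used so the addition cannot overflow:

    def decoder_mc_plus(residual_frame, ref_frame, block_xy, size, mv):
        """Decoder-side motion compensation (MC+): a plain addition.

        The decoded motion vector locates the matching block in the
        already reconstructed reference frame, and its pixels are simply
        added back onto the residual block; no search is required, which
        keeps the decoder far cheaper than the encoder.
        """
        by, bx = block_xy
        my, mx = mv
        residual_frame[by:by + size, bx:bx + size] += \
            ref_frame[by + my:by + my + size, bx + mx:bx + mx + size]
        return residual_frame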
[00167] Thus, methods and apparatuses for compressing digital image data with motion prediction are described herein. Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[00168] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[00169] Embodiments of the present invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[00170] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method operations. The required structure for a variety of these systems will appear from the description. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
[00171] A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory ("ROM"); random access memory ("RAM"); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.

[00172] In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will be evident that various modifications may be made thereto without departing from the broader spirit and scope of the invention as set forth in the following claims. The specification and drawings are, accordingly, to be regarded in an illustrative sense rather than a restrictive sense.

Claims

What is claimed is:
1. A computer implemented method, comprising: for each two consecutive frames of an image sequence, performing a motion prediction between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component and applying the motion prediction information of the luminance map to the chrominance maps; and in response to the motion prediction, encoding wavelet coefficients of each frame and the motion prediction information into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm.
2. The method of claim 1, wherein performing the motion prediction between two frames comprises: tagging movement of homologous pixels in one or more sub-bands of each frame to determine one or more motion vectors of the homologous pixels; and encoding the determined one or more motion vectors in a least absolute difference sense.
3. The method of claim 1, wherein the motion prediction is performed on coarsest sub-bands of the luminance map to generate a motion vector for each of the coarsest sub-bands, wherein the motion vectors of the coarsest sub-bands are used as parent motion vectors to determine motion vectors of finer sub-bands of the frame.
4. The method of claim 3, further comprising: estimating spatial shifting of pixels of child sub-bands using the motion vector of the corresponding parent sub-band to determine a search area for the child sub-bands; and performing motion prediction for the child sub-bands within the determined search area to determine the motion vectors of the child sub-bands.
5. The method of claim 4, further comprising: defining a reference block from one of a corresponding parent block, first frame of the sequence, and an I-frame; defining the search area surrounding the reference block having a neighborhood zone determined based on a decomposition level; and performing a search and match operation within the defined search area to obtain a refined motion vector to identify a matching block.
6. The method of claim 5, wherein the search and match operation is performed based on a current decomposition level and a total number of decomposition levels.
7. The method of claim 5, further comprising performing a motion compensation using the refined motion vector by subtracting the matching block from a current block to generate a compensated block, wherein at least a portion of the compensated block is encoded in the bit stream.
8. The method of claim 1, wherein encoding wavelet coefficients is performed iteratively on a sub-band of a frame obtained from a parent sub-band of a previous iteration into a bit stream based on a target transmission rate, wherein the wavelet coefficients that do not satisfy the predetermined threshold are ignored in the respective iteration.
9. The method of claim 8, further comprising transmitting at least a portion of the bit stream to a recipient over a network according to the target transmission rate, wherein the transmitted bit stream when decoded by the recipient, is sufficient to represent an image of the frame.
10. The method of claim 9, wherein iterative encoding is performed according to an order representing significance of the wavelet coefficients.
11. The method of claim 10, wherein the order is a zigzag order across the frame such that significant coefficients are encoded prior to less significant coefficients in the bit stream, wherein when a portion of the bit stream is transmitted due to the target transmission rate, at least a portion of bits representing the significant coefficients are transmitted while at least a portion of bits representing the less significant coefficients are ignored.
12. A machine-readable medium having executable code to cause a machine to perform a method, the method comprising: for each two consecutive frames of an image sequence, performing a motion prediction between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component and applying the motion prediction information of the luminance map to the chrominance maps; and in response to the motion prediction, encoding wavelet coefficients of each frame and the motion prediction information into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm.
13. The machine-readable medium of claim 12, wherein performing motion prediction between two frames comprises: tagging movement of homologous pixels in one or more sub-bands of each frame to determine one or more motion vectors of the homologous pixels; and encoding the determined one or more motion vectors in a least absolute difference sense.
14. The machine-readable medium of claim 12, wherein the motion prediction is performed on coarsest sub-bands of the luminance map to generate a motion vector for each of the coarsest sub-bands, wherein the motion vectors of the coarsest sub-bands are used as parent motion vectors to determine motion vectors of finer sub-bands of the frame.
15. The machine-readable medium of claim 14, wherein the method further comprises: estimating spatial shifting of pixels of child sub-bands using the motion vector of the corresponding parent sub-band to determine a search area for the child sub-bands; and performing motion prediction for the child sub-bands within the determined search area to determine the motion vectors of the child sub-bands.
16. The machine-readable medium of claim 15, wherein the method further comprises: defining a reference block from one of a corresponding parent block, first frame of the sequence, and an I-frame; defining the search area surrounding the reference block having a neighborhood zone determined based on a decomposition level; and performing a search and match operation within the defined search area to obtain a refined motion vector to identify a matching block.
17. The machine-readable medium of claim 16, wherein the search and match operation is performed based on a current decomposition level and a total number of decomposition levels.
18. The machine-readable medium of claim 16, wherein the method further comprises performing a motion compensation using the refined motion vector by subtracting the matching block from a current block to generate a compensated block, wherein at least a portion of the compensated block is encoded in the bit stream.
19. The machine-readable medium of claim 12, wherein encoding wavelet coefficients is performed iteratively on a sub-band of a frame obtained from a parent sub-band of a previous iteration into a bit stream based on a target transmission rate, wherein the wavelet coefficients that do not satisfy the predetermined threshold are ignored in the respective iteration.
20. A data processing system, comprising: a capturing device to capture one or more frames of an image sequence; and an encoder coupled to the capturing device, the encoder configured to, for each two consecutive frames of the image sequence, perform a motion prediction between the consecutive frames by tracking motion on a luminance map of the frames to generate motion prediction information for the luminance component and applying the motion prediction information of the luminance map to the chrominance maps, and, in response to the motion prediction, encode wavelet coefficients of each frame and the motion prediction information into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold according to a predetermined algorithm.
21. A computer implemented method, comprising: receiving at a mobile device at least a portion of a bit stream having at least one frame, wherein the mobile device includes one of Pocket PC based PDAs and smart phones, Palm based PDAs and smart phones, Symbian based phones, PDAs, and phones supporting at least one of J2ME and BREW; and iteratively decoding the bits received to reconstruct the image of the frame.
22. The method of claim 21, wherein iteratively decoding comprises generating significance, sign, and bit plane information associated with the encoded coefficient based on a location of encoded coefficient within a respective sub-band.
23. The method of claim 22, further comprising maintaining separate contexts to represent the significance, sign, and bit plane information respectively for different decomposition levels, wherein content of the contexts are updated from the received bit stream.
24. The method of claim 22, wherein iterative decoding is performed according to an order representing significance of the wavelet coefficients.
25. The method of claim 22, further comprising decrementing a predetermined threshold of a current iteration by a predetermined offset to generate a new threshold for a next iteration.
26. The method of claim 25, wherein the predetermined offset includes up to a half of the predetermined threshold of the current iteration.
27. The method of claim 25, wherein a decoding area for the next iteration is larger than the decoding area of the current iteration by a factor determined based on the predetermined offset.
28. The method of claim 22, wherein the amount of data to be decoded from the bit stream is determined based on the required quality of the reconstructed frame.
29. The method of claim 24, wherein the order is a zigzag order across the frame such that significant coefficients are decoded prior to less significant coefficients in the bit stream, wherein when a portion of the bit stream is received, at least a portion of bits representing the significant coefficients are decoded while at least a portion of bits representing the less significant coefficients are ignored, depending on the required quality of the reconstructed frame.
30. The method of claim 21, wherein an inverse wavelet transform is performed on each reconstructed coefficient to generate a plurality of pixels representing an image of the frame.
31. The method of claim 21, wherein for each two consecutive decoded frames of an image sequence, a motion compensation is performed between the consecutive frames by using motion vectors present in the bit stream, for luminance as well as chrominance maps.
32. The method of claim 21, wherein the motion vectors for the finer sub-bands are constructed from the motion vector of the coarsest sub-band by adding the incremental difference values present in the bit stream.
33. A computer-implemented method, comprising: performing a wavelet transform on each pixel of a frame to generate a plurality of wavelet coefficients representing each pixel in a frequency domain; and iteratively encoding wavelet coefficients of a sub-band of the frame obtained from a parent sub-band of a previous iteration into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold based on a predetermined algorithm while the wavelet coefficients that do not satisfy the predetermined threshold are ignored in the respective iteration.
34. The method of claim 33, further comprising transmitting at least a portion of the bit stream to a recipient over a network according to the target transmission rate, wherein the transmitted bit stream, when decoded by the recipient, is sufficient to represent an image of the frame.
35. The method of claim 34, wherein iterative encoding is performed according to an order representing significance of the wavelet coefficients.
36. The method of claim 35, wherein the order is a zigzag order across the frame such that significant coefficients are encoded prior to less significant coefficients in the bit stream, wherein when a portion of the bit stream is transmitted due to the target transmission rate, at least a portion of bits representing the significant coefficients are transmitted while at least a portion of bits representing the less significant coefficients are ignored.
37. The method of claim 35, wherein an amount of data in the bit stream is determined based on the target transmission rate, which is determined based on a communications bandwidth associated with a recipient over the network.
38. The method of claim 37, wherein the bit stream includes significance, sign, and bit plane information associated with the encoded coefficient based on a location of encoded coefficient within a respective sub-band.
39. The method of claim 38, further comprising maintaining separate contexts to represent the significance, sign, and bit plane information respectively for different decomposition levels, wherein content of the contexts are compressed into the bit stream for transmission.
40. The method of claim 35, further comprising decrementing a predetermined threshold of a current iteration by a predetermined offset to generate a new threshold for a next iteration.
41. The method of claim 40, wherein the predetermined offset includes up to a half of the predetermined threshold of the current iteration.
42. The method of claim 40, wherein an encoding area for the next iteration is larger than an encoding area of the current iteration by a factor determined based on the predetermined offset.
43. A machine-readable medium having executable code to cause a machine to perform a method, the method comprising: performing a wavelet transform on each pixel of a frame to generate a plurality of wavelet coefficients representing each pixel in a frequency domain; and iteratively encoding wavelet coefficients of a sub-band of the frame obtained from a parent sub-band of a previous iteration into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold based on a predetermined algorithm while the wavelet coefficients that do not satisfy the predetermined threshold are ignored in the respective iteration.
44. The machine-readable medium of claim 43, wherein the method further comprises transmitting at least a portion of the bit stream to a recipient over a network according to the target transmission rate, wherein the transmitted bit stream, when decoded by the recipient, is sufficient to represent an image of the frame.
45. The machine-readable medium of claim 44, wherein iterative encoding is performed according to an order representing significance of the wavelet coefficients.
46. The machine-readable medium of claim 45, wherein the order is a zigzag order across the frame such that significant coefficients are encoded prior to less significant coefficients in the bit stream, wherein when a portion of the bit stream is transmitted due to the target transmission rate, at least a portion of bits representing the significant coefficients are transmitted while at least a portion of bits representing the less significant coefficients are ignored.
47. The machine-readable medium of claim 45, wherein an amount of data in the bit stream is determined based on the target transmission rate, which is determined based on a communications bandwidth associated with a recipient over the network.
48. The machine-readable medium of claim 47, wherein the bit stream includes significance, sign, and bit plane information associated with the encoded coefficient based on a location of encoded coefficient within a respective sub-band.
49. The machine-readable medium of claim 45, wherein the method further comprises decrementing a predetermined threshold of a current iteration by a predetermined offset to generate a new threshold for a next iteration.
50. The machine-readable medium of claim 49, wherein the predetermined offset includes up to a half of the predetermined threshold of the current iteration.
51. The machine-readable medium of claim 49, wherein an encoding area for the next iteration is larger than an encoding area of the current iteration by a factor determined based on the predetermined offset.
52. A data processing system, comprising: a capturing device to capture one or more frames of an image sequence; and an encoder coupled to the capturing device, for each frame, the encoder configured to perform a wavelet transform on each pixel of a frame to generate a plurality of wavelet coefficients representing each pixel in a frequency domain, and iteratively encode wavelet coefficients of a sub-band of the frame obtained from a parent sub-band of a previous iteration into a bit stream based on a target transmission rate, wherein the encoded wavelet coefficients satisfy a predetermined threshold based on a predetermined algorithm while the wavelet coefficients that do not satisfy the predetermined threshold are ignored in the respective iteration.
53. A computer implemented method, comprising: receiving at a mobile device at least a portion of a bit stream having at least one frame, wherein the mobile device includes one of Pocket PC based PDAs and smart phones, Palm based PDAs and smart phones, Symbian based phones, PDAs, and phones supporting at least one of J2ME and BREW; and iteratively decoding the at least a portion of a bit stream to reconstruct an image of the at least one frame.
54. The method of claim 53, wherein iteratively decoding comprises generating significance, sign, and bit plane information associated with an encoded coefficient based on a location of encoded coefficient within a respective sub-band.
55. The method of claim 54, further comprising maintaining separate contexts to represent the significance, sign, and bit plane information respectively for different decomposition levels, wherein content of the contexts are updated from the received bit stream.
56. The method of claim 54, wherein iterative decoding is performed according to an order representing significance of the wavelet coefficients.
57. The method of claim 54, further comprising decrementing a predetermined threshold of a current iteration by a predetermined offset to generate a new threshold for a next iteration.
58. The method of claim 57, wherein the predetermined offset includes up to a half of the predetermined threshold of the current iteration.
59. The method of claim 57, wherein a decoding area for the next iteration is larger than the decoding area of the current iteration by a factor determined based on the predetermined offset.
60. The method of claim 54, wherein the amount of data to be decoded from the bit stream is determined based on the required quality of the reconstructed frame.
61. The method of claim 56, wherein the order is a zigzag order across the frame such that significant coefficients are decoded prior to less significant coefficients in the bit stream, wherein when a portion of the bit stream is received, at least a first portion of bits representing the significant coefficients are decoded while at least a second portion of bits representing the less significant coefficients are ignored, depending on the required quality of the reconstructed frame.
62. The method of claim 53, wherein an inverse wavelet transform is performed on each reconstructed coefficient to generate a plurality of pixels representing an image of the frame.
PCT/US2005/008391 2004-03-10 2005-03-10 Methods and apparatuses for compressing digital image data with motion prediction WO2005086981A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2007503104A JP2007529184A (en) 2004-03-10 2005-03-10 Method and apparatus for compressing digital image data using motion estimation
EP05725507A EP1730846A4 (en) 2004-03-10 2005-03-10 Methods and apparatuses for compressing digital image data with motion prediction

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US55235604P 2004-03-10 2004-03-10
US55215304P 2004-03-10 2004-03-10
US60/552,356 2004-03-10
US60/552,153 2004-03-10
US11/076,746 US20050207495A1 (en) 2004-03-10 2005-03-09 Methods and apparatuses for compressing digital image data with motion prediction
US11/077,106 US7522774B2 (en) 2004-03-10 2005-03-09 Methods and apparatuses for compressing digital image data
US11/076,746 2005-03-09
US11/077,106 2005-03-09

Publications (2)

Publication Number Publication Date
WO2005086981A2 true WO2005086981A2 (en) 2005-09-22
WO2005086981A3 WO2005086981A3 (en) 2006-05-26

Family

ID=34976280

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/008391 WO2005086981A2 (en) 2004-03-10 2005-03-10 Methods and apparatuses for compressing digital image data with motion prediction

Country Status (4)

Country Link
EP (1) EP1730846A4 (en)
JP (1) JP2007529184A (en)
KR (1) KR20070026451A (en)
WO (1) WO2005086981A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011509538A (en) * 2007-09-26 2011-03-24 クゥアルコム・インコーポレイテッド Efficient conversion technology for video coding
CN103327319A (en) * 2012-03-21 2013-09-25 Vixs系统公司 Method and device to identify motion vector candidates using a scaled motion search
CN106230611A (en) * 2015-06-02 2016-12-14 杜比实验室特许公司 There is intelligence retransmit and system for monitoring quality in the service of interpolation
CN113924775A (en) * 2019-05-31 2022-01-11 北京字节跳动网络技术有限公司 Constrained upsampling in matrix-based intra prediction
US11805275B2 (en) 2019-06-05 2023-10-31 Beijing Bytedance Network Technology Co., Ltd Context determination for matrix-based intra prediction
CN117041597A (en) * 2023-10-09 2023-11-10 中信建投证券股份有限公司 Video encoding and decoding methods and devices, electronic equipment and storage medium
US11831877B2 (en) 2019-04-12 2023-11-28 Beijing Bytedance Network Technology Co., Ltd Calculation in matrix-based intra prediction

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7583844B2 (en) * 2005-03-11 2009-09-01 Nokia Corporation Method, device, and system for processing of still images in the compressed domain
CN1964219B * 2005-11-11 2016-01-20 Shanghai Bell Co., Ltd. Method and apparatus for implementing relaying
KR100950417B1 * 2008-01-16 2010-03-29 SK Telecom Co., Ltd. Method for Modeling Context of Wavelet Transform based on Directional Filtering and Apparatus for Coding Wavelet, and Recording Medium therefor
KR101423466B1 2008-05-06 2014-08-18 Samsung Electronics Co., Ltd. Method and apparatus for transforming bit-plane image, and method and apparatus for inverse-transforming bit-plane image
KR101634228B1 * 2009-03-17 2016-06-28 Samsung Electronics Co., Ltd. Digital image processing apparatus, method for tracking, recording medium storing program to implement the method, and digital image processing apparatus adopting the method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5321776A (en) * 1992-02-26 1994-06-14 General Electric Company Data compression system including successive approximation quantizer
US5477272A (en) * 1993-07-22 1995-12-19 Gte Laboratories Incorporated Variable-block size multi-resolution motion estimation scheme for pyramid coding
US5495292A (en) * 1993-09-03 1996-02-27 Gte Laboratories Incorporated Inter-frame wavelet transform coder for color video compression
WO1997017797A2 (en) * 1995-10-25 1997-05-15 Sarnoff Corporation Apparatus and method for quadtree based variable block size motion estimation
DE69836696T2 * 1997-05-30 2007-10-31 Mediatek Inc. Method and device for implementing a hierarchical motion estimation using a nonlinear pyramid

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP1730846A4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8654833B2 (en) 2007-09-26 2014-02-18 Qualcomm Incorporated Efficient transformation techniques for video coding
JP2011509538A * 2007-09-26 2011-03-24 Qualcomm Incorporated Efficient transformation techniques for video coding
CN103327319A * 2012-03-21 2013-09-25 ViXS Systems Inc. Method and device to identify motion vector candidates using a scaled motion search
EP2645721A3 (en) * 2012-03-21 2016-11-30 ViXS Systems Inc. Method and device to identify motion vector candidates using a scaled motion search
CN106230611A * 2015-06-02 2016-12-14 Dolby Laboratories Licensing Corporation In-service quality monitoring system with intelligent retransmission and interpolation
CN106230611B * 2015-06-02 2021-07-30 Dolby Laboratories Licensing Corporation In-service quality monitoring system with intelligent retransmission and interpolation
US11831877B2 (en) 2019-04-12 2023-11-28 Beijing Bytedance Network Technology Co., Ltd Calculation in matrix-based intra prediction
CN113924775A * 2019-05-31 2022-01-11 Beijing Bytedance Network Technology Co., Ltd Constrained upsampling in matrix-based intra prediction
CN113924775B * 2019-05-31 2023-11-14 Beijing Bytedance Network Technology Co., Ltd Restricted upsampling in matrix-based intra prediction
US11943444B2 (en) 2019-05-31 2024-03-26 Beijing Bytedance Network Technology Co., Ltd. Restricted upsampling process in matrix-based intra prediction
US11805275B2 (en) 2019-06-05 2023-10-31 Beijing Bytedance Network Technology Co., Ltd Context determination for matrix-based intra prediction
CN117041597A * 2023-10-09 2023-11-10 China Securities Co., Ltd. Video encoding and decoding methods and devices, electronic equipment and storage medium
CN117041597B * 2023-10-09 2024-01-19 China Securities Co., Ltd. Video encoding and decoding methods and devices, electronic equipment and storage medium

Also Published As

Publication number Publication date
EP1730846A2 (en) 2006-12-13
JP2007529184A (en) 2007-10-18
KR20070026451A (en) 2007-03-08
EP1730846A4 (en) 2010-02-24
WO2005086981A3 (en) 2006-05-26

Similar Documents

Publication Publication Date Title
US7522774B2 (en) Methods and apparatuses for compressing digital image data
US20050207495A1 (en) Methods and apparatuses for compressing digital image data with motion prediction
WO2005086981A2 (en) Methods and apparatuses for compressing digital image data with motion prediction
US10375409B2 (en) Method and apparatus for image encoding with intra prediction mode
CN108848376B (en) Video encoding method, video decoding method, video encoding device, video decoding device and computer equipment
JP5606591B2 (en) Video compression method
US8811484B2 (en) Video encoding by filter selection
EP3570545B1 (en) Low-complexity intra prediction for video coding
US11284107B2 (en) Co-located reference frame interpolation using optical flow estimation
US8761252B2 (en) Method and apparatus for scalably encoding and decoding video signal
US20080247467A1 (en) Adaptive interpolation filters for video coding
WO2014120374A1 (en) Content adaptive predictive and functionally predictive pictures with modified references for next generation video coding
EP1466477A2 (en) Coding dynamic filters
CN113923455B (en) Bidirectional inter-frame prediction method and device
US11876974B2 (en) Block-based optical flow estimation for motion compensated prediction in video coding
US20100086048A1 (en) System and Method for Video Image Processing
JP2011519220A (en) Encoding and decoding method, coder and decoder
Hua et al. Inter frame video compression with large dictionaries of tilings: algorithms for tiling selection and entropy coding
US8218639B2 (en) Method for pixel prediction with low complexity
Igarta A study of MPEG-2 and H.264 video coding
JP2007235299A (en) Image coding method
Yang et al. Video coding: Death is not near
Wang Fully scalable video coding using redundant-wavelet multihypothesis and motion-compensated temporal filtering
CN114830645A (en) Image encoding method and image decoding method
CN114521325A (en) Image encoding method and image decoding method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

WWE Wipo information: entry into national phase

Ref document number: 2007503104

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

WWE Wipo information: entry into national phase

Ref document number: 2005725507

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020067021047

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWP Wipo information: published in national office

Ref document number: 2005725507

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020067021047

Country of ref document: KR