CA2277373A1 - Multi-dimensional data compression - Google Patents
- Publication number
- CA2277373A1
- Authority
- CA
- Canada
- Prior art keywords
- output
- produce
- compressed
- data
- transform
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/80—Responding to QoS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
- H04L65/612—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/765—Media network packet handling intermediate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/04—Protocols for data compression, e.g. ROHC
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/112—Selection of coding mode or of prediction mode according to a given display mode, e.g. for interlaced or progressive display mode
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/142—Detection of scene cut or scene change
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/36—Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/62—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding by frequency transforming in three dimensions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
- H04N19/64—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/649—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding the transform being applied to non rectangular image segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/94—Vector quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/99—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals involving fractal coding
Abstract
A method of compressing a data signal, the method comprising the steps of selecting a sequence of image frames, the sequence being part of a video stream, applying a three dimensional transform to the selected sequence to produce a first transformed output, and encoding the transformed output to produce a compressed stream output.
Description
Multi-dimensional Data Compression

This invention relates to the field of data compression and, more particularly, to a method and system for efficient compression of digital video data.
BACKGROUND OF THE INVENTION
One of the most significant trends affecting the efficiency of the Internet today is the movement of full-motion video and audio data across the Internet. As web sites continue to increase their multimedia content through the integration of audio, video and data, the ability of the web to effectively deliver this media to Internet end users will yield a congestion problem due to the architecture of the web. The significant increase in multimedia incorporated in web pages is due in part to developments in hardware and software that have allowed web page designers to efficiently create, design, access and utilize multimedia applications. These developments in multimedia content place significant demands on the network access functions of the Internet.
There is ongoing development in improving network access functions, such as providing high-speed links not only throughout the Internet backbone but down to the local access of the user. Another way to reduce network traffic is to decrease the size of the data transferred across the network. This can be achieved in many ways; one such technique is the use of data compression and manipulation.
Traditionally, image compression methods may be classified as those which reproduce the original data exactly, that is, "lossless compression", and those which trade a tolerable divergence from the original data for greater compression, that is, "lossy compression".
Typically, lossless methods have the problem that they are unable to achieve a compression of much more than 70%. Therefore, where higher compression ratios are needed, lossy techniques have been developed. In general, the amount by which the original media source is reduced is referred to as the compression ratio. Compression technologies have evolved over time to adapt to the various user requirements. Historically, compression technology focused on telephony, where sound-wave compression algorithms were developed and optimized. These algorithms all implemented a one-dimensional (1D) transformation, which increased the 1D entropy of the data in the transformed domain to allow for efficient quantization and 1D data coding.
Compression technologies then focused on two-dimensional (2D) data such as images or pictures. At first, the 1D audio algorithms were applied to the line data of each image to build up a compressed image. Research then progressed to the point today where the 1D algorithms have been extended to implement a two-dimensional (2D) transformation, which increases the 2D entropy to allow for efficient quantization and 2D data coding.
Currently, state-of-the-art technology requires compression of moving pictures, or video. In this area, research focuses on applying 2D image-coding algorithms to the multitude of images (frames) which comprise a video, and on applying motion-compensation techniques to take advantage of the correlation between frame data. For example, United States Patent No. RE 36015, re-issued December 29, 1998, describes a video compression system which is based on the image data compression system developed by the Motion Picture Experts Group (MPEG), which uses various groups of field configurations to reduce the number of binary bits used to represent a frame composed of odd and even fields of video information.
In general, MPEG systems integrate a number of well-known data compression techniques into a single system. These include motion-compensated predictive coding, the discrete cosine transformation (DCT), adaptive quantization and variable-length coding (VLC). The motion-compensated predictive coding scheme processes the video data in groups of frames in order to achieve relatively high levels of compression without allowing the performance of the system to be degraded by excessive error propagation. In these group-of-frames processing schemes, image frames are classified into one of three types: the intraframe (I-Frame), the predicted frame (P-Frame) and the bidirectional frame (B-Frame). A 2D DCT is applied to small regions, such as blocks of 8 x 8 pixels, to encode each of the I-Frames. The resulting data stream is quantized and encoded using a variable-length code, such as an amplitude/run-length Huffman code, to produce the compressed output signal. As may be seen, this quantization technique still focuses on compressing single frames or images, which may not be the most effective means of compression for current multimedia requirements. Also, for low bit-rate applications, MPEG suffers from 8 x 8 blocking artifacts known as tiling. Furthermore, these second-generation compression approaches as described above have reduced the data requirements for video media by as much as 100:1. Typically, these technologies are focused on the following approaches: wavelet algorithms and vector quantization.
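By way of a non-limiting illustration of the I-Frame coding step described above, the following sketch applies a 2D DCT to a single 8 x 8 block. This is not the patent's method; it shows only the standard DCT building block (function names are illustrative), and it demonstrates the energy compaction on which the quantization step relies: a flat block collapses into a single DC coefficient.

```python
import math

N = 8  # MPEG-style DCT block size

def alpha(k: int) -> float:
    """Orthonormal DCT-II scale factor."""
    return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

def dct2(block):
    """2D DCT-II of an N x N block (list of lists of pixel values)."""
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat (constant) block: all energy lands in the DC coefficient,
# so quantization can discard nearly every other coefficient.
flat = [[100.0] * N for _ in range(N)]
coeffs = dct2(flat)
```

In a full codec the coefficient matrix would then be quantized and variable-length coded; here the point is only that correlated pixel data concentrates into few coefficients.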
The wavelet algorithms are implemented with efficient significance-map coding, such as EZW, and line detection with gradient vectors, depending on the application's final reconstructed resolution. The wavelet algorithms operate on the entire image and have efficient implementations due to finite impulse response (FIR) filter realizations. All wavelet algorithms decompose an image into coarser, smooth approximations with low-pass digital filtering (convolution) on the image. In addition, the wavelet algorithms generate detailed approximations (error signals) with high-pass digital filtering or convolution on the image. This decomposition process can be continued as far down the pyramid as a designer requires, where each step in the pyramid has a sample-rate reduction of two. This technique is also known as spatial sample-rate decimation or down-sampling of the image, where the resolution is one half in the next sub-band of the pyramid, as shown schematically in figures 1 and 2.
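The pyramid decomposition described above can be sketched with the simplest wavelet, the Haar filter pair (an assumption for illustration; the patent does not fix a particular filter). Each level applies a low-pass and a high-pass filter and downsamples by two, exactly the sample-rate reduction the text describes:

```python
import math

def haar_step(signal):
    """One level of Haar analysis: a low-pass (smooth approximation) and a
    high-pass (detail/error) output, each downsampled by two."""
    s = 1.0 / math.sqrt(2.0)
    lo = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    hi = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return lo, hi

def haar_pyramid(signal, levels):
    """Iterate haar_step on the low-pass branch, building the pyramid:
    resolution halves at each sub-band level."""
    details = []
    approx = signal
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

# A piecewise-constant line of pixels: all detail bands come out zero,
# which is why smooth image regions compress well under wavelets.
approx, details = haar_pyramid([4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 8.0, 8.0], 2)
```

A 2D (or 3D) decomposition applies the same step along each axis in turn; significance-map coders such as EZW then exploit the mostly-zero detail bands.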
In vector quantization (VQ), algorithms are used with efficient codebooks. The VQ algorithm codebooks are based on macroblocks (8 x 8 or 16 x 16) to compress image data. These algorithms also have efficient implementations. However, they suffer from blocking artifacts (tiling) at low bit rates (high compression ratios). The codebooks have a few codes to represent a multitude of bit patterns, where fewer bits are allocated to the bit patterns in a macroblock with the highest probability. The VQ technique is shown schematically in figure 3.
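A minimal sketch of the codebook idea follows (toy data and names; real VQ systems train much larger codebooks, e.g. with the LBG algorithm, which is not shown here). Each pixel block is replaced by the index of its nearest codeword, so the decoder needs only the shared codebook and the index stream:

```python
def quantize_vector(vec, codebook):
    """Return the index of the nearest codeword (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist(vec, codebook[i]))

def vq_encode(blocks, codebook):
    """Replace each pixel block by a codebook index."""
    return [quantize_vector(b, codebook) for b in blocks]

def vq_decode(indices, codebook):
    """Reconstruction: look each index back up; the divergence from the
    original blocks is the (lossy) quantization error."""
    return [codebook[i] for i in indices]

# Toy two-entry codebook over 4-pixel blocks: a 'dark' and a 'bright' pattern.
codebook = [(10, 10, 10, 10), (200, 200, 200, 200)]
indices = vq_encode([(12, 9, 11, 10), (198, 205, 199, 201)], codebook)
```

With two codewords each block costs one bit, which is the source of both the high compression ratio and the tiling artifacts noted above.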
As discussed earlier, these current techniques are limited when applied to third-generation compression requirements, that is, compression ratios approaching 1000:1. Wavelet and vector quantization techniques as discussed above still focus on compressing single frames or images, which may not be the most effective approach for third-generation compression requirements.
SUMMARY OF THE INVENTION
In accordance with this invention there is provided a method of compressing a data signal, the method comprising the steps of:
(a) selecting a sequence of image frames, the sequence comprising part of a video stream;
(b) applying a three dimensional transform to the selected sequence to produce a first transformed output; and
(c) encoding the transformed output to produce a compressed stream output.
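The claim leaves the transform and the encoder unspecified, so the following is only a hedged sketch of steps (a)-(c), assuming a naive 3D DFT as the transform and a sparse list of significant coefficients as the "encoded" output (all function names are illustrative, not the patent's):

```python
import cmath

def dft3(volume):
    """Naive 3D DFT of an M x N x O volume (nested lists), matching
    F(u,v,w) = (1/MNO) * sum f(x,y,z) exp(-j*2*pi*(ux/M + vy/N + wz/O))."""
    M, N, O = len(volume), len(volume[0]), len(volume[0][0])
    F = [[[0j] * O for _ in range(N)] for _ in range(M)]
    for u in range(M):
        for v in range(N):
            for w in range(O):
                s = 0j
                for x in range(M):
                    for y in range(N):
                        for z in range(O):
                            s += volume[x][y][z] * cmath.exp(
                                -2j * cmath.pi * (u * x / M + v * y / N + w * z / O))
                F[u][v][w] = s / (M * N * O)
    return F

def compress(frames, threshold=1e-9):
    """(a) the selected frame sequence is the 3D volume; (b) apply the 3D
    transform; (c) 'encode' by keeping only significant coefficients."""
    F = dft3(frames)
    out = []
    for u, plane in enumerate(F):
        for v, row in enumerate(plane):
            for w, c in enumerate(row):
                if abs(c) > threshold:
                    out.append((u, v, w, c))
    return out

# Four identical 2x2 frames: a perfectly static 'video' collapses to a
# single DC coefficient in the transformed domain.
frames = [[[5.0, 5.0], [5.0, 5.0]] for _ in range(4)]
stream = compress(frames)
```

A practical encoder would quantize and entropy-code the coefficients rather than store them raw; the sketch shows only how temporal correlation across frames concentrates energy under a 3D transform.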
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features of the preferred embodiments of the invention will become more apparent in the following detailed description in which reference is made to the appended drawings wherein:
Figure 1 is a schematic diagram of a multi resolution wavelet compressor;
Figure 2 is a schematic diagram of a one-stage wavelet decoder;
Figure 3 is a schematic diagram showing a single frame vector quantization technique;
Figures 4(a) and (b) are schematic diagrams of a video frame sequence for use in the present invention;
Figure 4(c) is a schematic representation of a transformed sequence;
Figure 5 is a schematic diagram of a 3D wavelet dyadic sub-cube structure in accordance with the present invention;
Figure 6 is a graph showing compression ratio versus frame depth for different media types;
Figure 7 is a flow chart showing the operation of a 3D compression system; and
Figure 8 is a flow chart showing the operation of a 3D decompression system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following description, like numerals refer to like structures in the drawings.
Referring to figure 4(a), a schematic diagram of a sequence of digitized video frames is shown generally by numeral 40. The sequence comprises N frames 42, each temporally sampled by an amount Δtn. Each frame is made up of a two-dimensional matrix of pixels. In order to compress this video frame sequence, a three-dimensional transform is applied to a three-dimensional matrix of pixels defined in the sequence of frames, defined in 3D space by (x, y, t), to yield a 3D cubic structure in the transformed domain. For the case of a 3D Fourier or cosine transform, the center of the 3D structure shall be DC (for the case of spatial-to-spectral transformations). As one leaves the center of the cubic structure, the density will decrease, since the image data information in the 3D structure dictates the spectral distribution. This is shown graphically in figures 4(b) and 4(c). Because there is a high correlation over the space defined by the (x, y, t) dimensions, there will be very high entropy in the transformed domain, which will provide for compression ratios that can approach 1000:1. The 3D algorithm may use (Δx, Δy, Δt) spatial/frame-data pixel values, where (Δx, Δy, Δt) are constant and the total number of frames (NΔt) used in the transformation shall be variable, i.e., the frame depth, depending on the scene data and the media type. In scene data, the probability is high that adjacent pixels in a frame are the same; this also applies to neighbouring pixels in adjacent frames.
Referring back to figure 4(b), a sequence of frames to be transformed is indicated by label A. The three-dimensional continuous Fourier transform, when applied to the object A defined by a function f of three independent variables x, y, and z, is:

$$\mathcal{F}\{f(x,y,z)\} = F(u,v,w) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y,z)\, e^{-j2\pi(ux+vy+wz)}\, dx\, dy\, dz$$

Using Euler's formula, this may be expressed as:

$$F(u,v,w) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y,z)\,\bigl[\cos\{2\pi(ux+vy+wz)\} - j\sin\{2\pi(ux+vy+wz)\}\bigr]\, dx\, dy\, dz$$

or:

$$F(u,v,w) = R(u,v,w) + jI(u,v,w) = |F(u,v,w)|\, e^{j\varphi(u,v,w)}$$

with:

$$R(u,v,w) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y,z)\,\cos\{2\pi(ux+vy+wz)\}\, dx\, dy\, dz$$

and

$$I(u,v,w) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y,z)\times(-1)\times\sin\{2\pi(ux+vy+wz)\}\, dx\, dy\, dz$$

The Fourier spectrum and spectral density are then defined by the following equations:

$$|F(u,v,w)| = \bigl[\{R(u,v,w)\}^2 + \{I(u,v,w)\}^2\bigr]^{1/2} = \text{3D Fourier Spectrum}$$

$$\varphi(u,v,w) = \tan^{-1}\frac{I(u,v,w)}{R(u,v,w)} = \text{3D Fourier Phase}$$

$$P(u,v,w) = \{R(u,v,w)\}^2 + \{I(u,v,w)\}^2 = \text{3D Spectral Density}$$

The transformation of the object A, which is represented by P(u,v,w), will define the three-dimensional spectral information, with DC located at the center P(0,0,0). As u, v, or w are changed, the spectral density also changes. In fact, the largest percentage of the energy within P(u,v,w) will be contained near the center P(0,0,0), with the density falling off dramatically (non-linearly) as u, v, w become non-zero.
For the case of the object A being a cubic structure, boundary conditions exist and the triple integration will result in a cubic structure with the spectral density being greatest at the center of the cubic structure. Proceeding away from the center, the spectral density will rapidly become smaller, approaching zero at the edges of the cubic structure. This is shown graphically in figure 4(c). The uniformity throughout the object A determines the rate at which the spectral density decreases from its maximum at the center of the cubic structure to zero at the edges. With a high level of uniformity throughout A, there will be a large correlation, or low entropy, in A. As a result, the 3D spectral density will exhibit high entropy. This means the rate of change of density from the center of the transformed object A will be very high.
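The energy-concentration claim above can be checked numerically with a discrete stand-in for P(u,v,w) (an illustrative sketch; the code places DC at index (0,0,0), which corresponds to the "center" after the usual spectrum shift):

```python
import cmath

def spectral_density(volume):
    """P(u,v,w) = R^2 + I^2 computed from a naive 3D DFT of the volume."""
    M, N, O = len(volume), len(volume[0]), len(volume[0][0])
    P = {}
    for u in range(M):
        for v in range(N):
            for w in range(O):
                s = sum(volume[x][y][z] * cmath.exp(
                            -2j * cmath.pi * (u * x / M + v * y / N + w * z / O))
                        for x in range(M) for y in range(N) for z in range(O))
                s /= M * N * O
                P[(u, v, w)] = s.real ** 2 + s.imag ** 2
    return P

# A highly uniform volume (a slow ramp): adjacent samples are nearly equal,
# like neighbouring pixels within and across adjacent frames.
M = 4
vol = [[[100.0 + 0.1 * (x + y + z) for z in range(M)] for y in range(M)]
       for x in range(M)]
P = spectral_density(vol)
dc_share = P[(0, 0, 0)] / sum(P.values())
```

For this uniform volume virtually all of the spectral energy sits in the DC term, matching the text: high uniformity (low entropy) in A yields density that falls off steeply away from the center.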
Rather than the continuous Fourier transform above, a three-dimensional discrete Fourier transform may be applied to the object A. In this case, for a function f(x,y,z) that is sampled in the x, y, and z dimensions by Δx, Δy, and Δz, the transform is given by the following equation:

$$F(u,v,w) = \frac{1}{MNO}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\sum_{z=0}^{O-1} f(x,y,z)\, e^{-j2\pi\left(\frac{ux}{M}+\frac{vy}{N}+\frac{wz}{O}\right)}$$

where:

u = 0, 1, 2, ..., M-1; v = 0, 1, 2, ..., N-1; w = 0, 1, 2, ..., O-1.

Using Euler's formula, this may be expressed as:

$$F(u,v,w) = \frac{1}{MNO}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\sum_{z=0}^{O-1} f(x,y,z)\left[\cos 2\pi\left(\frac{ux}{M}+\frac{vy}{N}+\frac{wz}{O}\right) - j\sin 2\pi\left(\frac{ux}{M}+\frac{vy}{N}+\frac{wz}{O}\right)\right]$$

or:

$$F(u,v,w) = R(u,v,w) + jI(u,v,w) = |F(u,v,w)|\, e^{j\varphi(u,v,w)}$$

with:

$$R(u,v,w) = \frac{1}{MNO}\sum_{x=0}^{M-1}\sum_{y=0}^{N-1}\sum_{z=0}^{O-1} f(x,y,z)\cos 2\pi\left(\frac{ux}{M}+\frac{vy}{N}+\frac{wz}{O}\right)$$
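The Euler decomposition above can be verified directly: the cosine sum for R(u,v,w) must equal the real part of the exponential form of the same DFT. A small self-check (illustrative names, arbitrary test data):

```python
import cmath
import math

def dft3_point(f, u, v, w):
    """F(u,v,w) by the exponential form of the 3D discrete Fourier transform."""
    M, N, O = len(f), len(f[0]), len(f[0][0])
    s = sum(f[x][y][z] * cmath.exp(-2j * cmath.pi * (u * x / M + v * y / N + w * z / O))
            for x in range(M) for y in range(N) for z in range(O))
    return s / (M * N * O)

def real_part(f, u, v, w):
    """R(u,v,w) by the cosine sum, exactly as in the last equation above."""
    M, N, O = len(f), len(f[0]), len(f[0][0])
    s = sum(f[x][y][z] * math.cos(2 * math.pi * (u * x / M + v * y / N + w * z / O))
            for x in range(M) for y in range(N) for z in range(O))
    return s / (M * N * O)

# An arbitrary 3x3x3 sampled volume for the consistency check.
f = [[[float((x * 3 + y * 5 + z * 7) % 11) for z in range(3)]
      for y in range(3)] for x in range(3)]
```

The identity holds at every (u,v,w) because exp(-jθ) = cos θ - j sin θ; the imaginary part I(u,v,w) would follow analogously from the sine sum.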
These and other features of the preferred embodiments of the invention will become more apparent in the following detailed description in which reference is made to the appended drawings wherein:
Figure 1 is a schematic diagram of a multi resolution wavelet compressor;
Figure 2 is a schematic diagram of a one-stage wavelet decoder;
Figure 3 is a schematic diagram showing a single frame vector quantization technique;
Figure 4(a) and (b) is a schematic diagram of a video frame sequence for use in the present invention;
Figure 4(c) is a schematic representation of a transformed sequence;
Figure 5 is a schematic diagram of a 3D wavelet dyadic sub-cube structure in accordance with the present invention;
Figure 6 is a graph showing compression ratio versus frame depth for different media types;
Figure 7 is a flow chart showing the operation of a 3D compression system; and Figure 8 is a flow chart showing the operation of a 3D decompression system.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following description, like numerals refer to like structures in the drawings.
Referring to figure 4(a), a schematic diagram of a sequence of digitized video frames is shown generally by numeral 40. The sequence comprises N frames 42 each temporally sampled by an amount Otn . Each frame is made up of a two dimensional matrix of pixels. In order to compress this video frame sequence, a three dimension transform is applied to a three dimensional matrix of pixels defined in the sequence of frames defined in 3D
space by (x,y,t) to yield a 3D cubic structure in the transformed domain. For the case of a 3D
Fourier or cosine transform the center of the 3D structure shall be DC (for the case of spatial to spectral transformations). As one leaves the center of the cubic structure the density will decrease since image data information in the 3D structures dictates the spectral distribution. This is shown graphically in figures 4(b) and 4(c). Because there is a high correlation over the space defined by the (x,y,t) dimensions, there will be very high entropy in the transformed domain which will provide for compression ratios that can approach 1000:1. The 3D algorithm may use (~, Dy, 0t). , spatial/frame data pixel values where (fix, Dy, 0t) are constant and the total number of frames (NOt) used in the transformation shall be variable, i.e., the frame depth, depending on the scene data and the media type. In scene data the probability is high that adjacent pixels in a frame are the same. This also applies to neighbouring pixels in adjacent frames.
Referring back to figure 4(b), a sequence of frames to be transformed is indicated by label A. The three-dimensional continuous Fourier transform, when applied to the object A that is defined by a function f of three independent variables x, y, and z, is:

$$\mathcal{F}\{f(x,y,z)\} = F(u,v,w) = \iiint_{-\infty}^{+\infty} f(x,y,z)\, e^{-j2\pi(ux+vy+wz)}\, dx\, dy\, dz$$

Using Euler's formula, this may be expressed as:
$$F(u,v,w) = \iiint_{-\infty}^{+\infty} f(x,y,z) \left[\cos\{2\pi(ux+vy+wz)\} - j\sin\{2\pi(ux+vy+wz)\}\right] dx\, dy\, dz$$

or:
$$F(u,v,w) = R(u,v,w) + jI(u,v,w) = |F(u,v,w)|\, e^{j\phi(u,v,w)}$$
with:

$$R(u,v,w) = \iiint_{-\infty}^{+\infty} f(x,y,z) \cos\{2\pi(ux+vy+wz)\}\, dx\, dy\, dz$$

and:

$$I(u,v,w) = \iiint_{-\infty}^{+\infty} f(x,y,z) \times \left[-\sin\{2\pi(ux+vy+wz)\}\right] dx\, dy\, dz$$

The Fourier spectrum and spectral density are then defined by the following equations:
$$|F(u,v,w)| = \sqrt{\{R(u,v,w)\}^2 + \{I(u,v,w)\}^2} = \text{3D Fourier Spectrum}$$

$$\phi(u,v,w) = \tan^{-1}\frac{I(u,v,w)}{R(u,v,w)} = \text{3D Fourier Phase}$$

$$P(u,v,w) = \{R(u,v,w)\}^2 + \{I(u,v,w)\}^2 = \text{3D Spectral Density}$$

The transformation of the object A, which is represented by P(u,v,w), will define the three-dimensional spectral information with DC located at the center P(0,0,0). As u, v, or w are changed, the spectral density also changes. In fact, the largest percentage of the energy within P(u,v,w) will be contained near the center P(0,0,0), with the density falling off dramatically (non-linearly) as u, v, w become non-zero.
For the case of the object A being a cubic structure, boundary conditions exist and the triple integration will result in a cubic structure with the spectral density being greatest at the center of the cubic structure. Proceeding away from the center of the cubic structure, the spectral density will rapidly become smaller and will approach zero at the edges of the cubic structure. This is shown graphically in figure 4(c). The uniformity throughout the object A determines the rate at which the spectral density decreases from maximum at the center of the cubic structure to zero at the edges. With a high level of uniformity throughout A, there will be a large correlation, or low entropy, in A. As a result, the 3D spectral density will exhibit high entropy. This means the rate of change of density from the center of the transformed object A will be very high.
Rather than the continuous Fourier transform above, a three-dimensional discrete Fourier transform may be applied to the object A. In this case, a function f(x,y,z) that is sampled in the x, y, and z dimensions by Δx, Δy, and Δz is given by the following equation:
$$F(u,v,w) = \frac{1}{MNO} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \sum_{z=0}^{O-1} f(x,y,z)\, e^{-j2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)}$$

where:

u = 0, 1, 2, . . . M−1; v = 0, 1, 2, . . . N−1; w = 0, 1, 2, . . . O−1.
Using Euler's formula, this may be expressed as:
$$F(u,v,w) = \frac{1}{MNO} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \sum_{z=0}^{O-1} f(x,y,z) \left[\cos 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right) - j\sin 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)\right]$$

or:
$$F(u,v,w) = R(u,v,w) + jI(u,v,w) = |F(u,v,w)|\, e^{j\phi(u,v,w)}$$
with:
$$R(u,v,w) = \frac{1}{MNO} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \sum_{z=0}^{O-1} f(x,y,z) \cos 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)$$

and:

$$I(u,v,w) = \frac{1}{MNO} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \sum_{z=0}^{O-1} f(x,y,z) \times \left[-\sin 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)\right]$$
The spectral density is given by:
$$P(u,v,w) = \{R(u,v,w)\}^2 + \{I(u,v,w)\}^2$$
For the case where N = M = O, the numerical complexity of implementation is proportional to N³. There will be 2N³ trigonometric calculations, 2N³ real multiplications, and 2N³ real additions.
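The discrete transform equations above can be checked directly. The sketch below is illustrative only (NumPy and the 4×4×4 array size are assumptions): it evaluates the triple sum term by term, with the real part accumulating cosine terms and the imaginary part accumulating negative sine terms, and verifies the result against a library FFT.

```python
import numpy as np

def dft3(f):
    """Direct 3-D DFT per the equations above: cosine terms build the
    real part R, negative sine terms build the imaginary part I, and
    the 1/(MNO) normalization is applied to each coefficient.
    Illustration only; each output coefficient costs O(MNO) work."""
    M, N, O = f.shape
    F = np.zeros((M, N, O), dtype=complex)
    for u in range(M):
        for v in range(N):
            for w in range(O):
                s = 0j
                for x in range(M):
                    for y in range(N):
                        for z in range(O):
                            ang = 2 * np.pi * (u * x / M + v * y / N + w * z / O)
                            s += f[x, y, z] * (np.cos(ang) - 1j * np.sin(ang))
                F[u, v, w] = s / (M * N * O)
    return F

rng = np.random.default_rng(1)
f = rng.standard_normal((4, 4, 4))
F = dft3(f)
# A library FFT uses the same kernel without the 1/(MNO) factor.
assert np.allclose(F, np.fft.fftn(f) / f.size)
P = F.real**2 + F.imag**2  # spectral density P(u, v, w)
```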
For video processing applications where Δx, Δy, and Δz correspond to the horizontal spatial sample rate, the vertical spatial sample rate, and the temporal sample rate respectively, the object A will be a cubic structure defined by the video input format, such as the Common Intermediate Format (CIF). This results in a three-dimensional (352, 240, z) pixel array, where z varies with the frame rate and the scene change data. The discrete Fourier transform will result in a cubic structure (352, 240, z) with the spectral density being greatest at the center (352/2, 240/2, z/2).
Proceeding away from the center, the spectral density will decrease and approach zero at the edges. The pixel correlation throughout the area (352, 240, z) determines the rate at which the spectral density decreases from maximum at the center of the cubic structure to zero at the edges.
Generally there is a high correlation of the temporal and spatial neighbours of a pixel within A. As a result, the 3D transformation will result in low correlation, or high entropy.
This means the rate of change of density from the center of the transformed object A will be very high. The rate of change is dependent on the type of video being processed and the temporal dimension z defined by scene changes. By type of video is meant talk shows, high-action movies, cartoons and the like. Thus for different types of video the spectral content will vary.
The transformed object A may be recovered by applying the appropriate inverse transform. The three-dimensional inverse discrete Fourier transform is defined as follows:

$$\mathcal{F}^{-1}\{F(u,v,w)\} = f(x,y,z) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \sum_{w=0}^{O-1} F(u,v,w)\, e^{j2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)}$$

where:

x = 0, 1, 2, . . . M−1; y = 0, 1, 2, . . . N−1; z = 0, 1, 2, . . . O−1.
Using Euler's formula, this may be expressed as:
$$f(x,y,z) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \sum_{w=0}^{O-1} F(u,v,w) \left[\cos 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right) + j\sin 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)\right]$$

or:

$$f(x,y,z) = r(x,y,z) + j\,i(x,y,z) = |f(x,y,z)|\, e^{j\theta(x,y,z)}$$

with:
$$r(x,y,z) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \sum_{w=0}^{O-1} F(u,v,w) \cos 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)$$

and:

$$i(x,y,z) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} \sum_{w=0}^{O-1} F(u,v,w) \sin 2\pi\left(\frac{ux}{M} + \frac{vy}{N} + \frac{wz}{O}\right)$$
The amplitude of f(x,y,z) is given by:

$$|f(x,y,z)| = \sqrt{\{r(x,y,z)\}^2 + \{i(x,y,z)\}^2}$$
For the case where N = M = O, the numerical complexity of implementation is proportional to N³. There will be 2N³ trigonometric calculations, 2N³ real multiplications, and 2N³ real additions.
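A round trip through the forward and inverse transforms above can be sketched as follows. This is an illustrative check only, with NumPy and the small 4×4×4 cube as assumptions; the direct triple sum is far slower than an FFT and is written out solely to mirror the equations.

```python
import numpy as np

def idft3(F):
    """Direct 3-D inverse DFT per the equations above: same kernel as
    the forward transform with the sign of the exponent flipped, and
    no normalization (it was applied in the forward direction)."""
    M, N, O = F.shape
    f = np.zeros((M, N, O), dtype=complex)
    for x in range(M):
        for y in range(N):
            for z in range(O):
                s = 0j
                for u in range(M):
                    for v in range(N):
                        for w in range(O):
                            ang = 2 * np.pi * (u * x / M + v * y / N + w * z / O)
                            s += F[u, v, w] * (np.cos(ang) + 1j * np.sin(ang))
                f[x, y, z] = s
    return f

rng = np.random.default_rng(2)
cube = rng.standard_normal((4, 4, 4))
# Forward transform with the 1/(MNO) normalization used in the text.
spectrum = np.fft.fftn(cube) / cube.size
recovered = idft3(spectrum)
assert np.allclose(recovered.real, cube)  # the original cube is recovered
```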
Those experienced in the art will see that the application of a three-dimensional Discrete Cosine Transform to highly correlated video frames will yield optimal compaction in the density of the transformed video. The Discrete Sine Transform, as well as other transforms, can also be applied to the 3D structure defined by A.
Referring to figure 5, a schematic diagram of a 3D wavelet transform applied to the 3D matrix of pixels is shown generally by numeral 50. The illustration shows a 3D wavelet dyadic sub-cube tree structure. In general, a 3D wavelet and/or a fractal algorithm may also be applied to the 3D transformation process to yield a multiresolution sub-cube with a dyadic sub-cube tree structure, where a 3D Embedded Zerotree Wavelet (EZW) coding technique can be applied. In addition, an efficient DCT (FFT) expanded for 3D can be followed with entropy coding or code books for 3D spaces (e.g., 8×8×8, 16×16×16, etc.).
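One level of the dyadic sub-cube decomposition of figure 5 can be sketched with a separable 3D Haar transform. This is an illustrative sketch only, not the EZW coder itself; the `haar3_level` helper, the band labels, and the 8×8×8 test cube are assumptions for the example.

```python
import numpy as np

def haar3_level(cube):
    """One level of a separable 3-D Haar analysis: sum/difference pairs
    along each axis in turn, yielding 8 half-size sub-cubes (the LLL
    approximation plus 7 detail bands), i.e. one step of the dyadic
    sub-cube tree."""
    def split(a, axis):
        even = a.take(range(0, a.shape[axis], 2), axis)
        odd = a.take(range(1, a.shape[axis], 2), axis)
        return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
    bands = {"": cube}
    for axis in range(3):
        bands = {name + tag: sub
                 for name, band in bands.items()
                 for tag, sub in zip("LH", split(band, axis))}
    return bands

cube = np.arange(8 ** 3, dtype=float).reshape(8, 8, 8)  # smooth ramp
bands = haar3_level(cube)
energy = {k: float((v ** 2).sum()) for k, v in bands.items()}
total = sum(energy.values())
# For smooth (highly correlated) data, nearly all of the energy lands
# in the LLL sub-cube, which is what zerotree-style coding exploits.
print(f"LLL share of energy: {energy['LLL'] / total:.3f}")
```

Repeating the split on the LLL band would grow the multiresolution tree one level deeper, as in the figure.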
Referring to figure 6, a graph showing typical compression ratios estimated by the present invention for various types of media as a function of the frame depth (z) is shown generally by numeral 60.
The basic concept, which is the subject of the present application, may be used in extending conventional transforms to a 3D fixed frame depth using such approaches as fractals, VQ, DCT and wavelets. Furthermore, optimizations can be realized as a result of the human visual system's response to contrast sensitivity and the adaptation range of the eye due to brightness levels. Optimal 3D coding techniques may also be derived by extending present 2D coding methods such as Huffman coding, arithmetic coding, and vector or surface quantization coding.
Although the compression requirements for such approaches are expected to be high, an efficient 3D variable frame depth decoder may be implemented in hardware on a desktop PC. Such a variable frame depth decoder may also be implemented using a neural network or the like.
In addition, an algorithm may be used for determining the optimal frame depth on the fly for the 3D transformation, which depends on the frame-to-frame pixel correlation of the video content. For low frame-to-frame pixel correlation (or SNR methods), a scene change is detectable and the length of the 3D matrix of pixels is determined on the fly.
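Such an on-the-fly frame-depth algorithm might be sketched as follows. The `frame_depth` helper, the correlation threshold, and the depth cap are illustrative assumptions, not values taken from the specification.

```python
import numpy as np

def frame_depth(frames, corr_threshold=0.5, max_depth=32):
    """Grow the 3-D transform cube while consecutive frames remain
    highly correlated, and cut it at a scene change (normalized
    correlation below the threshold).  Threshold and cap are
    illustrative choices."""
    depth = 1
    for prev, cur in zip(frames, frames[1:]):
        a = prev.ravel() - prev.mean()
        b = cur.ravel() - cur.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        corr = float(a @ b / denom) if denom else 1.0
        if corr < corr_threshold or depth >= max_depth:
            break
        depth += 1
    return depth

rng = np.random.default_rng(3)
base = rng.standard_normal((240, 352))  # one CIF-sized luminance frame
# Five near-identical frames, then an abrupt, unrelated frame.
scene = [base + 0.05 * rng.standard_normal(base.shape) for _ in range(5)]
scene.append(rng.standard_normal(base.shape))  # scene change
print(frame_depth(scene))  # cuts the cube at the scene change
```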
In this regard, the curves of compression ratio effectiveness vs. frame depth for each media type, as shown in figure 6, may be developed to indicate the expected performance for applications ranging from high-action movies and television broadcasts to white boarding.
For lossless compression, 1D, 2D or 3D entropy coding can be used to achieve >70%
compression. For lossy compression, a 3D Pixel Quantization mask is applied before entropy coding to achieve larger compression ratios.
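The lossy path (a quantization mask applied before entropy coding) can be sketched as below. This is illustrative only: `zlib` stands in for the entropy coder, and the synthetic coefficient cube and quantizer step are assumptions, not values from the specification.

```python
import zlib
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for a transformed 3-D block: energy concentrated near DC,
# with small residual coefficients elsewhere.
u, v, w = np.meshgrid(*[np.arange(-8, 8)] * 3, indexing="ij")
coeffs = 100.0 * np.exp(-0.5 * (u**2 + v**2 + w**2)) \
         + 0.2 * rng.standard_normal((16, 16, 16))

raw = coeffs.astype(np.float32).tobytes()

# Quantization zeroes the many small coefficients (the lossy step),
# after which the entropy coder removes the resulting redundancy.
step = 1.0  # illustrative quantizer step
q = np.round(coeffs / step).astype(np.int16)
packed = zlib.compress(q.tobytes(), 9)
print(f"raw {len(raw)} B -> coded {len(packed)} B "
      f"({1 - len(packed) / len(raw):.0%} smaller)")
```

Because almost every quantized coefficient is zero, the coded stream is a small fraction of the raw coefficient data, at the cost of discarding sub-step detail.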
Referring to figure 7, a flow chart of the general steps implemented in a 3D compression system is shown generally by numeral 70. Similarly, referring to figure 8, a flow chart of the general steps implemented in a 3D decompression system is shown generally by numeral 80. The descriptions in each block therein are incorporated herein.
Although the invention has been described with reference to certain specific embodiments, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto.
Claims (25)
THE EMBODIMENTS OF THE INVENTION IN WHICH AN EXCLUSIVE PROPERTY OR PRIVILEGE IS CLAIMED ARE DEFINED AS FOLLOWS:
1. A method of compressing a data signal, the method comprising the steps of:
(a) selecting a sequence of image frames, the sequence being part of a video stream;
(b) applying a three dimensional transform to the selected sequence to produce a first transformed output; and (c) encoding the transformed output to produce a compressed stream output.
2. A method as defined in claim 1, said frames being represented as analog data.
3. A method as defined in claim 1, said frames being represented as digital data representing an array of pixels arranged in two dimensions.
4. A method as defined in claim 1, said transform being a 3D Fourier Transform to produce a 3D coefficient array output.
5. A method as defined in claim 1, said transform being a 3D cosine transform to produce a 3D coefficient array output.
6. A method as defined in claim 1, said transform being a 3D wavelet transform to produce a 3D coefficient array output.
7. A method as defined in claim 1, said transform being a 3D fractal transform to produce a 3D coefficient array output.
8. A method as defined in claim 1, including the step of determining whether lossless compression is to be applied prior to encoding the transformed output.
9. A method as defined in claim 1, including quantizing the transformed output to produce a 3D quantized coefficient array.
10. A method as defined in any of claims 4, 5, 6, or 7, including the step of quantizing the 3D coefficient array values to produce a 3D quantized coefficient array.
11. A method as defined in claim 1, said encoding including selecting one of a run length limited, 1D, 2D, or 3D entropy coding.
12. A method as defined in claim 1, including the step of caching the compressed stream output in a database.
13. A method as defined in claim 12, including the step of applying multipass compression.
14. A method as defined in claim 12, including transmitting said compressed stream over a communication medium to a receiver.
15. A method as defined in claim 14, including the step of transmitting said compressed stream as packet data.
16. A method for communicating compressed data between a media server and a client in a data communication network, the method comprising the steps of:
(a) selecting a sequence of image frames as part of a video stream to be transmitted to the recipient;
(b) applying a three dimensional transform to the selected sequence to produce a first transformed output;
(c) encoding the transformed output to produce a compressed stream output;
(d) transmitting said compressed stream output to said recipient;
(e) said recipient decoding said compressed stream data; and (f) applying an inverse of said three dimensional transform to the decoded data to produce an uncompressed frame sequence.
17. A method as defined in claim 16, including the step of formatting said compressed data into coded data blocks prior to decoding said compressed data.
18. A method for storing data on a media server, said method comprising the steps of receiving a media stream to be stored on said server;
selecting a sequence of frames of said media;
applying a three dimensional transform to the selected sequence to produce a first transformed output and encoding the transformed output to produce a compressed stream output; and storing said compressed stream on said server.
19. A method as defined in claim 18, including storing said compressed output on a digital tape.
20. A method as defined in claim 18, including the step of storing said compressed output on a digital video disk.
21. A method for communicating compressed data between a media source and a customer, the method comprising the steps of:
selecting a sequence of image frames of said media by said source;
applying a three dimensional transform to the selected sequence to produce a first transformed output;
encoding the transformed output to produce a compressed stream output;
storing said compressed output on a portable storage medium; and providing said portable medium to said customer.
22. A method as defined in claim 21, said portable medium for use in a home entertainment system.
23. A method as defined in claim 1, including using said compressed output in a video electronic mail or video advertising system.
24. A method as defined in claim 1, including transmitting said compressed stream output by wireless, cable, ADSL or similar medium to a recipient.
25. A method as defined in claim 1, including the step of using said compressed output in a video on demand system.
Priority Applications (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA 2277373 CA2277373A1 (en) | 1999-05-21 | 1999-07-09 | Multi-dimensional data compression |
CA 2280662 CA2280662A1 (en) | 1999-05-21 | 1999-09-02 | Media server with multi-dimensional scalable data compression |
AU26528/00A AU2652800A (en) | 1999-05-21 | 2000-02-15 | Media server with multi-dimensional scalable data compression |
AU26530/00A AU2653000A (en) | 1999-05-21 | 2000-02-15 | System and method for streaming media over an internet protocol system |
PCT/CA2000/000132 WO2000072602A1 (en) | 1999-05-21 | 2000-02-15 | Multi-dimensional data compression |
PCT/CA2000/000131 WO2000072599A1 (en) | 1999-05-21 | 2000-02-15 | Media server with multi-dimensional scalable data compression |
PCT/CA2000/000133 WO2000072517A1 (en) | 1999-05-21 | 2000-02-15 | System and method for streaming media over an internet protocol system |
AU26529/00A AU2652900A (en) | 1999-05-21 | 2000-02-15 | Multi-dimensional data compression |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA2,272,590 | 1999-05-21 | ||
CA 2272590 CA2272590A1 (en) | 1999-05-21 | 1999-05-21 | System and method for streaming media over an internet protocol system |
CA 2277373 CA2277373A1 (en) | 1999-05-21 | 1999-07-09 | Multi-dimensional data compression |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2277373A1 true CA2277373A1 (en) | 2000-11-21 |
Family
ID=31189136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA 2277373 Abandoned CA2277373A1 (en) | 1999-05-21 | 1999-07-09 | Multi-dimensional data compression |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2277373A1 (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2000072602A1 (en) | Multi-dimensional data compression | |
KR100308627B1 (en) | Low bit rate encoder using overlapping block motion compensation and zerotree wavelet coding | |
Marpe et al. | Very low bit-rate video coding using wavelet-based techniques | |
Sudhakar et al. | Image compression using coding of wavelet coefficients–a survey | |
KR100664928B1 (en) | Video coding method and apparatus thereof | |
Ohta et al. | Hybrid picture coding with wavelet transform and overlapped motion-compensated interframe prediction coding | |
US8340181B2 (en) | Video coding and decoding methods with hierarchical temporal filtering structure, and apparatus for the same | |
US20050163217A1 (en) | Method and apparatus for coding and decoding video bitstream | |
Xing et al. | Arbitrarily shaped video-object coding by wavelet | |
KR20050028019A (en) | Wavelet based coding using motion compensated filtering based on both single and multiple reference frames | |
Bernatin et al. | Video compression based on Hybrid transform and quantization with Huffman coding for video codec | |
US6760479B1 (en) | Super predictive-transform coding | |
CA2552800A1 (en) | Video/image coding method and system enabling region-of-interest | |
Singh et al. | JPEG2000: A review and its performance comparison with JPEG | |
KR20000031283A (en) | Image coding device | |
CA2277373A1 (en) | Multi-dimensional data compression | |
Lee et al. | Subband video coding with scene-adaptive hierarchical motion estimation | |
Efstratiadis et al. | Image compression using subband/wavelet transform and adaptive multiple-distribution entropy coding | |
Sengupta et al. | Computationally fast wavelet-based video coding scheme | |
Shavit et al. | Group testing for video compression | |
Young | Software CODEC algorithms for desktop videoconferencing | |
Cheong et al. | Significance tree image sequence coding with DCT-based pyramid structure | |
Mandal | Digital image compression techniques | |
Indoh et al. | The video coder with filterbank and subband motion estimation and compensation | |
Rinaldo | G. CALVAGNO |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
FZDE | Discontinued | ||
FZDE | Discontinued |
Effective date: 20040709 |