GB2425011A - Encoding video data using a transformation function - Google Patents


Info

Publication number
GB2425011A
GB2425011A (application GB0507092A)
Authority
GB
United Kingdom
Prior art keywords
signal
data
transformation
functions
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0507092A
Other versions
GB0507092D0 (en)
Inventor
Ely Jay Malkin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEAMUPS LIMITED
Original Assignee
BEAMUPS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEAMUPS Ltd filed Critical BEAMUPS Ltd
Priority to GB0507092A priority Critical patent/GB2425011A/en
Publication of GB0507092D0 publication Critical patent/GB0507092D0/en
Priority to PCT/GB2006/001296 priority patent/WO2006106356A1/en
Priority to GB0607067A priority patent/GB2425013A/en
Publication of GB2425011A publication Critical patent/GB2425011A/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/99 Coding techniques not provided for in groups H04N19/10-H04N19/85, involving fractal coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/127 Prioritisation of hardware or computational resources
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/17 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/51 Motion estimation or motion compensation

Abstract

Method and apparatus for encoding a signal comprising: performing a transformation on the signal using a transformation function 140; generating at least one output parameter from the transformation; associating a synchronisation signal 75 with the output parameter; and providing a data output 130 comprising the synchronisation signal 75 and the output parameter. There is also provided a method and apparatus for decoding an encoded signal comprising: receiving a data input containing a synchronisation signal 75 and a parameter 95 derived from a transformation 140 performed on the signal during encoding; maintaining synchronisation of the decoding method using the synchronisation signal 75; and generating a signal by performing a function corresponding to the transformation 140 on the output parameter. Data describing the functions may be stored in databases at the encoder and decoder, and means to maintain synchronisation between the functions at the encoder and decoder may be provided.

Description

ENCODING AND DECODING A SIGNAL
Background of the Invention
The present invention relates to a method and apparatus for encoding and decoding a signal.
The storage and transmission of data requires sufficient permanent memory to store these data and sufficient bandwidth to transmit the data. As data volumes increase, these finite resources become ever more stretched.
The situation is particularly acute when the data are audio or visual information. Ever increasing generation and use of such data will further stretch the bandwidth requirements of data transmission services such as those that use fixed line telephony, mobile telephony, satellite, or other radio communication techniques. The situation will only get worse with the introduction of high definition television (HDTV) services, and the increasing demands of users of mobile devices to view live motion video.
Where an encoded signal is to be transmitted it is convenient to transmit the signal in a standard format that may be interpreted and decoded by a standard technique on a number of different platforms, such as a mobile phone or a desktop computer, for instance. This allows a single encoded signal to be transmitted or stored as a single, unique file that may be read by many types of output device.
Each of these output devices may reconstruct the signal to a different resolution or quality. The resolution of the reconstructed signal is, therefore, only dependent on the resources of the reconstructing device. Furthermore, the process is effectively independent of any particular platform or environment.
Taking an analogue camcorder as an example of a device for generating such data, a moving image is recorded electrically by focussing the images on a CCD sensor. The light intensity falling on each pixel is converted into an electrical charge. The electrical charge from each pixel is read out from the CCD forming an analogue signal which represents the image recorded on the CCD. This signal is processed and stored on a magnetic tape or other suitable recording medium. Digital camcorders work in a similar way except that the output signal is either converted into a digital signal or recorded as a digital signal from the CCD sensor itself. The colour component is achieved by either splitting the image into more than one filtered image and directing each of these components to a separate CCD, or applying a colour filter mask directly to a single CCD. It should be noted that the data are recorded "raw" on to the magnetic tape or other suitable recording medium and this represents a vast amount of information for each second of video.
Each colour component can be considered as a 1-dimensional (1-D) signal. The combined colour audio/video signal, therefore, comprises several 1-D signals, typically three individual colour components and a set of audio channels (e.g. two channels for stereo sound). Typically, the output from a digital camcorder represents approximately 3.6 MB per second. Therefore, when storing or transmitting such data (on a DVD or other suitable medium, for instance) it is advantageous to compress the data using a ubiquitous compression technique that may be read by many different devices. Compression of data is often necessary where the encoded data is to be transmitted, via a wireless connection, e.g. to a personal computer. The most popular video compression standard used is MPEG. MPEG-1 is suitable for storing about one hour of video on a CD-ROM, but this is at the expense of video quality, which results from a reduced sample rate as well as other compression artefacts.
The subsequent MPEG-2 standard is considerably broader in scope and of wider appeal. For example, MPEG-2 supports interlace and High Definition TV (HDTV) whereas MPEG-1 does not. MPEG-2 has become very important because it has been chosen as the compression scheme for both DVB (Digital Video Broadcasting) and DVD (Digital Video Disk).
MPEG-4 achieves further compression than MPEG-2 and is used in such applications as storing video from a digital camera onto solid state memory devices.
Certain compression techniques are lossless and therefore result in no loss of signal quality. However, compression efficiency is usually quite low for these techniques. Therefore, lossy compression techniques are usually chosen. All lossy compression techniques necessitate a trade-off between output quality and data volume. Furthermore, it is not possible to recover a higher resolution from the decompressed signal than was encoded within the compressed data.
It is therefore desirable to provide a method and apparatus for encoding and decoding a 1-D signal or several 1-D signals making up a composite signal in order to reduce storage volume and bandwidth requirements. It is also desirable that the quality of the decoded signal, with respect to the original signal, is largely independent of the storage volume and bandwidth requirements of the encoded signal.
Summary of the Invention
In a first aspect of the present invention there is provided a method for encoding a signal comprising the steps of performing at least one transformation on the signal using at least one transformation function thereby generating at least one output parameter from the transformation function; associating a synchronisation signal with the at least one output parameter; and providing a data output comprising the synchronisation signal and the at least one output parameter. This has the advantage that a signal, such as a 1-D signal, may be encoded into a unique file format that may be transmitted or stored in a device independent form. The same file may be used by many different devices to decode and regenerate the original signal (that may be a 1-D signal or a combination of several 1-D signals) to whatever level of precision or quality is required. This has the advantage that the output quality of the regenerated signal is only dependent on the processing or memory resources of the decoding device. Furthermore, the signal may be transmitted or stored as this unique file and in a form that requires much lower bandwidth or storage volume than would be required to transmit or store the original signal.
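The encoding steps named in this aspect can be sketched in a few lines of Python. This is a minimal illustration only, not the patented implementation; the function and field names (`encode_segment`, `"sync"`, `"params"`) are invented for the example.

```python
def encode_segment(segment, transform, sync_bit):
    """Encode one signal segment: apply a transformation function to
    generate at least one output parameter, then associate a
    synchronisation signal with it to form the data output."""
    output_parameters = transform(segment)  # at least one output parameter
    return {"sync": sync_bit, "params": output_parameters}

# Example: a trivial transformation that models a segment by its mean level.
mean_level = lambda seg: [sum(seg) / len(seg)]
packet = encode_segment([1.0, 2.0, 3.0, 4.0], mean_level, sync_bit=1)
print(packet)  # {'sync': 1, 'params': [2.5]}
```

Any real transformation function would produce a richer parameter set, but the shape of the data output (synchronisation signal plus parameters) is the same.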
The signal may be modelled by one or more modelisation functions. These modelisation functions correspond to the transformation used to generate the output parameters during the encoding process. The same modelisation function may be used within the decoding process to regenerate the signal.
Optionally, the output parameters are associated with a reference to the modelisation function used to generate the output parameter during the encoding process and to regenerate the signal during the decoding process. These references may be added to the data output.
Optionally, the signal is an analogue signal and the method further comprises the step of: converting the analogue signal into a digital signal before performing the at least one transformation on the digital signal. This has the advantage that the signal may be processed in the digital domain and so enables the use of known digital processing techniques.
Optionally, at least some or all of the processing may be performed using analogue techniques such as band pass filters, for instance. This has advantages over digital techniques such as retaining all of the information present in the original signal, especially high frequency information. Digital processing requires analogue to digital converters that require anti-aliasing filters that remove high frequency information from the original signal.
Such anti-alias filtering is not required with analogue processing. Furthermore, analogue processing is more efficient in terms of processing speed than digital processing.
Preferably, a portion of the processing will be performed in the analogue domain and a portion will be performed in the digital domain. More preferably, digital processing techniques will be used to extract the modelisation function parameters.
Preferably, the method further comprises the step of dividing the signal into discrete time segments before performing the at least one transformation and wherein the at least one transformation is performed on each discrete time segment of the signal. This allows the continuous signal to be processed in discrete sections using one or more clock cycles of a digital processor.
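Dividing the signal into discrete time segments, as described above, can be sketched as follows; the function name and fixed segment length are illustrative assumptions, not taken from the patent.

```python
def segment_signal(samples, segment_length):
    """Divide a sampled signal into fixed-length discrete time segments,
    so each segment can be transformed in one or more processor cycles."""
    return [samples[i:i + segment_length]
            for i in range(0, len(samples), segment_length)]

segments = segment_signal(list(range(10)), 4)
print(segments)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Note the final segment may be shorter than the others; a real encoder would pad it or handle it as a special case.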
Conveniently, one or more additional transformation functions may be generated and performed on the signal in response to at least one property of the signal. The advantage of this is that where the decoded signal does not match the original signal to within specified limits (a difference operation is performed on the original and decoded signals) further additional transformation functions and associated modelisation functions may be retrieved or generated and used to encode the signal to improve the resulting decoded signal. These additional modelisation functions and associated transformation functions may be stored within the database and retrieved or generated as and when required.
Optionally, the method may further comprise the step of: adding the details of the additional transformation functions to the data output. The output parameters are associated with the transformation function used to generate them, within the data output. The details of the corresponding modelisation function may also be added to the data output. Alternatively, only the details of the modelisation function required to regenerate the signal from the output parameters are added to the data output. This provides the decoder with the details of the modelisation function so that the decoder can regenerate the signal from the parameters contained within the data file. This is because the decoder database may not contain the modelisation function used. Optionally, the modelisation function may be retained, along with a reference to it, within the database of the decoder device for subsequent reference and use. An existing modelisation function contained within the database of the decoder may be updated in response to the presence of instructions contained within the output data from the encoder device. The output data may also contain instructions to delete particular modelisation functions from the database. These instructions may be in the form of flags or other data. In this way, the encoder and decoder databases may be kept up to date with each other. They may also be maintained so as to contain the most suitable modelisation functions for the properties of the particular signal being encoded and decoded.
Preferably, at least a portion of the data content of the data stream is compressed. This is in contrast to the prior art method of compressing the data content of the signal itself. This further reduces the storage volume and bandwidth required by the signal. The compression technique used would be any suitable lossless compression technique.
According to a preferred embodiment of the present invention, the method further comprises the step of: separating the signal into a set of separate signals each corresponding to a different frequency band before performing the at least one transformation on each of the separate signals. This separation of signals may be performed with a filter bank where each filter isolates a particular frequency band from the original signal. This allows each signal to be processed in parallel by modelisation functions suitable for that particular frequency band. The signal may be separated into separate signals based on other properties of the signal, such as amplitude or phase, using suitable filters.
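The filter-bank separation described above can be illustrated with a crude two-band split in pure Python: a moving average stands in for the low-pass filter, and the residual stands in for the high band. This is only an analogy for the patented filter bank, with invented names, and a real implementation would use proper band-pass filters per frequency band.

```python
def two_band_split(samples, window=3):
    """Split a signal into a low-frequency band (moving average) and the
    high-frequency residual, mimicking a two-filter bank. The two bands
    can then be processed in parallel by suitable modelisation functions."""
    half = window // 2
    low = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        low.append(sum(samples[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(samples, low)]
    return low, high

samples = [1, 1, 1, 5, 1, 1]
low, high = two_band_split(samples)
# The two bands sum back to the original signal sample-for-sample.
```

The same pattern extends to a bank of N filters, each isolating one frequency band of the original signal.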
Preferably, the method further comprises the steps of: digitising a portion of the signal; and adding the digitised portion to the data output. This step allows artefacts to be encoded as either raw or compressed data. It allows portions of the signal that cannot be encoded using any of the available modelisation functions or additional functions to be encoded and added to the data output directly in digital form.
Optionally, the method further comprises the step of: storing a portion of the data output as referenced data in a memory buffer so that repetitive signal portions can be identified and duplicated in the data output. This further optimises the output data by referencing repetitive signals and encoding a reference to the repeated portion of the signal instead of re-encoding the complete set of parameters extracted for that particular signal portion. This allows further efficiency improvements as fewer parameters need to be transmitted or stored.
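The referencing of repetitive signal portions described above can be sketched as a deduplication pass over the encoded parameter blocks. The scheme below (a dictionary keyed on block contents, emitting `("ref", index)` entries) is an assumed illustration, not the patent's format.

```python
def deduplicate(parameter_blocks):
    """Replace repeated parameter blocks with a back-reference into a
    memory buffer, so repetitive signal portions are encoded only once."""
    buffer, output = {}, []
    for block in parameter_blocks:
        key = tuple(block)
        if key in buffer:
            output.append(("ref", buffer[key]))   # reference to earlier block
        else:
            buffer[key] = len(output)
            output.append(("data", block))
    return output

stream = deduplicate([[3, 1], [2, 7], [3, 1]])
print(stream)  # [('data', [3, 1]), ('data', [2, 7]), ('ref', 0)]
```

A reference costs a single index rather than the full parameter set, which is where the bandwidth saving comes from.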
In a second aspect of the present invention there is provided a method for decoding an encoded signal comprising the steps of: receiving a data input containing a synchronisation signal and at least one output parameter derived from at least one transformation performed on the signal during encoding; maintaining synchronisation of the decoding method using the synchronisation signal; and generating a signal by performing at least one function corresponding to the at least one transformation on the at least one output parameter. The function corresponding to the at least one transformation is a modelisation function used to regenerate the signal from the output parameter or parameters contained within the data input. This decoding method corresponds to the encoding method described above and has the same advantages as that method.
Optionally, the method may further comprise the step of: performing a function contained within the data input on the at least one output parameter to generate the signal.
This allows additional modelisation functions to be sent with the data stream when the decoded signal does not match the encoded signal within predetermined limits (as discussed above). A transformation function is used to generate extracted parameters. This transformation function may be associated with the parameters in the data input.
Alternatively, the corresponding modelisation function used to regenerate the signal within the decoder may be associated with parameters in the data input.
Advantageously, the method further comprises the step of at least partially decompressing the data contained within the data stream. This allows compressed data containing the output parameters to be decompressed and so an even smaller volume of data is required to describe the signal.
Preferably, the method may further comprise the step of converting the generated signal into a digital signal. This has the advantage of enabling the output signal to be used by digital equipment and processes. The decode process may be conducted in the digital or analogue domains or a combination of both. Therefore, the output signal may be analogue or digital depending on its intended purpose.
Preferably, the at least one transformation is stored as data describing the at least one transformation in a database and wherein the method further comprises the step of: retrieving data describing the at least one transformation from the database. Advantageously, the use of a database within the encoding and decoding processes allow the transformation and modelisation functions to be stored in a convenient manner.
Preferably, the method further comprises the step of maintaining synchronisation between the functions contained within an encoder database (50) and the functions contained within a decoder database (260). This allows the set of transformation and modelisation functions to be optimised in response to the content of the signal. This also allows additional functions to be added to the databases within the encoder and decoder when the existing functions cannot be used to regenerate the signal to within predefined limits.
This also allows commonly used additional functions to be stored within the databases and so avoids the need to repeatedly store or transmit these additional functions within the data output file. The data output may contain additional information for amending, adding or deleting the functions within the decoder database. These may be in the form of flags or similar data types.
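Keeping the decoder database in step with the encoder via flagged instructions, as described above, might look like the following. The flag names and dictionary representation are assumptions made for the example; the patent does not specify a wire format.

```python
def apply_database_updates(functions, instructions):
    """Apply add/update/delete flags carried in the encoder's data output
    to the decoder's database of modelisation functions, keeping the two
    databases synchronised."""
    for flag, ref, definition in instructions:
        if flag in ("add", "update"):
            functions[ref] = definition
        elif flag == "delete":
            functions.pop(ref, None)
    return functions

db = {"f1": "sine-model"}
db = apply_database_updates(db, [("add", "f2", "poly-model"),
                                 ("delete", "f1", None)])
print(db)  # {'f2': 'poly-model'}
```

Once a commonly used additional function has been added this way, subsequent data outputs need only reference it rather than retransmit its definition.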
Preferably, the encoded signal is decoded with a precision that may be varied. This has the advantage of decoding a single encoded file using different decoding devices. The output signal is, therefore, independent of the data file and is only dependent on the resources of the decoding device.
In a third aspect of the present invention there is provided an apparatus for encoding a signal comprising: a database containing details of at least one transformation; means for maintaining synchronisation; and a processor adapted to perform at least one transformation on the signal using the at least one transformation retrieved from the database thereby generating an output parameter and form a data output containing the synchronisation signal and the output parameter. This apparatus corresponds to the encoding method and therefore has the same advantages. The means for maintaining synchronisation may be a clock signal or a synchronisation signal contained within the signal itself, such as an NTSC, PAL or SECAM synchronisation signal, for instance.
Preferably, the apparatus may further comprise: means to compress at least some of the data output. This has the advantage of further reducing the storage and bandwidth requirements of the signal.
Advantageously, the signal is an analogue signal and the apparatus further comprises: an analogue to digital converter for converting the analogue signal into a digital signal. This has the advantage of working in the digital domain and so utilising digital signal processing techniques. The apparatus may contain analogue processing means such as filters, for instance. Therefore, some or all of the processing may take place in the analogue or digital domains.
Preferably, the apparatus further comprises memory to store discrete segments of the digital signal. This enables the continuous signal to be split up and stored in discrete signal segments whilst they await processing.
In a fourth aspect of the present invention there is provided an apparatus for decoding a signal comprising: a database containing details of at least one function; means for maintaining synchronisation; and a processor arranged to receive a data input containing a synchronisation signal and an output parameter produced from a transformation on the signal, retrieve the details of at least one function corresponding to the transformation on the signal from the database and generate a signal from the output parameter using the retrieved details of the at least one function.
This apparatus corresponds with the method for decoding the signal and, therefore, has the same advantages.
Advantageously, the apparatus may comprise means for decompressing at least a portion of the data contained in the data input. This allows the signal to be described by an even smaller volume of data.
The present invention also extends to a computer program comprising instructions that, when executed on a computer cause the computer to perform the method for encoding a signal, as described above. The present invention also extends to a computer program comprising instructions that, when executed on a computer cause the computer to perform the method for decoding a signal, as described above.
The present invention also extends to a computer programmed to perform the method for encoding a signal, as described above. The present invention also extends to a computer programmed to perform the method for decoding a signal, as described above.
Brief Description of the Figures
The present invention may be put into practice in a number of ways and a preferred embodiment will now be described by way of example only, and with reference to the accompanying drawings, in which: Figure 1 shows a flow diagram of the method of encoding an analogue signal from a CCD sensor and forming a data stream, including a database containing decomposition and transformation functions, to produce an encoded data stream in accordance with a first embodiment of the present invention; Figure 2 shows a flow diagram for decoding the data stream produced in Figure 1 and for reconstructing an analogue signal in accordance with a first embodiment of the present invention; Figure 3 shows a flow diagram of the initial steps in processing the signal of Figure 1 including converting the analogue signal into a digital signal and storing the digital signal in a set of memory banks; Figure 4 shows a schematic diagram of the generation of decomposition and transformation functions from properties of the analogue signal in accordance with a first embodiment of the present invention; Figure 5 shows a schematic diagram of the generation of the data stream of Figure 1 from the analogue signal; Figure 6 shows a schematic diagram of the reconstruction of an output signal from the data stream of Figure 1; and Figure 7 shows a schematic diagram of a method of encoding an analogue signal, including a filter bank, in accordance with a second embodiment of the present invention.
Detailed Description of a Preferred Embodiment
Figure 1 shows a flow chart describing a method for encoding the signal according to one aspect of the present invention. The encoding process starts when a sensor 20 generates an analogue input signal 30. This analogue input signal 30 may comprise one or more 1-D signals. Each 1-D signal may correspond to a particular colour signal, a composite signal or an audio channel, for instance. Where multiple 1-D signals are present, each 1-D signal is processed separately. The following description relates to the processing of an individual 1-D signal, but it will be appreciated by the skilled person that it is possible to process multiple 1-D signals in parallel. The sensor 20 may be a CCD or a microphone or other electronic device suitable for detecting a sound or image and generating an electronic signal from it. The device generating the electronic signals may be a video camera, for example. Such devices generate an analogue input signal 30 and this analogue input signal 30 is processed in a signal processor 35 to form a digital signal 40. The analogue input signal 30 is filtered and digitised before being sampled in discrete time interval components. This process is shown by the group of components 45 enclosed by dashed lines and is described in further detail and with reference to Figure 3.
The digital signal 40 is passed to the signal transform component 60. The signal transform component 60 also retrieves details from database 50. Database 50 stores the details of the transformation and modelisation functions 140 that are to be used to encode the signal 40. The signal transform component 60 performs all of the transformations contained in the database on the digital signal 40.
The encoding process 10 is continuous (although the analogue input signal 30 can be processed in discrete digital signal 40 components as described in Figure 3) and synchronisation of the process is maintained by the bit allocation component 70, which generates a synchronisation bit 75 for each discrete signal component provided to signal processor 35. As the digital signal 40 is transformed using the transformation and decomposition functions 140, a continuous stream of output parameters is generated by the signal transform component 60.
Next, the quantisation component 80 quantises the signal provided by the signal transform component 60 thereby reducing the precision of the output parameters of that signal and further reducing the volume of data generated by the encoding process 10. The quantisation process 80 produces quantised parameters 85 by removing a predetermined number of least significant data from the output parameters, i.e. introducing a predetermined rounding error to the output parameters.
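The quantisation step described above, removing a predetermined number of least-significant bits and so introducing a known rounding error, can be sketched as follows. The function name and the choice of integer parameters are assumptions for the example.

```python
def quantise(parameters, dropped_bits):
    """Quantise integer output parameters by removing a predetermined
    number of least-significant bits, i.e. introducing a predetermined
    rounding error while reducing the data volume."""
    return [(p >> dropped_bits) << dropped_bits for p in parameters]

print(quantise([1023, 514, 7], 3))  # [1016, 512, 0]
```

Dropping three bits here bounds the rounding error at 7 per parameter, a trade-off between precision and output data volume.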
Next, the extraction of significant parameters component 90 determines the set of parameters that are to be extracted from the quantised parameters 85 to describe the analogue input signal 30 and form a set of extracted parameters 95. The precision of the signal is determined by choice of these extracted parameters 95.
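One plausible reading of the extraction step above is selecting the parameters of largest magnitude, since these carry the most signal energy; the patent does not fix the selection criterion, so the following is an assumed sketch with invented names.

```python
def extract_significant(quantised_params, count):
    """Select the most significant parameters (largest magnitude) and
    remember their positions, so the decoder can place them correctly.
    The chosen count determines the precision of the described signal."""
    indexed = sorted(enumerate(quantised_params),
                     key=lambda iv: abs(iv[1]), reverse=True)
    return sorted(indexed[:count])  # (index, value) pairs in signal order

print(extract_significant([0, 90, -120, 4], 2))  # [(1, 90), (2, -120)]
```

Raising `count` reproduces the signal more faithfully at the cost of a larger data output.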
The external transform for exception handling component 100 retrieves or generates variable and adaptive functions 108 when the extracted parameters 95 cannot adequately describe the analogue input signal 30. These variable and adaptive functions 108 are added to the extracted parameters to form signal 105. This component 100 is described in more detail below.
Next, the extracted parameters and any added variable and adaptive function data 108 are compressed using compression techniques familiar to the person skilled in the art. This compression step is performed in the entropy coding component 110 to form compressed data 115. Following this, the transmitter 120 formats the compressed data 115 into an output stream 130 that may be read by a suitable decoding method 200, as described below. The synchronisation bit 75 described above is added by the transmitter 120 to the header of the output data stream 130. The bit allocation component 70 runs in parallel with all of the processes from the quantisation component 80 to the entropy coding step 110.
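The framing performed by the transmitter might be sketched as follows, with `zlib` standing in for the entropy coder and a one-byte field carrying the synchronisation bit; the exact header layout is an assumption for illustration, not taken from the patent:

```python
import struct
import zlib

def format_output_stream(sync_bit, params):
    """Hypothetical framing for the output stream: compress the integer
    parameters (zlib stands in for the entropy coder) and prepend a header
    carrying the synchronisation bit and the payload length."""
    payload = zlib.compress(struct.pack(f"={len(params)}i", *params))
    header = struct.pack("=BI", sync_bit, len(payload))  # 1 byte + 4 bytes
    return header + payload

def parse_header(stream):
    """Decoder side: recover the synchronisation bit and payload length."""
    return struct.unpack("=BI", stream[:5])
```

The decoder reads the fixed-size header first, which is how the bit allocation component can resynchronise on each discrete signal component.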
Figure 2 shows a flow diagram describing the process of generating an analogue output signal 280 by decoding the data stream 130 generated by the encoding process 10 described in Figure 1. As was mentioned above in relation to the encoding process, the decoding process relates to the decoding of a 1-D signal. However, it is possible to perform the following decoding process on many 1-D streams of data in parallel and recombine each decoded 1-D signal to form the original multi-dimensional signal. In essence this decoding process 200 works as a reverse of the encoding process 10. The decoding process 200 begins at the foot of Figure 2, where a receiver 210 receives the data stream 130.
The synchronisation bit 75, contained within the header of the data stream 130, is used by the bit allocation component 240 to maintain synchronisation between the input data stream and the resultant analogue output signal 280.
In parallel, the compressed data 215 within the data stream 130 is decompressed by the entropy decoding component 220, forming decompressed data 225.
Any variable and adaptive functions 108 contained within the input data stream 130 are removed by the external transform for exception handling component 230. This component is discussed below.
Next, the signal transform process 250 reconstructs a digital signal 270 from the input parameters contained within the data stream 130. Database 260 contains an identical set of transformation and decomposition functions to that of the encoding database 50, as shown in Figure 1. The signal transform process 250 uses the transformation and decomposition functions 140 retrieved from the database 260 to generate the digital signal 270 from the parameters contained within the input data stream 130.
During the encoding process 10 each function 140 within the database 50 is performed on the digital signal 40. In a similar way the output digital signal 270 is generated by performing in reverse each of those same functions 140 on the input parameters contained within the data stream 130 to reconstruct the digital signal 270 corresponding to the original digital signal 40.
The output digital signal 270 may be optionally converted into an analogue output signal 280, corresponding to the original analogue signal 30, and directed to an output device. This may be the original CCD sensor 20 or another device using data in the digital domain 290.
Figure 3 shows the group of components 45 enclosed within the dashed line 45 of Figure 1. These components are concerned with the initial processing of the analogue input signal 30 forming each of the discrete digital signals 40.
The analogue signal 30 is processed by the signal processor 35, which has the following components. First, the analogue signal 30 is filtered using an anti-aliasing filter 300. This filters out higher frequency components of the analogue signal 30 that may distort when sampled by the analogue to digital converter 320. The anti-aliasing filter 300 works according to the Nyquist criterion and in accordance with Shannon's sampling theorem. These criteria require the analogue to digital converter 320 to sample the analogue input signal 30 at at least twice the maximum frequency component of the analogue signal 30.
These criteria are well-known to the person skilled in the art of digital electronics and so require no further explanation here.
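A small illustration of why the anti-aliasing filter matters: without it, any component above half the sample rate folds back into the sampled band and cannot be distinguished from a genuine in-band component. The helper below computes the apparent (aliased) frequency; it is a textbook calculation added for illustration, not part of the described apparatus:

```python
def alias_frequency(f_signal, f_sample):
    """Frequency observed after sampling without an anti-aliasing filter:
    the spectrum folds around multiples of the sample rate."""
    f = f_signal % f_sample
    return min(f, f_sample - f)

# A 7 kHz component sampled at 10 kHz masquerades as 3 kHz -- exactly the
# distortion the anti-aliasing filter 300 exists to prevent.
seven_khz_alias = alias_frequency(7_000, 10_000)
```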
The signal 315 produced by the anti-aliasing filter 300 is acquired by the sample and hold device 310 before being converted into the digital domain by the analogue to digital converter 320.
A digital signal processing unit (DSP) 330 contains at least two memory banks 340 (0 and 1). The digital data representing the analogue signal are stored in discrete time interval components 325 within these memory banks 340. In other words, the digital signal is interleaved and each discrete time interval component 325 is processed while the remaining components are stored in memory within the DSP 330. A clock speed for the DSP 330 is chosen to be significantly faster than the sample rate of the digital signal. Therefore, several clock cycles may be used to process each discrete time interval component 325 of the digital signal 40. In this way the analogue input signal 30 may be processed in real time, with very little delay between the analogue input signal 30 entering the encoder 10 and the production of the output data stream 130.
The signal processor 35 may be a single discrete device. Alternatively, a single device may provide both the signal processor 35 and signal transform component 60.
The database 50 as shown in Figure 1 comprises all of the modelisation functions and transformations performed on the digital signal 40. The results of these transformations are quantised by the quantisation component 80 and are then extracted by the extraction of significant parameters component 90 forming a set of extracted parameters 95.
These functions are contained within the database 50 and form a core kernel of functions 140. There is a second category of functions: the variable and adaptive functions 108 used by the external transform for exception handling component 100. These too may be stored within database 50 or in a separate database or file (not shown). The variable and adaptive functions 108 comprise the adaptive kernel of functions. Therefore, the overall kernel of functions 420 comprises both the core kernel and the adaptive kernel of functions.
The database 50 contains both the modelisation functions and their associated transformations used to generate the extracted parameters 95.
Figure 4 shows a schematic diagram describing the choice of properties of the kernel of functions. Each of these functions is described schematically by equation 420.
The composition of the kernel of functions 420 is determined from the properties of the audio visual material 400 producing the analogue input signal 30 that is to be encoded. When determining the actual composition of these functions 420, it is also necessary to consider the underlying device that will generate this audio visual material 400.
The core kernel of functions may include many well-known functions and their associated transformations. These may include wavelet transformations such as discrete wavelet transforms (DWT), fractal decomposition functions, discrete cosine transforms (DCT) and other filter-based functions. These functions are well-known in the field of data compression, but here they are used to extract the parameters 95 that will be used to recompose the signal during the decoder process, and so do not require any further discussion here.
The variable and adaptive functions 108 are additional modelisation functions with associated transformation functions that may be dynamically generated or retrieved in response to the input signal 40. These variable and adaptive functions 108 may be stored within a separate database (shown in dashed lines) or only generated when required.
Alternatively, these variable and adaptive functions 108 may be stored within database 50. The signal processor 35 may be a neural network processor, which may determine whether or not to store particular variable and adaptive functions 108 based upon the properties of the analogue input signal 30 or the resources of the intended decoding processor or a combination of both.
The variable and adaptive functions 108 are generated and retrieved by the external transform for exception handling component 100 shown in Figure 1. These variable and adaptive functions 108 are not continuously performed on the digital signal 40, but are only created when the core kernel functions produce an output which is outside predetermined limits. This may be determined by reconstructing the digital signal 270 from the core kernel functions (within the encoder 10), comparing the reconstructed digital signal 270 with the original digital signal 40 provided through line 109, and calculating the difference between these two signals. If the error is outside predetermined limits, one or more variable and adaptive functions 108 are created. This quality checking procedure is performed within the external transform for exception handling component 100. Details concerning the variable and adaptive functions 108 are added to the header of the data output stream 130 so that they may be interpreted by the decoder process 200.
The quality checking procedure is repeated using the parameters generated by the variable and adaptive functions 108, in the same way as the quality checking procedure is performed for parameters generated from the core kernel functions 420. If the error is still outside the predetermined limits, the portion of the digital signal 325 that failed to be parameterised within acceptable limits (an artefact) is added to the output data 130 as raw data, or can be compressed by the usual means known from the prior art (entropy encoding, etc.).

Equation 1 represents the set of core kernel functions as contained within databases 50 and 260:

{f_k}, k ∈ K    Equation 1

Equation 2 represents a variable and adaptive function as added to the data output stream of the encoder 10:

{f_v}, v ∈ V    Equation 2

where V is the set of variable and adaptive functions.
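The encode, quality-check and fallback flow described above can be sketched as follows; the function names and the maximum-absolute-error metric are illustrative assumptions, not the patent's actual implementation:

```python
def encode_segment(segment, core_encode, core_decode, adaptive_fns, limit):
    """Try the core kernel first; if the reconstruction error exceeds the
    predetermined limit, try the variable and adaptive functions; if the
    segment still cannot be parameterised, emit it as raw data (an artefact)."""
    params = core_encode(segment)
    error = max(abs(a - b) for a, b in zip(segment, core_decode(params)))
    if error <= limit:
        return ("core", params)
    for encode, decode in adaptive_fns:
        params = encode(segment)
        error = max(abs(a - b) for a, b in zip(segment, decode(params)))
        if error <= limit:
            return ("adaptive", params)
    return ("raw", segment)  # ship the samples themselves
```

The tag in the returned pair plays the role of the header detail that tells the decoder which kind of function (or raw data) to expect.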
Equation 3 describes the generation of the core kernel functions 140 contained within the decoder function database 260. The decoder functions must generate a reconstructed digital signal 270 from the input parameters. Where system resources are limited (such as in a mobile phone or other similar device) the decoder functions may work with a lower precision than the encoder functions and so generate a lower quality reconstructed signal compared with the original digital signal 40.
F: {f_k} → {f'_k}    Equation 3

where F represents the quality of the required reconstructed signal. For minimal degradation in signal quality, F = id.
Figure 5 shows a schematic diagram describing the deconstruction of a signal S(t). The mapper 500 carries out the signal transformation by using each core kernel function 420 from the database as well as any required variable and adaptive functions 108. The data output is represented by Equation 4:

(λ, f_v, c)    Equation 4

where λ is a vector describing the extracted parameters 95 extracted from the core kernel of functions 420 as well as any variable and adaptive functions 108 used;

f_v represents the variable and adaptive functions 108 that may have been used; and

c represents additional parameters included within the data output 130. These parameters may include an indication of the presence of a variable and adaptive function 108 and any flags or parameters describing the original audio/video signal, such as the original format (e.g. PAL, NTSC, etc.).

Figure 6 shows a schematic diagram describing the reconstruction of the signal from the data output of the mapper 500. The data output (now data input) is reconstructed within the builder module 600, which provides a reconstructed data output as described by Equation 5:

S'(t) = Σ f'_k(λ, c), k ∈ K ∪ V    Equation 5

with f' = F(f) and S'(t) corresponding to the reconstructed digital signal 270.
The builder module 600 uses the core kernel functions 420 as well as the variable and adaptive functions 108 (if present) to reconstruct the digital signal 270.
Unlike data compression techniques such as those used by the MPEG standard, the quality of the reconstructed digital signal 270 does not rely on the volume of data contained within the bit stream 130 (output or input parameters) but is instead determined by the functions 140 contained within the encoding and decoding databases and any variable and adaptive functions 108 used. In general, these functions are not transmitted within the bit stream (except for occasional variable and adaptive functions 108 that are transmitted under exceptional circumstances). Synchronisation between the variable and adaptive functions stored in the encoder 10 and decoder 200 is achieved by incorporating instructions to update, amend or delete the variable and adaptive functions within the header of the data stream 130 forming the data output from the mapper 500. Not every variable and adaptive function 108 is retained in the encoder and decoder. The encoder system may be limited in memory or processing power, and so it may be more efficient to generate these new functions on an ad hoc basis as and when required. The encoder will therefore assign a priority coefficient to each new function 108, with the highest priority functions kept in preference over the lower priority functions. These priority coefficients are also transmitted in the header of the data stream 130 and used by the decoder system 200 in determining whether or not to retain the new functions 108 within the decoder 200.
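The priority-coefficient retention policy might be sketched as a bounded table that evicts the lowest-priority function when capacity is exceeded; the capacity value and dictionary representation are illustrative assumptions:

```python
def retain_functions(stored, new_fn, priority, capacity=4):
    """Keep the highest-priority adaptive functions when memory is limited:
    each new function gets a priority coefficient, and the lowest-priority
    entry is evicted once capacity is exceeded."""
    stored[new_fn] = priority
    if len(stored) > capacity:
        victim = min(stored, key=stored.get)  # lowest priority coefficient
        del stored[victim]
    return stored
```

Running the same policy in the decoder against the coefficients carried in the stream header keeps the two function stores in step.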
The generation and updating of the adaptive and variable functions 108 may be performed by a neural network or other suitable processor. The neural network will use the digital signal 40 to determine which function, if any, to retrieve or generate, and will also use a set of criteria describing the required output quality (e.g. HDTV or mobile phone displays). The input analogue signal 30 may be any signal describing data but, in particular, this technique is specifically suitable for video and/or audio signals. The variable and adaptive functions 108 may be generated by the neural network or suitable processor from a set of function libraries separate from the encoding database 50.
A further efficiency saving in processing power and data volume may be made by buffering a certain volume of the most recent data 130 in memory. This allows repetitive signal portions 325 to be identified and referenced during the encoding process. Instead of generating parameters 95 each time for each subsequent repetitive signal portion 325 (i.e. repeating the full encoding process for each repetitive signal portion 325, as described above), the encoder will add to the output data 130 a reference to the set of parameters 95 previously buffered. The decoding process will also buffer the input data 130 in memory and will regenerate the repeated signal portion 325 each time a reference to a set of parameters 95 is encountered in the data 130. This results in improved throughput during the decoding process as modelisation functions will not be required for the repetitive signal portions 325. The buffered output signal 270 will be retrieved instead.
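A sketch of this repetition buffer: the first occurrence of a signal portion is encoded in full, and later identical portions emit only a reference to the previously buffered parameters. Exact matching on samples is an assumption here; the patent does not specify how repetition is detected:

```python
def encode_with_buffer(portions, encode, buffer=None):
    """First occurrence of a portion is fully encoded; later identical
    portions emit only a back-reference into the output data."""
    buffer = {} if buffer is None else buffer
    out = []
    for portion in portions:
        key = tuple(portion)
        if key in buffer:
            out.append(("ref", buffer[key]))      # reference to buffered params
        else:
            buffer[key] = len(out)                # remember where it was encoded
            out.append(("params", encode(portion)))
    return out
```

The decoder maintains the mirror-image buffer and substitutes the buffered reconstruction whenever it encounters a `ref` entry, skipping the modelisation functions entirely for that portion.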
The description above describes the processing of an analogue signal 30 generated by a sensor 20. The encoding 10 and decoding 200 processes are also suitable for signals generated in other formats, such as television signals (NTSC, PAL, SECAM, etc.), for instance. Such television signals contain components that are neither video nor audio. Therefore, additional processing is required during both the encoding 10 and decoding 200 processes to handle these non-video/audio components.
As an example, the pre-processing of an NTSC signal will now be described. This pre-processing will be performed instead of obtaining an analogue input signal 30 from the sensor 20, as described above in relation to Figure 1.
An NTSC signal contains a set of signals before the active video signal is encountered. This set of signals includes a blank level, a vertical sync pulse, a horizontal sync pulse and colour burst information, and may also contain teletext information. These signals and their use are well known to the person skilled in the art and do not require any further description. Other non-video/audio signals may be present, and these signals may be processed in a similar way. As these non-video/audio signals have a well-defined structure and content, it is possible to reconstruct them from a minimum amount of information, as their structure and location within the signal are set by a known standard format. Therefore, these signals are removed from the analogue input signal 30 within a pre-processing stage (prior to the encoding process 10). The properties of these removed signals are added to the data output 130 as a simple flag or reference parameter. In the case of teletext, the data are simply added as raw or compressed text.
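This pre-processing stage might be sketched as follows, with the well-defined non-video components replaced by flags and any teletext carried through separately; the component names and dictionary representation are illustrative assumptions:

```python
def preprocess_frame(components):
    """Drop well-defined non-video components (sync pulses, colour burst,
    blanking) and replace them with flags, since the decoder can regenerate
    them from the known standard layout. Teletext is kept as raw text."""
    NON_VIDEO = {"vsync", "hsync", "colour_burst", "blank_level"}
    flags = sorted(name for name in components if name in NON_VIDEO)
    video = {k: v for k, v in components.items() if k not in NON_VIDEO}
    teletext = video.pop("teletext", None)
    return video, flags, teletext
```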
During the decoding process 200 a further step is required to regenerate the non-video/audio signals (if required). This additional step regenerates the analogue output signal 280 in the original format using the flag or reference parameters contained within the input data 130. The input data 130 may also contain an additional flag or data element indicating the signal type (NTSC, PAL, SECAM, etc.) that is to be regenerated. Finally, any teletext data are added to the output signal, if required.
Figure 7 illustrates a second embodiment of the present invention that is similar to the first embodiment except for the differences described below. Identical features will be referenced with the same reference numerals in the following description.
Figure 7 shows a schematic diagram of an analogue input signal 30 such as one of the 1-D signals previously described. The analogue input signal 30 is separated into a set 720 of individual analogue signals 740 by a filter bank 710. Each individual signal 740 corresponds to a different frequency range of the original analogue input signal 30.
Each individual signal 740 is processed, as described above with reference to Figure 1, resulting in a set 730 of individual data outputs 130 containing header and output data for each individual signal 740, along with an additional item of data corresponding to the frequency band of the individual signal 740. All of the output data 130 are combined in a single file (not shown). On reconstruction, the data 130 corresponding to each separate frequency band are reconstructed separately by the reconstruction process described above with reference to Figure 2. The reconstructed signals are then combined to form a single analogue output signal 280, as shown in Figure 2.
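A crude two-band stand-in for the filter bank: a moving-average low band plus its residual high band, chosen so the bands sum back to the original signal on recombination. A real filter bank would use proper band-pass filters; this is only an illustration of the split-encode-recombine structure:

```python
def two_band_split(samples):
    """Split a signal into a smoothed low band and the residual high band.
    By construction, low[i] + high[i] == samples[i], so summing the decoded
    bands reconstructs the signal."""
    low = [(samples[max(i - 1, 0)] + samples[i]) / 2 for i in range(len(samples))]
    high = [s - l for s, l in zip(samples, low)]
    return low, high
```

Each band can then be pushed through the Figure 1 pipeline independently, which is what enables the parallel processing described below.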
The effect of the filter bank 710 is to provide a level of analogue processing before any parameters are extracted.
This also allows a certain amount of parallel processing to take place and so improves throughput. This parallel processing may be in addition to the parallel processing of multiple 1-D signals. The extraction process may be performed in either the analogue or the digital domain.
As will be appreciated by the skilled person, details of the above embodiments may be varied without departing from the scope of the present invention, as defined by the appended claims.
For example, the database 50 may contain additional transformation or modelisation functions that are not performed on each segment of the digital signal 40. The adaptive and variable functions 108 may also be stored within database 50. Other types of transformation and modelisation functions may be performed during the encoding 10 and decoding 200 processes. The header of the data stream 130 may contain additional information regarding the analogue input signal 30, such as the date and time of the acquisition of the signal, the type of equipment used and the level of quantisation used. The discrete segments of the digital signal 325 may be stored in any number of memory banks 340, and these memory banks 340 may be located outside or inside the DSP 330. Any type of data may be encoded and decoded by this technique, such as data stored in a database or within a computer system, for instance. The entropy coding 110 and decoding 220 processes may use any well-known lossless compression techniques, such as Huffman coding, simple run-length encoding and other arithmetic techniques.
The synchronisation signal may be a clock signal generated within the analogue or digital processor, or may be a synchronisation signal contained within the signal itself, such as an NTSC, PAL or SECAM synchronisation signal, for instance. In this case it may not be necessary to add a synchronisation signal, as one may be present already.

Claims (31)

CLAIMS:
  1. A method for encoding a signal comprising the steps of:
    performing at least one transformation on the signal using at least one transformation function (140) thereby generating at least one output parameter from the transformation; associating a synchronisation signal (75) with the at least one output parameter; and providing a data output (130) comprising the synchronisation signal (75) and the at least one output parameter.
  2. The method of claim 1 where the signal is an analogue signal and the method further comprises the step of: converting the analogue signal into a digital signal before performing the at least one transformation (140) on the digital signal.
  3. The method of claim 1 or claim 2 further comprising the step of: dividing the signal into discrete time segments (325) before performing the at least one transformation (140), and wherein the at least one transformation (140) is performed on each discrete time segment (325) of the signal.
  4. The method of claim 1 wherein one or more additional transformations (108) are generated and performed on the signal in response to at least one property of the signal.
  5. The method of claim 4 further comprising the step of: adding the details of the additional one or more transformations (108) to the data output (130).
  6. The method of any previous claim wherein at least a portion of the data output (130) is compressed.
  7. The method of any previous claim further comprising the step of: separating the signal into a set of separate signals each corresponding to a different frequency band before performing the at least one transformation on each of the separate signals.
  8. The method of any previous claim further comprising the steps of: digitising a portion of the signal; and adding the digitised portion to the data output (130).
  9. The method of any previous claim further comprising the step of: storing a portion of the data output (130) as referenced data in a memory buffer so that repetitive signal portions can be identified and duplicated in the data output (130).
  10. A method for decoding an encoded signal comprising the steps of: receiving a data input containing a synchronisation signal (75) and at least one output parameter (95) derived from at least one transformation (140) performed on the signal during encoding; maintaining synchronisation of the decoding method using the synchronisation signal (75); and generating a signal by performing at least one function corresponding to the at least one transformation (140) on the at least one output parameter.
  11. The method of claim 10 further comprising the step of: performing a function (108) contained within the data input (130) on the at least one output parameter to generate the signal.
  12. The method of claim 10 further comprising the step of: at least partially decompressing the data contained within the data stream (130).
  13. The method of claim 10 further comprising the step of: converting the generated signal into a digital signal (270).
  14. The method of any previous claim wherein any of the functions are stored as data describing the functions in a database (50, 260) and wherein the method further comprises the step of: retrieving data describing the functions from the database (50, 260).
  15. The method of claim 14 further comprising the step of: maintaining synchronisation between the functions contained within an encoder database (50) and the functions contained within a decoder database (260).
  16. The method of any of claims 10, 11, 12 or 13 wherein the encoded signal is decoded with a precision that may be varied.
  17. An apparatus for encoding a signal comprising: a database (50) containing details of at least one transformation (140); means for maintaining synchronisation (70); and a processor (330) adapted to perform at least one transformation (140) on the signal using the at least one transformation (140) retrieved from the database (50), thereby generating an output parameter (95), and to form a data output (130) containing the synchronisation signal (75) and the output parameter.
  18. The apparatus of claim 17 further comprising: means to compress (110) at least some of the data output (130).
  19. The apparatus of claim 17 or claim 18 wherein the signal is an analogue signal (30), further comprising: an analogue to digital converter (320) for converting the analogue signal into a digital signal.
  20. The apparatus of claim 19 further comprising memory (340) to store discrete segments of the digital signal (325).
  21. An apparatus for decoding a signal comprising: a database (260) containing details of at least one function; means for maintaining synchronisation (240); and a processor (330) arranged to receive a data input (130) containing a synchronisation signal (75) and an output parameter produced from a transformation (140) on the signal, retrieve the details of at least one function corresponding to the transformation (140) on the signal from the database (260) and generate a signal from the output parameter using the retrieved details of the at least one function.
  22. The apparatus of claim 19 further comprising: means for decompressing (220) at least a portion of the data contained in the data input (130).
  23. A computer program comprising program instructions that, when executed on a computer, cause the computer to perform the method of any of claims 1 to 16.
  24. A computer-readable medium carrying a computer program according to claim 23.
  25. A computer programmed to perform the method of any of claims 1 to 16.
  26. A method for encoding a signal substantially as herein described with respect to any of Figures 1, 3, 4 and 5.
  27. A method for decoding a signal substantially as herein described with respect to any of Figures 2, 4 and 6.
  28. Apparatus for encoding a signal substantially as herein described with respect to any of Figures 1, 3, 4 and 5.
  29. Apparatus for decoding a signal substantially as herein described with respect to any of Figures 2, 4 and 6.
  30. A computer program substantially as described herein, with reference to any of Figures 1 to 6.
  31. A computer substantially as described herein, with reference to any of Figures 1 to 6.
GB0507092A 2005-04-07 2005-04-07 Encoding video data using a transformation function Withdrawn GB2425011A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB0507092A GB2425011A (en) 2005-04-07 2005-04-07 Encoding video data using a transformation function
PCT/GB2006/001296 WO2006106356A1 (en) 2005-04-07 2006-04-07 Encoding and decoding a signal
GB0607067A GB2425013A (en) 2005-04-07 2006-04-07 Encoding video data using operating parameters of the image capture device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0507092A GB2425011A (en) 2005-04-07 2005-04-07 Encoding video data using a transformation function

Publications (2)

Publication Number Publication Date
GB0507092D0 GB0507092D0 (en) 2005-05-11
GB2425011A true GB2425011A (en) 2006-10-11

Family

ID=34586876

Family Applications (2)

Application Number Title Priority Date Filing Date
GB0507092A Withdrawn GB2425011A (en) 2005-04-07 2005-04-07 Encoding video data using a transformation function
GB0607067A Withdrawn GB2425013A (en) 2005-04-07 2006-04-07 Encoding video data using operating parameters of the image capture device

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB0607067A Withdrawn GB2425013A (en) 2005-04-07 2006-04-07 Encoding video data using operating parameters of the image capture device

Country Status (2)

Country Link
GB (2) GB2425011A (en)
WO (1) WO2006106356A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114721463A (en) * 2022-03-08 2022-07-08 盖玉梅 Method for regenerating signal by model

Citations (5)

Publication number Priority date Publication date Assignee Title
EP0482888A2 (en) * 1990-10-25 1992-04-29 Matsushita Electric Industrial Co., Ltd. Video signal recording/reproducing apparatus
EP0517256A2 (en) * 1991-06-07 1992-12-09 Sony Corporation High efficiency data compressed image encoding
EP0539155A2 (en) * 1991-10-21 1993-04-28 Canon Kabushiki Kaisha Image transmitting method
EP0554871A2 (en) * 1992-02-04 1993-08-11 Sony Corporation Method and apparatus for encoding a digital image signal
EP0790741A2 (en) * 1995-10-27 1997-08-20 Texas Instruments Incorporated Video compression method and system

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US5926209A (en) * 1995-07-14 1999-07-20 Sensormatic Electronics Corporation Video camera apparatus with compression system responsive to video camera adjustment
DE19615657A1 (en) * 1996-04-19 1997-08-21 Siemens Ag Image data compression method for video images
GB9919381D0 (en) * 1999-08-18 1999-10-20 Orad Hi Tech Systems Limited Narrow bandwidth broadcasting system
US6771823B2 (en) * 2000-03-21 2004-08-03 Nippon Hoso Kyokai Coding and decoding of moving pictures based on sprite coding
MXPA03000418A (en) * 2000-07-13 2003-07-14 Belo Company System and method for associating historical information with sensory data and distribution thereof.
EP1319289A1 (en) * 2000-09-11 2003-06-18 Fox Digital Apparatus and method for using adaptive algorithms to exploit sparsity in target weight vectors in an adaptive channel equalizer
KR100392384B1 (en) * 2001-01-13 2003-07-22 한국전자통신연구원 Apparatus and Method for delivery of MPEG-4 data synchronized to MPEG-2 data
JPWO2003079692A1 (en) * 2002-03-19 2005-07-21 富士通株式会社 Hierarchical encoding apparatus and decoding apparatus

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
EP0482888A2 (en) * 1990-10-25 1992-04-29 Matsushita Electric Industrial Co., Ltd. Video signal recording/reproducing apparatus
EP0517256A2 (en) * 1991-06-07 1992-12-09 Sony Corporation High efficiency data compressed image encoding
EP0539155A2 (en) * 1991-10-21 1993-04-28 Canon Kabushiki Kaisha Image transmitting method
EP0554871A2 (en) * 1992-02-04 1993-08-11 Sony Corporation Method and apparatus for encoding a digital image signal
EP0790741A2 (en) * 1995-10-27 1997-08-20 Texas Instruments Incorporated Video compression method and system

Also Published As

Publication number Publication date
GB0607067D0 (en) 2006-05-17
WO2006106356A1 (en) 2006-10-12
GB0507092D0 (en) 2005-05-11
GB2425013A (en) 2006-10-11

Similar Documents

Publication Publication Date Title
KR100664928B1 (en) Video coding method and apparatus thereof
US7512180B2 (en) Hierarchical data compression system and method for coding video data
US9723318B2 (en) Compression and decompression of reference images in a video encoder
JP2004531924A (en) Signal compression apparatus and method
US20050157794A1 (en) Scalable video encoding method and apparatus supporting closed-loop optimization
JP2005176383A (en) Coding framework of color space
JP2009302638A (en) Information processor and method
WO2010048524A1 (en) Method and apparatus for transrating compressed digital video
WO2006006764A1 (en) Video decoding method using smoothing filter and video decoder therefor
US11924470B2 (en) Encoder and method of encoding a sequence of frames
JP3466080B2 (en) Digital data encoding / decoding method and apparatus
JPH10155153A (en) Coding method, coder, decoding method, decoder, digital camera, database management system, computer and storage medium
KR100643269B1 (en) Video/Image coding method enabling Region-of-Interest
US20060159168A1 (en) Method and apparatus for encoding pictures without loss of DC components
WO2004064405A1 (en) Encoding method, decoding method, encoding device, and decoding device
JP4975223B2 (en) Compressed image processing method
US6754433B2 (en) Image data recording and transmission
GB2425011A (en) Encoding video data using a transformation function
US6819800B2 (en) Moving image compression/decompression apparatus and method which use a wavelet transform technique
KR100675392B1 (en) Transmitting/ receiving device for transmitting/receiving a digital information signal and method thereof
JP2004048212A (en) Digital image coder, coding method and program
KR0171749B1 (en) A compatible encoder
JPH09182074A (en) Image signal encoding method and device therefor and image signal decoding method and device therefor
WO2007116207A1 (en) Encoding and decoding a signal
JP3024785B2 (en) Orthogonal transformation method

Legal Events

Date Code Title Description
COOA Change in applicant's name or ownership of the application

Owner name: BEAMUPS LIMITED

Free format text: FORMER APPLICANT(S): MALKIN, ELY J

WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)