WO2006106356A1 - Codage et decodage de signaux (Encoding and decoding of signals) - Google Patents

Codage et decodage de signaux (Encoding and decoding of signals)

Info

Publication number
WO2006106356A1
WO2006106356A1 (PCT/GB2006/001296)
Authority
WO
WIPO (PCT)
Prior art keywords
signal
data
transformation
output
functions
Prior art date
Application number
PCT/GB2006/001296
Other languages
English (en)
Inventor
Ely Jay Malkin
Philippe Selve
Guillaume Pierre Planquette
Original Assignee
Beamups Limited
Frost, Alex, John
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beamups Limited, Frost, Alex, John filed Critical Beamups Limited
Publication of WO2006106356A1 publication Critical patent/WO2006106356A1/fr
Priority to PCT/GB2007/001247 priority Critical patent/WO2007116207A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/99 Coding techniques not provided for in groups H04N19/10-H04N19/85, involving fractal coding
    • H04N19/10 Using adaptive coding
    • H04N19/115 Selection of the code volume for a coding unit prior to coding
    • H04N19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform, or between H.263 and H.264
    • H04N19/127 Prioritisation of hardware or computational resources
    • H04N19/154 Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N19/17 Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/51 Motion estimation or motion compensation
    • H04N1/32128 Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title, attached to the image data, e.g. file header, transmitted message header, or the same computer file as the image

Definitions

  • the present invention relates to a method and apparatus for encoding and decoding a signal.
  • the storage and transmission of data requires sufficient permanent memory to store these data and sufficient bandwidth to transmit the data.
  • these finite resources become ever more stretched.
  • the situation is particularly acute when the data are audio or visual information.
  • Ever increasing generation and use of such data will further stretch the bandwidth requirements of data transmission services such as those that use fixed line telephony, mobile telephony, satellite, or other radio communication techniques.
  • the situation will only get worse with the introduction of high definition television (HDTV) services, and the increasing demands of users of mobile devices to view live motion video.
  • Where an encoded signal is to be transmitted, it is convenient to transmit the signal in a standard format that may be interpreted and decoded by a standard technique on a number of different platforms, such as a mobile phone or a desktop computer, for instance.
  • This allows a single encoded signal to be transmitted or stored as a single, unique file that may be read by many types of output device.
  • Each of these output devices may reconstruct the signal to a different resolution or quality.
  • the resolution of the reconstructed signal is, therefore, only dependent on the resources of the reconstructing device.
  • the process is effectively independent of any particular platform or environment.
  • Taking an analogue camcorder as an example of a device for generating such data, a moving image is recorded electrically by focussing the images on a CCD sensor. The light intensity falling on each pixel is converted into an electrical charge. The electrical charge from each pixel is read out from the CCD, forming an analogue signal which represents the image recorded on the CCD. This signal is processed and stored on a magnetic tape or other suitable recording medium. Digital camcorders work in a similar way except that the output signal is either converted into a digital signal or recorded as a digital signal from the CCD sensor itself. The colour component is achieved by either splitting the image into more than one filtered image and directing each of these components to a separate CCD, or applying a colour filter mask directly to a single CCD. It should be noted that the data are recorded "raw" onto the magnetic tape or other suitable recording medium and this represents a vast amount of information for each second of video.
  • Each colour component can be considered as a 1-Dimensional (1-D) signal.
  • the combined colour audio/video signal, therefore, comprises several 1-D signals, which are typically three individual colour components and a set of audio channels (e.g. two channels for stereo sound).
  • the output from a digital camcorder represents approximately 3.6 MB s⁻¹. Therefore, when storing or transmitting such data (on a DVD or other suitable medium, for instance) it is advantageous to compress the data using a ubiquitous compression technique that may be read by many different devices. Compression of data is often necessary where the encoded data is to be transmitted, via a wireless connection, e.g. to a personal computer.
  • the most popular video compression standard used is MPEG.
  • MPEG-1 is suitable for storing about one hour of video on a CD-ROM, but this is at the expense of video quality, which results from a reduced sample rate as well as other compression artefacts.
  • MPEG-2 supports interlace and High Definition TV (HDTV) whereas MPEG-1 does not.
  • MPEG-2 has become very important because it has been chosen as the compression scheme for both DVB (Digital Video Broadcasting) and DVD (Digital Video Disk).
  • MPEG-4 achieves further compression than MPEG-2 and is used in such applications as storing video from a digital camera onto solid state memory devices.
  • a method for encoding a signal comprising the steps of performing at least one transformation on the signal using at least one transformation function thereby generating at least one output parameter from the transformation function; associating a synchronisation signal with the at least one output parameter; and providing a data output comprising the synchronisation signal and the at least one output parameter.
  • In this way a signal, such as a 1-D signal, may be encoded into a single data output file.
  • the same file may be used by many different devices to decode and regenerate the original signal (that may be a 1-D signal or a combination of several 1-D signals) to whatever level of precision or quality is required.
  • the output quality of the regenerated signal is only dependent on the processing or memory resources of the decoding device. Furthermore, the signal may be transmitted or stored as this unique file and in a form that requires much lower bandwidth or storage volume than would be required to transmit or store the original signal.
  • the signal may be modelled by one or more modelisation functions. These modelisation functions correspond to the transformation used to generate the output parameters during the encoding process. The same modelisation function may be used within the decoding process to regenerate the signal.
  • the output parameters are associated with a reference to the modelisation function used to generate the output parameter during the encoding process and to regenerate the signal during the decoding process. These references may be added to the data output .
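The relationship between transformation functions, output parameters, function references and regeneration can be illustrated with a toy sketch. Everything here is hypothetical (the three-function bank, the single-amplitude least-squares fit, and all names are illustrative assumptions, not the patent's actual functions): the encoder fits each bank function to a signal segment, keeps the best-fitting function's reference and its amplitude parameter, and the decoder regenerates the segment by evaluating the referenced function.

```python
import math

# Hypothetical "modelisation function" bank shared by encoder and decoder.
# The keys are the references the encoder would write into the data output.
BANK = {
    0: lambda t: 1.0,                        # DC component
    1: lambda t: math.sin(2 * math.pi * t),  # one sine period per segment
    2: lambda t: math.cos(2 * math.pi * t),  # one cosine period per segment
}

def encode_segment(samples):
    """Fit each bank function to the segment by least squares (a single
    amplitude parameter) and keep the reference with the lowest residual."""
    n = len(samples)
    best = None
    for ref, f in BANK.items():
        basis = [f(i / n) for i in range(n)]
        denom = sum(b * b for b in basis)
        a = sum(s * b for s, b in zip(samples, basis)) / denom
        resid = sum((s - a * b) ** 2 for s, b in zip(samples, basis))
        if best is None or resid < best[2]:
            best = (ref, a, resid)
    return best[0], best[1]   # (function reference, output parameter)

def decode_segment(ref, a, n):
    """Decoder side: regenerate the segment by evaluating the referenced
    modelisation function with the transmitted parameter."""
    f = BANK[ref]
    return [a * f(i / n) for i in range(n)]
```

A segment consisting of a pure sine wave is recovered exactly from just two values, the reference and the amplitude, which is the bandwidth saving the description above relies on.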
  • the signal is an analogue signal and the method further comprises the step of: converting the analogue signal into a digital signal before performing the at least one transformation on the digital signal.
  • At least some or all of the processing may be performed using analogue techniques such as band pass filters, for instance.
  • Digital processing requires analogue to digital converters that require anti-aliasing filters that remove high frequency information from the original signal. Such anti-alias filtering is not required with analogue processing.
  • analogue processing is more efficient in terms of processing speed than digital processing.
  • a portion of the processing will be performed in the analogue domain and a portion will be performed in the digital domain. More preferably, digital processing techniques will be used to extract the modelisation function parameters.
  • the method further comprises the step of dividing the signal into discrete time segments before performing the at least one transformation and wherein the at least one transformation is performed on each discrete time segment of the signal.
  • This allows the continuous signal to be processed in discrete sections using one or more clock cycles of a digital processor.
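As an illustrative sketch of this segmentation step (the function name and the zero-padding policy for the final partial segment are assumptions, not specified by the patent):

```python
def segment(signal, size):
    """Split a continuous sample stream into fixed-size time segments,
    one per processing cycle; the tail segment is zero-padded so every
    segment has equal length."""
    out = []
    for start in range(0, len(signal), size):
        seg = signal[start:start + size]
        seg = seg + [0.0] * (size - len(seg))   # pad the final segment
        out.append(seg)
    return out
```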
  • one or more additional transformation functions may be generated and performed on the signal in response to at least one property of the signal.
  • additional transformation functions and associated modelisation functions may be retrieved or generated and used to encode the signal to improve the resulting decoded signal.
  • These additional modelisation functions and associated transformation functions may be stored within the database and retrieved or generated as and when required.
  • the method may further comprise the step of: adding the details of the additional transformation functions to the data output.
  • the output parameters are associated with the transformation function used to generate them, within the data output.
  • the details of the corresponding modelisation function may also be added to the data output.
  • the decoder database may not contain the modelisation function used.
  • the modelisation function may be retained, along with a reference to it, within the database of the decoder device for subsequent reference and use.
  • An existing modelisation function contained within the database of the decoder may be updated in response to the presence of instructions contained within the output data from the encoder device.
  • the output data may also contain instructions to delete particular modelisation functions from the database. These instructions may be in the form of flags or other data. In this way, the encoder and decoder databases may be kept up to date with each other. They may also be maintained so as to contain the most suitable modelisation functions for the properties of the particular signal being encoded and decoded.
  • At least a portion of the data content of the data stream is compressed. This is in contrast to the prior art method of compressing the data content of the signal itself. This further reduces the storage volume and bandwidth required by the signal.
  • the compression technique used may be any suitable lossless compression technique.
  • the method further comprises the step of: separating the signal into a set of separate signals each corresponding to a different frequency band before performing the at least one transformation on each of the separate signals.
  • This separation of signals may be performed with a filter bank where each filter isolates a particular frequency band from the original signal. This allows each signal to be processed in parallel by modelisation functions suitable for that particular frequency band.
  • the signal may be separated into separate signals based on other properties of the signal, such as amplitude or phase, using suitable filters.
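A minimal sketch of such a frequency separation, using a crude two-band split in place of the patent's unspecified filter bank (all names are illustrative): a moving-average low-pass gives the low band and the residual gives the high band, so the two bands sum back to the original signal and can be processed in parallel.

```python
def two_band_split(signal, width=3):
    """Crude two-band filter bank: a moving-average low-pass produces the
    low band; subtracting it from the input produces the high band, so
    low[i] + high[i] == signal[i] for every sample."""
    n = len(signal)
    low = []
    for i in range(n):
        lo = max(0, i - width // 2)
        hi = min(n, i + width // 2 + 1)
        low.append(sum(signal[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```

A real implementation would use properly designed band-pass filters; the point here is only that each band can then be handed to modelisation functions suited to that frequency range.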
  • the method further comprises the steps of: digitising a portion of the signal; and adding the digitised portion to the data output.
  • This step allows artefacts to be encoded as either raw or compressed data. It allows portions of the signal that cannot be encoded using any of the available modelisation functions or additional functions to be encoded and added to the data output directly in digital form.
  • the method further comprises the step of: storing a portion of the data output as referenced data in a memory buffer so that repetitive signal portions can be identified and duplicated in the data output.
  • This further optimises the output data by referencing repetitive signals and encoding a reference to the repeated portion of the signal instead of re-encoding the complete set of parameters extracted for that particular signal portion. This allows further efficiency improvements as fewer parameters need to be transmitted or stored.
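The repetition optimisation could be sketched as follows. This is a simplified stand-in with hypothetical names: it uses exact-match back-references to the first occurrence of a parameter set, whereas the patent does not specify the matching rule.

```python
def reference_repeats(segments):
    """Encoder side: replace a repeated parameter set with a back-reference
    to its first occurrence, instead of re-encoding the full parameters."""
    seen = {}   # parameter tuple -> index of first occurrence
    out = []
    for i, params in enumerate(segments):
        key = tuple(params)
        if key in seen:
            out.append(("ref", seen[key]))   # repeat: emit only a reference
        else:
            seen[key] = i
            out.append(("raw", params))
    return out

def expand_repeats(stream):
    """Decoder side: resolve back-references to regenerate every segment."""
    out = []
    for kind, payload in stream:
        out.append(out[payload] if kind == "ref" else payload)
    return out
```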
  • the method may further comprise the step of applying a predetermined multiplication factor to a portion of the signal.
  • When used with images or video, this allows areas of the images or video perceived by a viewer to be more important to be encoded more precisely or more accurately. This allows system resources to be used more efficiently.
  • the multiplication factor applied is dependent on the frequency of the portion of the signal.
  • This relates to the spatial frequency of signals used to form images or video frames.
  • the method may further comprise the step of applying a quantisation filter to the output parameter.
  • the range that the output parameters may fall within is divided into discrete windows. It is then determined which window each output parameter falls within, and it is the window identifier that is added to the data stream rather than the exact value of the output parameter. Therefore, the precision of the encoding process may be selected by varying the size of the windows; smaller windows represent a higher precision. The precision may be varied dynamically and in response to properties present in the signal.
  • the quantisation filter is a logarithmic quantisation filter. This provides a greater dynamic range, whilst preserving encoding quality.
  • An alternative quantisation filter may be that of a linear division of the range.
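A hedged sketch of the two quantisation filters just described: a linear division of the range, and a logarithmic division. The mu-law style curve used for the logarithmic filter is an assumption for illustration; the patent does not give the exact logarithmic form.

```python
import math

def linear_windows(value, lo, hi, n):
    """Linear quantiser: split [lo, hi) into n equal windows and return
    the identifier of the window the value falls in."""
    idx = int((value - lo) / (hi - lo) * n)
    return min(max(idx, 0), n - 1)

def log_windows(value, hi, n):
    """Logarithmic quantiser (mu-law style, an assumed form): windows are
    narrow near zero and wide near hi, giving greater dynamic range."""
    mu = n - 1
    x = min(abs(value) / hi, 1.0)
    return int(mu * math.log(1 + mu * x) / math.log(1 + mu))

def window_centre(idx, lo, hi, n):
    """Decoder side for the linear filter: reconstruct a value as the
    centre of its window; smaller windows give higher precision."""
    step = (hi - lo) / n
    return lo + (idx + 0.5) * step
```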
  • the signal may be an image and the method may further comprise the step of generating an additional signal corresponding to an area outside of the image.
  • Border optimisation may be used to generate "missing" data required to more accurately encode portions of the signal corresponding to areas close to an image border.
  • the method may further comprise the step of sorting the at least one output parameter.
  • the data may be sorted by the type of transformation used to generate each parameter. This improves the efficiency of the process used to decode the signal.
  • a method for decoding an encoded signal comprising the steps of: receiving a data input containing a synchronisation signal and at least one output parameter derived from at least one transformation performed on the signal during encoding; maintaining synchronisation of the decoding method using the synchronisation signal; and generating a signal by performing at least one function corresponding to the at least one transformation on the at least one output parameter.
  • the function corresponding to the at least one transformation is a modelisation function used to regenerate the signal from the output parameter or parameters contained within the data input.
  • the method may further comprise the step of: performing a function contained within the data input on the at least one output parameter to generate the signal.
  • This allows additional modelisation functions to be sent with the data stream when the decoded signal does not match the encoded signal within predetermined limits (as discussed above).
  • a transformation function is used to generate extracted parameters. This transformation function may be associated with the parameters in the data input.
  • the corresponding modelisation function used to regenerate the signal within the decoder may be associated with parameters in the data input.
  • the method further comprises the step of at least partially decompressing the data contained within the data stream. This allows compressed data containing the output parameters to be decompressed and so an even smaller volume of data is required to describe the signal.
  • the method may further comprise the step of converting the generated signal into a digital signal.
  • This has the advantage of enabling the output signal to be used by digital equipment and processes.
  • the decode process may be conducted in the digital or analogue domains or a combination of both. Therefore, the output signal may be analogue or digital depending on its intended purpose.
  • the at least one transformation is stored as data describing the at least one transformation in a database and wherein the method further comprises the step of: retrieving data describing the at least one transformation from the database.
  • the use of a database within the encoding and decoding processes allow the transformation and modelisation functions to be stored in a convenient manner.
  • the method further comprises the step of maintaining synchronisation between the functions contained within an encoder database (50) and the functions contained within a decoder database (260).
  • This allows the set of transformation and modelisation functions to be optimised in response to the content of the signal.
  • This also allows additional functions to be added to the databases within the encoder and decoder when the existing functions cannot be used to regenerate the signal to within predefined limits.
  • This also allows commonly used additional functions to be stored within the databases and so avoids the need to repeatedly store or transmit these additional functions within the data output file.
  • the data output may contain additional information for amending, adding or deleting the functions within the decoder database. These may be in the form of flags or similar data types.
  • the encoded signal is decoded with a precision that may be varied.
  • This has the advantage that a single encoded file may be decoded using different decoding devices.
  • the output signal is, therefore, independent of the data file and is only dependent on the resources of the decoding device.
  • the method may further comprise the step of applying a predetermined multiplication factor to a portion of the data input corresponding to a portion of the encoded signal.
  • a multiplication factor may be used to increase the precision of the encoded data for areas of an image or video signal that are perceived by a viewer to be more important.
  • a corresponding multiplication factor is used to restore each portion of the signal to its correct value.
  • the multiplication factor may be applied to the output parameters.
  • the multiplication factor applied is dependent on the frequency of the encoded signal.
  • the multiplication factors used in the encoding and decoding steps match each other and are associated with a particular spatial frequency or range of frequencies of the signal when the signal corresponds to an image or video stream.
  • the data stream may include information relating to the particular multiplication factor used, or this may be inferred from the particular frequency of the reconstructed signal.
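A small sketch of matching multiplication factors on the encode and decode sides (the names and the coarse integer quantiser are illustrative only): scaling a perceptually important portion before quantisation leaves it with a smaller reconstruction error than an unscaled portion.

```python
def apply_perception_factor(params, factors):
    """Encoder side: scale each parameter by its region's factor so that
    important regions survive quantisation with more precision."""
    return [p * f for p, f in zip(params, factors)]

def remove_perception_factor(params, factors):
    """Decoder side: the matching inverse factor restores each portion
    of the signal to its correct value."""
    return [p / f for p, f in zip(params, factors)]
```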
  • an apparatus for encoding a signal comprising: a database containing details of at least one transformation; means for maintaining synchronisation; and a processor adapted to perform at least one transformation on the signal using the at least one transformation retrieved from the database, thereby generating an output parameter, and to form a data output containing the synchronisation signal and the output parameter.
  • the means for maintaining synchronisation may be a clock signal or a synchronisation signal contained within the signal itself, such as an NTSC, PAL or SECAM synchronisation signal, for instance.
  • the apparatus may further comprise: means to compress at least some of the data output.
  • This has the advantage of further reducing the storage and bandwidth requirements of the signal.
  • the signal is an analogue signal and the apparatus further comprises: an analogue to digital converter for converting the analogue signal into a digital signal.
  • the apparatus may contain analogue processing means such as filters, for instance. Therefore, some or all of the processing may take place in the analogue or digital domains.
  • the apparatus further comprises memory to store discrete segments of the digital signal. This enables the continuous signal to be split up and stored in discrete signal segments whilst they await processing.
  • an apparatus for decoding a signal comprising: a database containing details of at least one function; means for maintaining synchronisation; and a processor arranged to receive a data input containing a synchronisation signal and an output parameter produced from a transformation on the signal, retrieve the details of at least one function corresponding to the transformation on the signal from the database and generate a signal from the output parameter using the retrieved details of the at least one function.
  • This apparatus corresponds with the method for decoding the signal and, therefore, has at least the same advantages.
  • the apparatus may comprise means for decompressing at least a portion of the data contained in the data input. This allows the signal to be described by an even smaller volume of data.
  • the present invention also extends to a computer program comprising instructions that, when executed on a computer cause the computer to perform the method for encoding a signal, as described above.
  • the present invention also extends to a computer program comprising instructions that, when executed on a computer cause the computer to perform the method for decoding a signal, as described above.
  • the present invention also extends to a computer programmed to perform the method for encoding a signal, as described above.
  • the present invention also extends to a computer programmed to perform the method for decoding a signal, as described above.
  • Figure 1 shows a flow diagram of a method of encoding an analogue signal from a CCD sensor and forming a data stream, including a database containing decomposition and transformation functions, to produce an encoded data stream in accordance with a first embodiment of the present invention;
  • Figure 2 shows a flow diagram of a method for decoding the data stream produced by the method of Figure 1 and for reconstructing an analogue signal in accordance with the first embodiment of the present invention;
  • Figure 3 shows a flow diagram of the initial steps in processing the signal encoded by the method of Figure 1, including converting the analogue signal into a digital signal and storing the digital signal in a set of memory banks;
  • Figure 4 shows a schematic diagram of the generation of decomposition and transformation functions from properties of the analogue signal in accordance with the first embodiment of the present invention;
  • Figure 5 shows a schematic diagram of the generation of the data stream produced by the method of Figure 1 from the analogue signal;
  • Figure 6 shows a schematic diagram of the reconstruction of an output signal from the data stream produced by the method of Figure 1;
  • Figure 7 shows a schematic diagram of a method of encoding an analogue signal, including a filter bank, in accordance with a second embodiment of the present invention;
  • Figure 8 shows a flow diagram of a method of encoding an analogue signal from a CCD sensor and forming a data stream, including a perception filter step and a logarithmic adaptive quantisation step, in accordance with a third embodiment of the present invention.
  • Figure 9 shows a flow diagram of a method for decoding the data stream produced by the method of Figure 8 and for reconstructing an analogue signal in accordance with the third embodiment of the present invention.
  • Figure 1 shows a flow chart describing a method for encoding a signal according to a first aspect of the present invention.
  • the encoding process starts when a sensor 20 generates an analogue input signal 30.
  • This analogue input signal 30 may comprise one or more 1-D signals.
  • Each 1-D signal may correspond to a particular colour signal, a composite signal or an audio channel, for instance. Where multiple 1-D signals are present, each 1-D signal is processed separately.
  • the following description relates to the processing of individual 1-D signals, but it will be appreciated by the skilled person that it is possible to process multiple 1-D signals in parallel.
  • the sensor 20 may be a CCD or a microphone or other electronic device suitable for detecting a sound or image and generating an electronic signal from it.
  • the device generating the electronic signals may be a video camera, for example.
  • Such devices generate an analogue input signal 30 and this analogue input signal 30 is processed in a signal processor 35 to form a digital signal 40.
  • the analogue input signal 30 is filtered and digitised before being sampled in discrete time interval components. This process is shown by the group of components 45 enclosed by dashed lines and is described in further detail and with reference to Figure 3.
  • the digital signal 40 is passed to signal transform component 60.
  • The signal transform component 60 also retrieves details from database 50.
  • Database 50 stores the details of the transformation and modelisation functions 140 that are to be used to encode the signal 40.
  • the signal transform component 60 performs all of the transformations contained in the database on the digital signal 40.
  • the encoding process 10 is continuous (although the analogue input signal 30 can be processed in discrete digital signal 40 components as described in Figure 3) and synchronisation of the process is maintained by the bit allocation component 70, which generates a synchronisation bit 75 for each discrete signal component provided to signal processor 35.
  • the quantisation component 80 quantises the signal provided by the signal transform component 60 thereby reducing the precision of the output parameters of that signal and further reducing the volume of data generated by the encoding process 10.
  • the quantisation process 80 produces quantised parameters 85 by removing a predetermined number of least significant bits from the output parameters, i.e. introducing a predetermined rounding error to the output parameters.
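The linear quantisation step can be pictured as rounding each output parameter down to a multiple of a power of two. The following Python sketch illustrates this under the assumption that the parameters are integers; the function names are illustrative and not taken from the description:

```python
def quantise(parameters, dropped_bits):
    """Quantise integer output parameters by discarding a predetermined
    number of least significant bits, i.e. rounding each value down to a
    multiple of 2**dropped_bits (illustrative sketch of step 80)."""
    step = 1 << dropped_bits
    return [p - (p % step) for p in parameters]

def worst_case_error(dropped_bits):
    """The predetermined (bounded) rounding error the quantiser introduces."""
    return (1 << dropped_bits) - 1

print(quantise([1000, 1023, 517], 3))   # -> [1000, 1016, 512]
print(worst_case_error(3))              # -> 7
```

Dropping more bits reduces the data volume further, at the cost of a larger but still bounded rounding error.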
  • the extraction of significant parameters component 90 determines the set of parameters that are to be extracted from the quantised parameters 85 to describe the analogue input signal 30 and form a set of extracted parameters 95.
  • the precision of the signal is determined by choice of these extracted parameters 95.
  • the external transform for exception handling component 100 retrieves or generates variable and adaptive functions 108 when the extracted parameters 95 cannot adequately describe the analogue input signal 30. These variable and adaptive functions 108 are added to the extracted parameters 95 to form signal 105. This component 100 is described in more detail below.
  • the extracted parameters and any added variable and adaptive function data 108 are compressed using known compression techniques familiar to the person skilled in the art.
  • This compression step is performed in the entropy coding component 110 to form compressed data 115.
  • the transmitter 120 formats the compressed data 115 into an output stream 130 that may be read by a suitable decoding method 200, as described below.
  • the synchronisation bit 75 described above is added to the output data stream 130 by the transmitter 120.
  • the synchronisation bit 75 is added to the header of the data stream 130.
  • the bit allocation component 70 runs in parallel to all of the processes from the quantisation component 80 to the entropy encoding step 110.
  • Figure 2 shows a flow diagram describing a process of generating an analogue output signal 280 by decoding the data stream 130 generated by the encoding process 10 described in Figure 1.
  • the decoding process relates to the decoding of a 1-D signal.
  • this decoding process 200 works as a reverse of the encoding process 10.
  • the decoding process 200 begins at the foot of Figure 2 where a receiver 210 receives the data stream 130.
  • the synchronisation bit 75, contained within the header of the data stream 130, is used by the bit allocation component 240 to maintain synchronisation between the input data stream and the resultant analogue output signal 280.
  • the compressed data 215 within the data stream 130 is decompressed by the entropy decoding component 220, forming decompressed data 225.
  • variable and adaptive functions 108 contained within the input data stream 130 are removed by the external transform for exception handling component 230. This component is discussed below.
  • the signal transform process 250 reconstructs a digital signal 270 from the input parameters contained within the data stream 130.
  • Database 260 contains an identical set of transformation and decomposition functions 140 to that of the encoding database 50 as shown in Figure 1.
  • the signal transform process 250 uses the transformation and decomposition functions 140 retrieved from the database 260 to generate the digital signal 270 from the parameters contained within the input data stream 130.
  • each function 140 within the database 50 is performed on the digital signal 40.
  • the output digital signal 270 is generated by performing in reverse each of those same functions 140 on the input parameters contained within the data stream 130 to reconstruct the digital signal 270 corresponding to the original digital signal 40.
  • the output digital signal 270 may be optionally converted into an analogue output signal 280, corresponding to the original analogue signal 30, and directed to an output device. This may be the original CCD sensor 20 or another device using data in the digital domain 290.
  • Figure 3 shows the group of components 45 enclosed within the dashed lines of Figure 1. These components are concerned with the initial processing of the analogue input signal 30 forming each of the discrete digital signals 40.
  • the analogue signal 30 is processed by the signal processor 35 that has the following components.
  • the analogue signal 30 is filtered using an anti-aliasing filter 300. This filters out higher frequency components of the analogue signal 30 that may distort when sampled by the analogue to digital converter 320.
  • the anti-aliasing filter 300 works according to the Nyquist criterion and in accordance with Shannon's sampling theorem. These criteria require the analogue to digital converter 320 to sample the analogue input signal 30 at at least twice the frequency of the maximum frequency component of the analogue signal 30. These criteria are well-known to the person skilled in the art of digital electronics and so require no further explanation here.
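The need for the anti-aliasing filter 300 can be demonstrated numerically: a component above half the sample rate produces exactly the same samples as a lower-frequency "alias". A brief Python illustration (not part of the patent text):

```python
import math

def sample_sine(freq_hz, rate_hz, count):
    """Sample a unit-amplitude sine wave of freq_hz at rate_hz."""
    return [math.sin(2 * math.pi * freq_hz * k / rate_hz)
            for k in range(count)]

# A 3 Hz tone sampled at only 4 Hz (below its 6 Hz Nyquist rate) is
# indistinguishable from an inverted 1 Hz tone: the samples coincide.
undersampled = sample_sine(3, 4, 8)
alias = [-s for s in sample_sine(1, 4, 8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(undersampled, alias))
```

Filtering out all components above half the sample rate before the analogue to digital converter 320 prevents such folded frequencies from corrupting the digital signal 40.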
  • the signal 315 produced by the anti-aliasing filter 300 is acquired by the sample and hold device 310 before being converted into the digital domain by the analogue to digital converter 320.
  • a digital signal processing unit (DSP) 330 contains at least two memory banks 340 (0 and 1).
  • the digital data representing the analogue signal are stored in discrete time interval components 325 within these memory banks 340.
  • the digital signal is interleaved and each discrete time interval component 325 is processed while the remaining components are stored in memory within the DSP 330.
  • a clock speed for the DSP 330 is chosen to be significantly faster than the sample rate of the digital signal. Therefore, there is the potential for several clock cycles to be used to process each discrete-time interval component 325 of the digital signal 40. In this way the analogue input signal 30 may be processed in real time with very little delay between the analogue input signal 30 entering the encoder 10 and the production of the output data stream 130.
  • the signal processor 35 may be a single discrete device. Alternatively, a single device may provide both the signal processor 35 and signal transform component 60.
  • the database 50 as shown in Figure 1 comprises all of the modelisation functions and transformations performed on the digital signal 40.
  • the results of these transformations 140 are quantised by the quantisation component 80 and are then extracted by the extraction of significant parameters component 90 forming a set of extracted parameters 95.
  • the variable and adaptive functions 108 are used by the external transform for exception handling component 100. These too may be stored within database 50 or in a separate database or file (not shown).
  • the variable and adaptive functions 108 comprise the adaptive kernel of functions. Therefore, the overall kernel of functions 420 comprises both the core kernel and the adaptive kernel of functions.
  • the database 50 contains both the modelisation functions and their associated transformations used to generate the extracted parameters 95.
  • Figure 4 shows a schematic diagram describing the choice of properties of the kernel of functions. Each of these functions is described schematically by equation 420.
  • the composition of the kernel of functions 420 is determined from the properties of the audio visual material 400 producing the analogue input signal 30 that is to be encoded. When determining the actual composition of these functions 420, it is also necessary to consider the underlying device that will generate this audio visual material 400.
  • the core kernel of functions may include many well-known functions and their associated transformations. These may include wavelet transformations such as discrete wavelet transforms (DWT), fractal decomposition functions, discrete cosine transforms (DCT) and other filter-based functions. These functions are well known in the field of data compression but will be used to extract the parameters 95 that will be used to recompose the signal during the decoding process, and so do not require any further discussion here.
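As one concrete example of the kind of function the core kernel might contain, a single level of the Haar discrete wavelet transform splits a signal into averages and details and is exactly invertible. This Python sketch is purely illustrative; the patent does not specify which wavelet or transform is used:

```python
def haar_forward(signal):
    """One level of the Haar DWT: pairwise averages (coarse signal)
    and details (differences). Assumes an even-length signal."""
    averages = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    details = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return averages, details

def haar_inverse(averages, details):
    """Exactly reverse haar_forward, recovering the original samples."""
    out = []
    for avg, det in zip(averages, details):
        out.extend([avg + det, avg - det])
    return out

signal = [4, 2, 6, 8]
averages, details = haar_forward(signal)
assert haar_inverse(averages, details) == signal
```

The extracted parameters 95 would then be drawn from the transform output (for example, keeping only the larger coefficients), with the inverse transform applied at the decoder.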
  • variable and adaptive functions 108 are additional modelisation functions with associated transformation functions that may be dynamically generated or retrieved in response to the input signal 40. These variable and adaptive functions 108 may be stored within a separate database or within database 50.
  • the signal processor 35 may be a neural network processor, which may determine whether or not to store particular variable and adaptive functions 108 based upon the properties of the analogue input signal 30 or the resources of the intended decoding processor or a combination of both.
  • variable and adaptive functions 108 are generated and retrieved by the external transform for exception handling component 100 shown in Figure 1. These variable and adaptive functions 108 are not continuously performed on the digital signal 40, but are only created when the core kernel functions produce an output which is outside predetermined limits. This may be determined by reconstructing the digital signal 270 from the core kernel functions (within the encoder 10), comparing the reconstructed digital signal 270 with the original digital signal 40 provided through line 109, and calculating the difference between these two signals. If the error is outside predetermined limits, one or more variable and adaptive functions 108 are created. This quality checking procedure is performed within the external transform for exception handling component 100. Details concerning the variable and adaptive functions 108 are added to the data output stream 130 header so that they may be interpreted by the decoder process 200.
  • the quality checking procedure is repeated using the parameters generated by the variable and adaptive functions 108 in the same way as the quality checking procedure is performed for parameters generated from the core kernel functions 420. If the error is still outside of the predetermined limits, the portion of the digital signal 325 that failed to be parameterised within acceptable limits (an artefact) is added to the output data 130 as raw data or can be compressed by the usual means, as is known from the prior art (entropy encoding, etc.).
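The escalation described above (core kernel first, then variable and adaptive functions, then raw data) can be sketched as a simple control loop. Everything below is an illustrative assumption about the interfaces, which the patent leaves unspecified:

```python
def encode_with_fallback(segment, core_encode, core_decode,
                         adaptive_pairs, limit):
    """Try the core kernel first; if the reconstruction error exceeds the
    predetermined limit, try each variable/adaptive function; if all fail,
    fall back to raw data (the 'artefact' case)."""
    def error(decoded):
        return max(abs(a - b) for a, b in zip(segment, decoded))

    params = core_encode(segment)
    if error(core_decode(params)) <= limit:
        return ('core', params)
    for name, (enc, dec) in adaptive_pairs.items():
        params = enc(segment)
        if error(dec(params)) <= limit:
            return ('adaptive:' + name, params)   # details go in the header
    return ('raw', list(segment))                 # stored raw (or compressed)
```

The returned tag stands in for the header details (or raw-data flag) that the decoder process 200 would use to pick the matching reconstruction path.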
  • Equation 1 represents a typical core kernel function as contained within databases 50 and 260:
  • Equation 2 represents a variable and adaptive function as added to the data output stream of the encoder 10:
  • J is a variable and adaptive function.
  • Equation 3 describes generation of core kernel functions 140 contained with the decoder function database 260.
  • the decoder functions must generate a reconstructed digital signal 270 from the input parameters. Where system resources are limited (such as in a mobile phone or other similar device) the decoder functions may work with a lower precision than the encoder functions and so generate a lower quality reconstructed signal compared with the original digital signal 40.
  • Figure 5 shows a schematic diagram describing the deconstruction of a signal (St).
  • the mapper 500 carries out the signal transformation by using each core kernel function 420 from the database as well as any required variable and adaptive functions 108.
  • the data output is represented by Equation 4:
  • £ 2k is a vector describing the extracted parameters 95 extracted from the core kernel of functions 420 as well as any variable and adaptive functions 108 used.
  • Jv represents the variable and adaptive functions 108 that may have been used.
  • £ represents additional parameters included within the data output 130. These parameters may include an indication of the presence of a variable and adaptive function 108 and any flags or parameters describing the original audio/video signal, such as the original format (e.g. PAL, NTSC, etc.).
  • Figure 6 shows a schematic diagram describing the reconstruction of the signal from the data output of the mapper 500.
  • the data output (now data input) is reconstructed within the builder module 600 which provides a reconstructed data output as described by Equation 5:
  • the builder module 600 uses the core kernel functions 420 as well as the variable and adaptive functions 108 (if present) to reconstruct the digital signal 270.
  • the quality of the reconstructed digital signal 270 does not rely on the volume of data contained within the bit stream 130 (output or input parameters) but is instead determined by the functions 140 contained within the encoding and decoding databases and any variable and adaptive functions 108 used. In general, these functions are not transmitted within the bit stream (except for occasional variable and adaptive functions 108 that are transmitted under exceptional circumstances).
  • Synchronisation between the variable and adaptive functions stored in the encoder 10 and decoder 200 is achieved by incorporating instructions to update, amend or delete the variable and adaptive functions within the header of the data stream 130 forming the data output from the mapper 500. Not every variable and adaptive function 108 is retained in the encoder and decoder.
  • the encoder system may be limited in memory or processing power, and so it may be more efficient to generate these new functions on an ad hoc basis as and when required.
  • the encoder will therefore assign a priority coefficient to each new function 108, with the highest priority functions kept in preference over the lower priority functions. These priority coefficients are also transmitted in the header of the data stream 130 and used by the decoder system 200 in determining whether or not to retain the new functions 108 within the decoder 200.
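The priority-coefficient scheme amounts to a bounded cache of adaptive functions in which low-priority entries are evicted first. A minimal sketch with an assumed eviction policy (the patent fixes only the principle, not the rule):

```python
def retain_functions(store, capacity):
    """Keep only the 'capacity' highest-priority adaptive functions.
    'store' maps function id -> priority coefficient carried in the
    stream header; names and policy are illustrative assumptions."""
    keep = sorted(store, key=store.get, reverse=True)[:capacity]
    return {fid: store[fid] for fid in keep}

cache = {'f1': 0.9, 'f2': 0.2, 'f3': 0.7, 'f4': 0.4}
assert set(retain_functions(cache, 2)) == {'f1', 'f3'}
```

Because both encoder and decoder apply the same rule to the same priorities, their retained function sets stay synchronised without transmitting the functions themselves.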
  • the generation and updating of the adaptive and variable functions 108 may be performed by a neural network or other suitable processor.
  • the neural network will use the digital signal 40 to determine which function, if any, to retrieve or generate and will also use a set of criteria describing the required output quality (e.g. HDTV or mobile phone displays).
  • the input analogue signal 30 may be any signal describing data but, in particular, this technique is specifically suitable for video and/or audio signals.
  • the variable and adaptive functions 108 may be generated by the neural network or suitable processor from a set of function libraries separate to the encoding database 50.
  • a further efficiency saving in processing power and data volume may be made by buffering a certain volume of the most recent data 130 in memory. This allows repetitive signal portions 325 to be identified and referenced during the encoding process. Instead of generating parameters 95 each time for each subsequent repetitive signal portion 325 (i.e. repeating the full encoding process for each repetitive signal portion 325, as described above), the encoder will add to the output data 130 a reference to the set of parameters 95 previously buffered. The decoding process will also buffer the input data 130 in memory and will regenerate the repeated signal portion 325 each time a reference to a set of parameters 95 is encountered in the data 130. This results in improved throughput during the decoding process as modelisation functions will not be required for the repetitive signal portions 325. The buffered output signal 270 will be retrieved instead.
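The repetition buffer can be sketched as follows: each newly encoded segment is remembered, and an identical later segment is emitted as a back-reference instead of being re-encoded. The tuple format and function names are illustrative assumptions:

```python
def encode_segments(segments, encode):
    """Emit full parameters for new segments and back-references for
    segments already present in the buffer."""
    seen, out = {}, []
    for seg in segments:
        key = tuple(seg)
        if key in seen:
            out.append(('ref', seen[key]))       # reuse earlier parameters
        else:
            seen[key] = len(out)
            out.append(('params', encode(seg)))  # full encoding
    return out

def decode_segments(stream, decode):
    """Rebuild segments, resolving back-references from the buffer."""
    out = []
    for kind, value in stream:
        out.append(out[value] if kind == 'ref' else decode(value))
    return out

# Identity 'encode' for illustration only.
stream = encode_segments([[1, 2], [3, 4], [1, 2]], lambda s: list(s))
assert stream == [('params', [1, 2]), ('params', [3, 4]), ('ref', 0)]
```

On the decoder side, resolving a reference skips the modelisation functions entirely, which is exactly the throughput saving claimed above.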
  • the description above describes the processing of an analogue signal 30 generated by a sensor 20.
  • the encoding 10 and decoding 200 process are also suitable for signals generated in other formats such as television signals (NTSC, PAL, SECAM, etc.), for instance.
  • Such television signals contain non-video or audio components. Therefore, additional processing is required during both the encoding 10 and decoding 200 processes to process the non-video or audio components.
  • An NTSC signal contains a set of signals before the active video signal is encountered.
  • This set of signals include a blank level, a vertical sync pulse, a horizontal sync pulse, colour burst information and may also contain teletext information.
  • These signals and their use are well known to the person skilled in the art and do not require any further description.
  • Other non-video or audio signals may be present and these signals may be processed in a similar way. As these non-video or audio signals have a well defined structure and content it is possible to reconstruct them from a minimum amount of information as their structure and location within the signal is set as a standard known format. Therefore, these signals are removed from the analogue input signal 30 within a pre-processing stage (prior to the encoding process 10). The properties of these removed signals are added to the data output 130 as a simple flag or reference parameter. In the case of teletext, the data are simply added as raw or compressed text.
  • a further step is required to regenerate the non-video or audio signals (if required) .
  • This additional step regenerates the analogue output signal 280 in the original format using the flag or reference parameters contained with the input data 130.
  • the input data 130 may also contain an additional flag or data element indicating the signal type (NTSC, PAL, SECAM, etc.) that is to be regenerated.
  • any teletext data are added to the output signal, if required.
  • Figure 7 illustrates a second embodiment of the present invention that is similar to the first embodiment except for the differences described below. Identical features will be referenced with the same reference numerals in the following description.
  • Figure 7 shows a schematic diagram of an analogue input signal 30 such as one of the 1-D signals previously described.
  • the analogue input signal 30 is separated into a set 720 of individual analogue signals 740 by a filter bank 710.
  • Each individual signal 740 corresponds to a different frequency range of the original analogue input signal 30.
  • Each individual signal 740 is processed, as described above with reference to Figure 1, resulting in a set 730 of individual data outputs 130 containing header and output data for each individual signal 740, along with an additional item of data corresponding to the frequency band of the individual signal 740. All of the output data 130 are combined in a single file (not shown). On reconstruction, the data 130 corresponding to each separate frequency band are reconstructed separately by the reconstruction process described above with reference to Figure 2. The reconstructed signals are then combined to form a single analogue output signal 280, as shown in Figure 2.
  • the effect of the filter bank 710 is to provide a level of analogue processing before any parameters are extracted. This also allows a certain amount of parallel processing to take place and so improves throughput.
  • This parallel processing may be in addition to the parallel processing of multiple 1-D signals.
  • the extraction process may be either in the analogue or digital domain.
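The band-splitting of this embodiment can be mimicked digitally with a pair of complementary filters: a crude moving-average low band plus its residual high band sum back exactly to the input. This is only a stand-in for the analogue filter bank 710, not the patent's actual filters:

```python
def two_band_split(signal, width=2):
    """Split a signal into complementary low and high bands: a moving
    average supplies the low band and the residual forms the high band."""
    low = []
    for i in range(len(signal)):
        window = signal[max(0, i - width + 1):i + 1]
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high

signal = [1.0, 5.0, 3.0, 7.0]
low, high = two_band_split(signal)
# The two bands sum back exactly to the original signal.
assert [l + h for l, h in zip(low, high)] == signal
```

Each band would then be encoded independently (carrying its frequency-band tag) and the per-band reconstructions summed to form the output signal 280.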
  • Figure 8 shows a flow chart describing a method for encoding a signal according to a third embodiment of the present invention.
  • the encoding process 10' is similar to that shown in Figure 1 and like features have been given the same reference numbers.
  • the encoding process 10' additionally contains a perception filter 800 following the signal transform 60 step, a sorting of parameters 810 step following the extraction of significant parameters 90 step and border optimisation 55 performed before the signal transform 60 step. Additionally, the quantisation 80 step of Figure 1 is replaced by a logarithmic quantisation step 80'.
  • the perception filter 800 uses a set of heuristically determined coefficients, which are independent of the content to be encoded. Each coefficient relates to a specific frequency (or range of frequencies) that may be present within the signal and is used as a multiplication factor to increase or decrease the encoding effort used for parts of the signal having the corresponding frequency.
  • where the signal is a video signal containing image information, certain parts of the image having particular spatial frequencies may be perceived as more important. The perception filter allows the more important parts of the signal to be more fully encoded than those parts of the signal that are perceived to be less important.
  • parts of the signal are encoded differently using varying levels of resources depending on the frequency of those portions of the signal. This allows bandwidth and computing power to be used more efficiently.
  • the perception filter step 800 may be applied directly to the signal or alternatively, to the output parameters generated after the signal is transformed.
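Treating the signal as a set of per-frequency transform coefficients, the perception filter and its inverse reduce to multiplying and dividing by the same heuristic weights. The weights below are invented for illustration; the patent does not publish its coefficient set:

```python
def perception_filter(coeffs_by_freq, weights_by_freq):
    """Scale each frequency component by its heuristic perceptual weight
    (encoder side): larger weights mean more encoding effort."""
    return {f: c * weights_by_freq.get(f, 1.0)
            for f, c in coeffs_by_freq.items()}

def inverse_perception_filter(coeffs_by_freq, weights_by_freq):
    """Divide by the same weights to restore the original levels."""
    return {f: c / weights_by_freq.get(f, 1.0)
            for f, c in coeffs_by_freq.items()}

weights = {'low': 2.0, 'high': 0.5}   # spend more effort on low frequencies
coeffs = {'low': 3.0, 'high': 8.0}
filtered = perception_filter(coeffs, weights)
assert filtered == {'low': 6.0, 'high': 4.0}
assert inverse_perception_filter(filtered, weights) == coeffs
```

In the decoder (Figure 9), the inverse perception filter 900 applies the reciprocal weights to return each frequency component to its original level.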
  • the sorting of parameters 810 step groups together parameters formed from different types of transformations. By grouping similar parameters together they may be compressed by the entropy coding 110 step more efficiently. Exceptions are directed to the external transform for exception handling component 100.
  • Logarithmic quantisation is used in step 80' instead of linear quantisation as described with relation to Figure 1.
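Logarithmic quantisation spends its precision on small amplitudes, where errors matter most. A mu-law style curve is one common way to realise this; the patent does not name a particular curve, so the following Python sketch is an assumption:

```python
import math

def log_quantise(x, levels=16, mu=255.0):
    """Compress x in [-1, 1] with a mu-law style curve, then quantise
    the compressed value to a uniform grid of the given resolution."""
    compressed = math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
    return round(compressed * (levels - 1)) / (levels - 1)

def log_dequantise(q, mu=255.0):
    """Expand a quantised mu-law value back to the linear domain."""
    return math.copysign((math.exp(abs(q) * math.log1p(mu)) - 1) / mu, q)

# Small amplitudes survive with a much finer effective step than under
# a 16-level linear quantiser.
x = 0.02
assert abs(log_dequantise(log_quantise(x)) - x) < 0.01
```

The decoder applies the matching expansion, so only the choice of curve and level count need be agreed between the two ends.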
  • Border optimisation 55 is used to generate data required to perform reliable encoding. Typically, this occurs near to the border of an image frame, where data outside of the border are required to encode the signal. These "missing" data may be generated by using symmetry, where missing data are generated by interpolation from data contained within the border. Motion estimation may also be used. For instance, where a series of frames is panned from side to side or zoomed in, data from a previous frame may be used to generate new data and so extend the image beyond its border.
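The symmetry option for border optimisation is typically implemented as mirror (symmetric) extension: samples just inside the border are reflected to synthesise the "missing" data outside it. A one-dimensional Python sketch; the patent does not fix the exact reflection convention:

```python
def mirror_pad(row, pad):
    """Extend a row of samples beyond its borders by reflecting the
    samples about each edge (the edge sample itself is not repeated)."""
    left = row[1:pad + 1][::-1]      # mirror of the samples after the edge
    right = row[-pad - 1:-1][::-1]   # mirror of the samples before the edge
    return left + row + right

assert mirror_pad([10, 20, 30, 40], 2) == [30, 20, 10, 20, 30, 40, 30, 20]
```

A transform window that overhangs the frame edge can then be applied to the padded row as though the data outside the border were real.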
  • Figure 9 shows a flow diagram describing a process of generating an analogue output signal 280 by decoding the data stream 130 generated by the encoding process 10' described in Figure 8.
  • the decoding process 200' is similar to that shown in Figure 2 and like features have been given the same reference numbers.
  • Figure 9 additionally includes an inverse perception filter 900, which contains a corresponding set of coefficients to the perception filter 800 used to encode the signal and described with reference to Figure 8.
  • the coefficients are used as multiplication factors to return each frequency component of the signal to its original level, i.e., before the perception filter 800 applied any scaling factors.
  • the database 50 may contain additional transformation or modelisation functions that are not performed on each segment of the digital signal 40.
  • the adaptive and variable functions 108 may also be stored within database 50.
  • Other types of transformation and modelisation functions may be performed during the encoding 10 and decoding 200 processes.
  • the header of the data stream 130 may contain additional information regarding the analogue input signal 30 such as the date and time of the acquisition of the signal, the type of equipment used and the level of quantisation used.
  • the discrete segments of the digital signal 325 may be stored in any number of memory banks 340 and these memory banks 340 may be located outside or inside the DSP 330. Any type of data may be encoded and decoded by this technique such as data stored in a database or within a computer system, for instance.
  • the entropy coding 110 and decoding 220 processes may use any well-known lossless compression techniques such as Huffman coding, simple run-length encoding and other arithmetic techniques.
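Of the lossless techniques named above, run-length encoding is the simplest to show. A minimal Python sketch (illustrative; any of the named schemes could serve as the entropy coder):

```python
def rle_encode(data):
    """Collapse runs of repeated values into (value, count) pairs."""
    out = []
    for value in data:
        if out and out[-1][0] == value:
            out[-1][1] += 1
        else:
            out.append([value, 1])
    return [tuple(pair) for pair in out]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [value for value, count in pairs for _ in range(count)]

data = [7, 7, 7, 2, 2, 9]
encoded = rle_encode(data)
assert encoded == [(7, 3), (2, 2), (9, 1)]
assert rle_decode(encoded) == data
```

Run-length coding pairs naturally with the parameter-sorting step, since grouping similar parameters lengthens the runs.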
  • the synchronisation signal may be a clock signal generated within the analogue or digital processor or may be a synchronisation signal contained within the signal itself, such as an NTSC, PAL or SECAM synchronisation signal, for instance. In this case it may not be necessary to add a synchronisation signal as this may be present already. Alternatively, a synchronisation signal may be completely absent provided internal synchronisation or timing is kept when encoding or decoding the signal.
  • Figures 8 and 9 may be implemented together or separately within systems or methods to encode and decode signals.
  • the output and input data streams may be continuous signals or files or sets of files.
  • the signal may represent optical images, video or sound or music data.

Abstract

The present invention relates to a method and apparatus for encoding a signal. The method comprises: performing a transformation on the signal using a transformation function (140), generating at least one output parameter as a result of the transformation; associating a synchronisation signal (75) with the output parameter; and providing a data output (130) containing the synchronisation signal (75) and the output parameter. The invention also relates to a method and apparatus for decoding an encoded signal. The method comprises: receiving a data input containing a synchronisation signal (75) and a parameter (95) derived from a transformation (140) performed on the signal during encoding; maintaining synchronisation of the decoding process using the synchronisation signal (75); and generating a signal by performing a function corresponding to the transformation (140) on the output parameter.
PCT/GB2006/001296 2005-04-07 2006-04-07 Codage et decodage de signaux WO2006106356A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/GB2007/001247 WO2007116207A1 (fr) 2006-04-07 2007-04-05 Codage et décodage de signal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0507092.5 2005-04-07
GB0507092A GB2425011A (en) 2005-04-07 2005-04-07 Encoding video data using a transformation function

Publications (1)

Publication Number Publication Date
WO2006106356A1 true WO2006106356A1 (fr) 2006-10-12

Family

ID=34586876

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2006/001296 WO2006106356A1 (fr) 2005-04-07 2006-04-07 Codage et decodage de signaux

Country Status (2)

Country Link
GB (2) GB2425011A (fr)
WO (1) WO2006106356A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023169332A1 (fr) * 2022-03-08 2023-09-14 盖玉梅 Procédé de régénération de signal par modèle

Citations (4)

Publication number Priority date Publication date Assignee Title
EP0790741A2 (fr) * 1995-10-27 1997-08-20 Texas Instruments Incorporated Procédé et système pour la compression vidéo
US20020054634A1 (en) * 2000-09-11 2002-05-09 Martin Richard K. Apparatus and method for using adaptive algorithms to exploit sparsity in target weight vectors in an adaptive channel equalizer
US20040057457A1 (en) * 2001-01-13 2004-03-25 Sang-Woo Ahn Apparatus and method for transmitting mpeg-4 data synchronized with mpeg-2 data
US20050030205A1 (en) * 2002-03-19 2005-02-10 Fujitsu Limited Hierarchical encoding and decoding devices

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
EP0482888B1 (fr) * 1990-10-25 1997-06-04 Matsushita Electric Industrial Co., Ltd. Dispositif d'enregistrement et de reproduction de signal vidéo
JPH0583696A (ja) * 1991-06-07 1993-04-02 Sony Corp 画像符号化装置
JPH05115007A (ja) * 1991-10-21 1993-05-07 Canon Inc 画像伝送方法
JP3360844B2 (ja) * 1992-02-04 2003-01-07 ソニー株式会社 ディジタル画像信号の伝送装置およびフレーム化方法
US5926209A (en) * 1995-07-14 1999-07-20 Sensormatic Electronics Corporation Video camera apparatus with compression system responsive to video camera adjustment
DE19615657A1 (de) * 1996-04-19 1997-08-21 Siemens Ag Verfahren zur Komprimierung von eine Bildfolge repräsentierenden Bilddaten
GB9919381D0 (en) * 1999-08-18 1999-10-20 Orad Hi Tech Systems Limited Narrow bandwidth broadcasting system
US6771823B2 (en) * 2000-03-21 2004-08-03 Nippon Hoso Kyokai Coding and decoding of moving pictures based on sprite coding
AU2001279304A1 (en) * 2000-07-13 2002-01-30 Christopher J. Feola System and method for associating historical information with sensory data and distribution thereof

Also Published As

Publication number Publication date
GB2425013A (en) 2006-10-11
GB0607067D0 (en) 2006-05-17
GB2425011A (en) 2006-10-11
GB0507092D0 (en) 2005-05-11

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06726697; Country of ref document: EP; Kind code of ref document: A1)
WWW Wipo information: withdrawn in national office (Ref document number: 6726697; Country of ref document: EP)