AU2009225320A1 - Method of decoding image using iterative DVC approach - Google Patents


Info

Publication number
AU2009225320A1
Authority
AU
Australia
Prior art keywords
image
point spread
spread function
error correction
module
Prior art date
Legal status
Abandoned
Application number
AU2009225320A
Inventor
Ka Ming Leung
Zhonghua Ma
Current Assignee
Canon Inc
Original Assignee
Canon Inc
Priority date
Filing date
Publication date
Application filed by Canon Inc


Classifications

    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/3057 Distributed source coding, e.g. Wyner-Ziv, Slepian-Wolf
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 … using adaptive coding
    • H04N19/169 … using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 … the unit being an image region, e.g. an object
    • H04N19/176 … the region being a block, e.g. a macroblock
    • H04N19/30 … using hierarchical techniques, e.g. scalability
    • H04N19/395 … involving distributed video coding [DVC], e.g. Wyner-Ziv video coding or Slepian-Wolf video coding
    • H04N19/50 … using predictive coding
    • H04N19/59 … involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Description

S&F Ref: 905849

AUSTRALIA, PATENTS ACT 1990: COMPLETE SPECIFICATION FOR A STANDARD PATENT

Name and Address of Applicant: Canon Kabushiki Kaisha, of 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo, 146, Japan

Actual Inventor(s): Zhonghua Ma, Ka Ming Leung

Address for Service: Spruson & Ferguson, St Martins Tower, Level 35, 31 Market Street, Sydney NSW 2000 (CCN 3710000177)

Invention Title: Method of decoding image using iterative DVC approach

The following statement is a full description of this invention, including the best method of performing it known to me/us:

METHOD OF DECODING IMAGES USING ITERATIVE DVC APPROACH

TECHNICAL FIELD

The present invention relates generally to image encoding and decoding and, in particular, to a method and apparatus for performing image and video coding using distributed source coding.

BACKGROUND

Various products, such as digital (still) cameras and digital video cameras, are used to capture images and videos. These products contain an image sensing device, such as a charge-coupled device (CCD), which is used to capture light energy, focussed on the image sensing device, that is indicative of a scene. The captured light energy is then processed to form a digital image. Various formats are used to represent such digital images, or videos. Formats used to represent video include JPEG, JPEG2000, Motion JPEG, Motion JPEG2000, MPEG-1, MPEG-2, MPEG-4 and H.264.

All the formats listed above are compression formats, although the manner in which compression is performed varies. While these formats offer high quality and increase the number of images that can be stored on a given medium, they typically suffer from long encoding runtimes. For conventional formats, such as JPEG, JPEG2000, Motion JPEG, Motion JPEG2000, MPEG-1, MPEG-2, MPEG-4 and H.264, the encoding process is typically five to ten times more complex than the corresponding decoding process. A complex encoder requires complex hardware.
Complex encoding hardware is in turn disadvantageous in terms of design cost, manufacturing cost and physical size. Furthermore, a long encoding runtime can result in delays in the operation of the camera shutter, thus reducing the capture rate. Additionally, more complex encoding hardware has higher energy consumption. Since an extended battery life is desirable for a mobile device, it is desirable that hardware complexity be minimized in mobile devices.

To minimize the complexity of an encoder, "distributed video coding" (DVC), which is based on the well-known Wyner-Ziv coding paradigm, may be used. In a DVC scheme, the complexity is shifted from the encoder to the decoder. DVC may be used for both still images and video, the latter being essentially a regular sequence of still images.

For still image applications, an input image is often down-sampled to a lower resolution and is transmitted to the decoder via a storage medium or a communication channel. The down-sampled image is often compressed using a conventional coding scheme, such as JPEG, JPEG2000, or H.264 (Intra). The decoder conventionally decodes the down-sampled image and uses the resulting image as side information to perform error correction using channel coding methods, such as turbo codes and LDPC codes.

For video applications, an input video sequence is often partitioned into two sets of frames (or images). The first set is named "key frames", which are often compressed using one of the aforementioned conventional coding schemes. The second set is named "non-key frames", which are compressed according to the Wyner-Ziv encoding paradigm. At the decoder, the key frames are conventionally decoded. Then, motion-compensated interpolation or extrapolation is performed to generate an estimate of the non-key frames, also known as side information in the area of distributed video coding, from the decoded key frames.
Finally, error correction methods, such as turbo codes and LDPC codes, are applied to correct the estimation errors in the side information using the non-key frame information, and to form the reconstructed non-key frames.

Several approaches have been proposed in the literature to exploit spatial correlation in images at the decoder using a distributed video coding scheme.

One approach is to partition the input image into n-by-n blocks. Pixels in corresponding positions in the n-by-n blocks are concatenated together to form sub-bands. The first sub-band is often entropy encoded and is sent to the decoder to produce the initial side information. The remaining sub-bands are Wyner-Ziv coded one by one. At the decoder, the first sub-band is conventionally decoded and is used as side information for the second sub-band. Wyner-Ziv decoding is then performed to reconstruct the second sub-band from the side information and the received Wyner-Ziv bits. The reconstructions of the first and second sub-bands are then used to derive the side information for the third sub-band. This process repeats to decode all subsequent sub-bands. Hence, the reconstruction of the decoded sub-bands contributes to the side information for subsequent sub-bands.

Another approach is to perform Wyner-Ziv coding in the transform domain and to reorder the transform coefficients using, for example, zero-tree entropy (ZTE) coding. In this approach, only some significant coefficients are actually Wyner-Ziv encoded and are recovered at the decoder to improve the visual quality.

SUMMARY

According to one aspect of the present disclosure, there is provided a method of encoding image data in a distributed video coding system.
The method generates a down-sampled image from an input image according to a point spread function, and generates error correction bits based on the point spread function and the input image using a bitwise error correction method, to support up-sampling during decoding. At least one of storing and transmitting the point spread function, the down-sampled image, and the error correction bits as the encoded image data is then performed.

The point spread function may be determined using a residual difference of the input image and a filtered representation of the input image. Desirably, the input image is divided into blocks and the point spread function is determined independently for each block. Typically, at least one of a down-sampling scaling factor and a blur kernel is determined by the point spread function. The down-sampling scaling factor may be any real number greater than zero.

The method may further comprise determining encoding parameters from the point spread function, the encoding parameters being input to the bitwise error correction method with the input image to determine the error correction bits. The bitwise error correction method may be non-linear and use turbo codes to form the error correction bits as parity bits. In this case, the encoding parameters include generator polynomials and puncturing schemes. Alternatively, the bitwise error correction method may be linear and use linear codes to form the error correction bits as syndromes. In this case, the linear codes are selected from the group consisting of LDPC codes and BCH codes, and the encoding parameters include information representing a sparse parity check matrix.

According to another aspect, there is disclosed a method of decoding encoded image data in a distributed video coding system.
This method generates a first approximation image by up-sampling a portion of the encoded image data, and enhances the approximation image iteratively using a point spread function of the encoded image data, the enhancing comprising correcting the approximation image during each iteration using error correction bits of the encoded image data.

Preferably, the first approximation image is generated by up-sampling the portion using the point spread function. Also, the approximation image may be enhanced by performing an image deconvolution operation including minimizing an energy function using the point spread function. Desirably, the deconvolution operation is iterative, and the method comprises refining the point spread function during the deconvolution operation, and using the refined point spread function for subsequent iterations of the deconvolution operation. The energy function desirably includes the point spread function and an image gradient field.

Other aspects are also disclosed.

BRIEF DESCRIPTION OF THE DRAWINGS

At least one embodiment of the present invention will now be described hereinafter with reference to the drawings, in which:

Fig. 1 is a schematic block diagram of an exemplary configuration of an image/video coding system;

Fig. 2 is a schematic block diagram of a syndrome encoder;

Fig. 3 is a schematic block diagram of a syndrome decoder;

Fig. 4 is a flow diagram of a PSF generator module;

Fig. 5 is a flow diagram of the deconvolution module; and

Figs. 6A and 6B collectively form a schematic block diagram of a computer system in which the arrangements shown in Figs. 1, 2, 3, 4, and 5 may be implemented.

DETAILED DESCRIPTION INCLUDING BEST MODE

Methods, apparatus, and computer program products are disclosed for processing digital images each comprising a plurality of pixels.
In the following description, numerous specific details, including image/video compression/encoding formats and the like, are set forth. However, from this disclosure, it will be apparent to those skilled in the art that modifications and/or substitutions may be made without departing from the scope and spirit of the invention. In other circumstances, specific details may be omitted so as not to obscure the invention.

Where reference is made in any one or more of the accompanying drawings to steps and/or features which have the same reference numerals, those steps and/or features have for the purposes of this description the same function(s) or operation(s), unless the contrary intention appears.

The present inventors observe that the generation of side information at the decoder in a DVC system is based primarily on spatial correlation. However, structural information contained in the original image due to the image formation process has never been fully exploited. One example of such structural information is the blurring characteristics of the acquired image, which are highly associated with the point spread function (PSF) of the acquiring optical system (e.g. lens), and with the properties of any pre-processing filters which form a part of the image acquisition pipeline. As a consequence, the rate-distortion performance of such image coding systems is often sub-optimal. In addition, DVC systems generally require a feedback channel to optimize the rate-distortion performance. This imposes serious restrictions upon the potential application scenarios, and introduces latency into the systems.

Fig. 1 shows a distributed video coding (DVC) system 100 for encoding an acquired image 1010, for storing or transmitting the encoded image, and for decoding the encoded image. The system 100 includes an encoder 1000 and a decoder 1200 connected through a storage and/or transmission medium 1100.
The components 1000 and 1200 of the system 100 may be implemented using a computer system 900, such as that shown in Figs. 6A and 6B, where the encoder 1000 and the decoder 1200 may be implemented as software, such as one or more application programs executable within the computer system 900. The software may be stored in a computer readable medium, including the storage devices described hereinafter, for example. The software is loaded into the computer system 900 from the computer readable medium, and then executed by the computer system 900. A computer readable medium having such software or computer program recorded on the medium is a computer program product.

As seen in Fig. 6A, the computer system 900 is formed by a computer module 901, input devices such as a keyboard 902, a mouse pointer device 903, a scanner 926, a camera 927, and a microphone 980, and output devices including a printer 915, a display device 914 and loudspeakers 917. An external Modulator-Demodulator (Modem) transceiver device 916 may be used by the computer module 901 for communicating to and from a communications network 920 via a connection 921. The network 920 may be a wide-area network (WAN), such as the Internet or a private WAN. Where the connection 921 is a telephone line, the modem 916 may be a traditional "dial-up" modem. Alternatively, where the connection 921 is a high capacity (e.g. cable) connection, the modem 916 may be a broadband modem. A wireless modem may also be used for wireless connection to the network 920.

The computer module 901 typically includes at least one processor unit 905, and a memory unit 906, for example formed from semiconductor random access memory (RAM) and semiconductor read only memory (ROM).
The computer module 901 also includes a number of input/output (I/O) interfaces, including an audio-video interface 907 that couples to the video display 914, loudspeakers 917 and microphone 980, an I/O interface 913 for the keyboard 902, mouse 903, scanner 926, camera 927 and optionally a joystick (not illustrated), and an interface 908 for the external modem 916 and printer 915. In some implementations, the modem 916 may be incorporated within the computer module 901, for example within the interface 908. The computer module 901 also has a local network interface 911 which, via a connection 923, permits coupling of the computer system 900 to a local computer network 922, known as a Local Area Network (LAN). As also illustrated, the local network 922 may couple to the wide network 920 via a connection 924, which would typically include a so-called "firewall" device or device of similar functionality. The interface 911 may be formed by an Ethernet circuit card, a Bluetooth™ wireless arrangement or an IEEE 802.11 wireless arrangement.

The interfaces 908 and 913 may afford either or both of serial and parallel connectivity, the former typically being implemented according to the Universal Serial Bus (USB) standards and having corresponding USB connectors (not illustrated). Storage devices 909 are provided and typically include a hard disk drive (HDD) 910. Other storage devices such as a floppy disk drive and a magnetic tape drive (not illustrated) may also be used. An optical disk drive 912 is typically provided to act as a non-volatile source of data. Portable memory devices, such as optical disks (e.g. CD-ROM, DVD), USB-RAM, and floppy disks, may then be used as appropriate sources of data to the system 900.
The components 905 to 913 of the computer module 901 typically communicate via an interconnected bus 904 and in a manner that results in a conventional mode of operation of the computer system 900 known to those in the relevant art. Examples of computers on which the described arrangements can be practised include IBM-PCs and compatibles, Sun Sparcstations, Apple Mac™ or like computer systems evolved therefrom.

The method of distributed video coding, in which images comprise a plurality of pixels, may be implemented using the computer system 900, wherein the processes may be implemented as one or more software application programs 933 executable within the computer system 900. In particular, the steps of the method of processing digital images each comprising a plurality of pixels are effected by instructions 931 in the software 933 that are carried out within the computer system 900. The software instructions 931 may be formed as one or more code modules, each for performing one or more particular tasks. The software may also be divided into two separate parts, in which a first part and the corresponding code modules perform the methods of processing digital images each comprising a plurality of pixels, and a second part and the corresponding code modules manage a user interface between the first part and the user.

The software 933 is generally loaded into the computer system 900 from a computer readable medium and is then typically stored in the HDD 910, as illustrated in Fig. 6A, or the memory 906, after which the software 933 can be executed by the computer system 900. In some instances, the application programs 933 may be supplied to the user encoded on one or more CD-ROMs 925 and read via the corresponding drive 912 prior to storage in the memory 910 or 906.
Alternatively, the software 933 may be read by the computer system 900 from the networks 920 or 922, or loaded into the computer system 900 from other computer readable media. Computer readable storage media refers to any storage medium that participates in providing instructions and/or data to the computer system 900 for execution and/or processing. Examples of such storage media include floppy disks, magnetic tape, CD-ROM, a hard disk drive, a ROM or integrated circuit, USB memory, a magneto-optical disk, or a computer readable card such as a PCMCIA card and the like, whether or not such devices are internal or external to the computer module 901. Examples of computer readable transmission media that may also participate in the provision of software, application programs, instructions and/or data to the computer module 901 include radio or infra-red transmission channels, as well as a network connection to another computer or networked device, and the Internet or Intranets, including e-mail transmissions and information recorded on Websites and the like.

The second part of the application programs 933 and the corresponding code modules mentioned above may be executed to implement one or more graphical user interfaces (GUIs) to be rendered or otherwise represented upon the display 914. Through manipulation of typically the keyboard 902 and the mouse 903, a user of the computer system 900 and the application may manipulate the interface in a functionally adaptable manner to provide controlling commands and/or input to the applications associated with the GUI(s). Other forms of functionally adaptable user interfaces may also be implemented, such as an audio interface utilizing speech prompts output via the loudspeakers 917 and user voice commands input via the microphone 980.

Fig. 6B is a detailed schematic block diagram of the processor 905 and a memory 934.
The memory 934 represents a logical aggregation of all the memory devices (including the HDD 910 and semiconductor memory 906) that can be accessed by the computer module 901 in Fig. 6A.

When the computer module 901 is initially powered up, a power-on self-test (POST) program 950 executes. The POST program 950 is typically stored in a ROM 949 of the semiconductor memory 906. A program permanently stored in a hardware device such as the ROM 949 is sometimes referred to as firmware. The POST program 950 examines hardware within the computer module 901 to ensure proper functioning, and typically checks the processor 905, the memory (909, 906), and a basic input-output systems software (BIOS) module 951, also typically stored in the ROM 949, for correct operation. Once the POST program 950 has run successfully, the BIOS 951 activates the hard disk drive 910. Activation of the hard disk drive 910 causes a bootstrap loader program 952 that is resident on the hard disk drive 910 to execute via the processor 905. This loads an operating system 953 into the RAM memory 906, upon which the operating system 953 commences operation. The operating system 953 is a system-level application, executable by the processor 905, to fulfil various high-level functions, including processor management, memory management, device management, storage management, software application interface, and generic user interface.

The operating system 953 manages the memory (909, 906) to ensure that each process or application running on the computer module 901 has sufficient memory in which to execute without colliding with memory allocated to another process. Furthermore, the different types of memory available in the system 900 must be used properly so that each process can run effectively.
Accordingly, the aggregated memory 934 is not intended to illustrate how particular segments of memory are allocated (unless otherwise stated), but rather to provide a general view of the memory accessible by the computer system 900 and how such memory is used.

The processor 905 includes a number of functional modules, including a control unit 939, an arithmetic logic unit (ALU) 940, and a local or internal memory 948, sometimes called a cache memory. The cache memory 948 typically includes a number of storage registers 944-946 in a register section. One or more internal buses 941 functionally interconnect these functional modules. The processor 905 typically also has one or more interfaces 942 for communicating with external devices via the system bus 904, using a connection 918.

The application program 933 includes a sequence of instructions 931 that may include conditional branch and loop instructions. The program 933 may also include data 932, which is used in execution of the program 933. The instructions 931 and the data 932 are stored in memory locations 928-930 and 935-937 respectively. Depending upon the relative size of the instructions 931 and the memory locations 928-930, a particular instruction may be stored in a single memory location, as depicted by the instruction shown in the memory location 930. Alternately, an instruction may be segmented into a number of parts, each of which is stored in a separate memory location, as depicted by the instruction segments shown in the memory locations 928-929.

In general, the processor 905 is given a set of instructions which are executed therein. The processor 905 then waits for a subsequent input, to which it reacts by executing another set of instructions.
Each input may be provided from one or more of a number of sources, including data generated by one or more of the input devices 902, 903, data received from an external source across one of the networks 920, 922, data retrieved from one of the storage devices 906, 909, or data retrieved from a storage medium 925 inserted into the corresponding reader 912. The execution of a set of the instructions may in some cases result in output of data. Execution may also involve storing data or variables to the memory 934.

The disclosed arrangements use input variables 954, which are stored in the memory 934 in corresponding memory locations 955-958. The arrangements produce output variables 961, which are stored in the memory 934 in corresponding memory locations 962-965. Intermediate variables may be stored in memory locations 959, 960, 966 and 967.

The register section 944-947, the arithmetic logic unit (ALU) 940, and the control unit 939 of the processor 905 work together to perform sequences of micro-operations needed to perform "fetch, decode, and execute" cycles for every instruction in the instruction set making up the program 933. Each fetch, decode, and execute cycle comprises:

(a) a fetch operation, which fetches or reads an instruction 931 from a memory location 928;

(b) a decode operation in which the control unit 939 determines which instruction has been fetched; and

(c) an execute operation in which the control unit 939 and/or the ALU 940 execute the instruction.

Thereafter, a further fetch, decode, and execute cycle for the next instruction may be executed. Similarly, a store cycle may be performed by which the control unit 939 stores or writes a value to a memory location 932.

Each step or sub-process in the processes of Figs.
1, 2, 3, and 4 is associated with one or more segments of the program 933, and is performed by the register section 944-947, the ALU 940, and the control unit 939 in the processor 905 working together to perform the fetch, decode, and execute cycles for every instruction in the instruction set for the noted segments of the program 933.

The encoder 1000 and the decoder 1200 of Fig. 1 may alternatively be implemented in dedicated hardware such as one or more integrated circuits. Such dedicated hardware may include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), graphics processors, digital signal processors, or one or more microprocessors and associated memories.

In one implementation, the encoder 1000 and the decoder 1200 are implemented within a camera 927, where the encoder 1000 and the decoder 1200 may be implemented as software being executed by a processor of the camera 927, or may be implemented using dedicated hardware within the camera 927. In another implementation, only the encoder 1000 is implemented within a camera. The encoder 1000 may be implemented as software executing in a processor of the camera 927, or implemented using dedicated hardware within the camera 927.

Referring again to Fig. 1, the encoder 1000 includes a PSF generator module 1020, a down-sampler module 1030, an intraframe encoder 1040, and a syndrome encoder 1050. The PSF generator module 1020, which is described later in detail, receives the acquired image 1010 and selects an appropriate point spread function for use by the down-sampler module 1030 in performing a down-sampling operation. The PSF determines the type of the down-sampling filter and the scaling factor to be used by the down-sampling operation. The PSF is provided by the PSF generator 1020 in the form of a bit stream 1110. The scaling factor may be any integer or floating point number, and thus any real number greater than zero.
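PSF-guided down-sampling of the kind just described can be sketched as follows. This is a minimal illustration only, assuming a separable Gaussian PSF and an integer scaling factor; the function names are illustrative and are not taken from the patent:

```python
import numpy as np

def gaussian_psf(radius: int, sigma: float) -> np.ndarray:
    """Normalised 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

def downsample(image: np.ndarray, psf: np.ndarray, factor: int) -> np.ndarray:
    """Blur `image` with the separable PSF, then keep every `factor`-th pixel."""
    pad = len(psf) // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Separable convolution: filter the rows, then the columns.
    rows = np.apply_along_axis(lambda r: np.convolve(r, psf, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, psf, mode="valid"), 0, rows)
    return blurred[::factor, ::factor]
```

A non-integer scaling factor, which the text above also permits, would replace the final decimation with an interpolating resampler.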
The down-sampler module 1030 also takes the acquired image 1010 and the PSF 1110 from the PSF generator module 1020 to generate a reduced-resolution representation 1032 of the input image. The intraframe encoder module 1040 that follows performs the step of encoding this down-sampled image 1032 using a conventional compression method to form an encoded bit stream 1120. The compression method may be baseline-mode JPEG compression, compression according to the JPEG2000 standard, or compression according to the H.264 standard.

The PSF generator module 1020 also computes encoding parameters 1025 (i.e., generator polynomials and puncture patterns in the case of turbo codes) to be used by the syndrome encoder module 1050 for later error correction. The syndrome encoder module 1050 partitions the pixel values of the acquired image 1010 into a number of cosets. Each coset includes a set of pixel values which are distinct from each other in terms of Hamming distance (i.e., bitwise differences). Consequently, the pixel values of the acquired image 1010 are represented by the indices of their cosets, also known as syndromes, and are sent to the decoder for further processing. In an exemplary implementation, the syndromes 1130 output from the syndrome encoder module 1050 are generated using a block-based linear error correction method, such as LDPC codes or BCH codes, based on the computed encoding parameters 1025 from the PSF generator module 1020. In an alternative implementation, the syndrome encoder module 1050 may employ a non-linear error correction method, such as turbo codes, to generate the output bit stream 1130, which may be referred to as parity bits.

The bit streams 1110, 1120, and 1130 are the outputs of the PSF generator module 1020, the intraframe encoder module 1040, and the syndrome encoder 1050 respectively, and collectively form the encoded image data of the acquired image 1010.
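The coset partitioning performed by the syndrome encoder can be illustrated with a small linear code. Here the (7,4) Hamming code stands in for the LDPC or BCH codes named above, so the specific matrix and bit patterns are purely illustrative: the syndrome s = Hx (mod 2) indexes the coset containing a word x, and adding any codeword to x leaves its syndrome, and hence its coset, unchanged.

```python
import numpy as np

# Parity-check matrix H of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(bits) -> tuple:
    """Coset index of a 7-bit word: s = H x (mod 2)."""
    return tuple(H.dot(np.asarray(bits)) % 2)

x = np.array([1, 0, 1, 1, 0, 0, 1])   # a hypothetical pixel bit pattern
c = np.array([1, 1, 1, 0, 0, 0, 0])   # a valid Hamming codeword
# Codewords form the zero coset; x + c (mod 2) stays in x's coset.
```

In this scheme the encoder transmits only the 3-bit syndrome in place of the 7-bit word; the decoder then selects, within that coset, the member closest to its side information.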
These bit streams 1110, 1120, and 1130 may be concatenated for at least one of transmission over, or storage in, the storage and/or transmission medium 1100, for subsequent decompression/decoding by the decoder 1200.

The decoder 1200 receives as inputs the PSF represented by the bit stream 1110, the encoded down-sampled image 1120, and the syndromes 1130. An intraframe decoder module 1210 decodes that portion of the encoded image data represented by the encoded down-sampled image 1120 conventionally by performing the inverse operation of the intraframe encoder module 1040. A down-sampled image 1212 output from the intraframe decoder module 1210 is then restored to original resolution by an up-sampler module 1220. The corresponding original-resolution image 1225 forms an initial approximation of the acquired image 1010 and represents initial side information to be input to a syndrome decoder module 1230 that follows.

The decoder 1200 works iteratively. A loop is formed starting from the syndrome decoder module 1230, to a de-convolution module 1240, then to a re-convolution module 1250, and eventually back to the syndrome decoder module 1230.

Initially, the syndrome decoder module 1230 uses the received syndromes from the bit stream 1130 and the side information from the up-sampler module 1220 to reduce approximation errors in the side information. This results in the syndrome decoder 1230 outputting an image 1235, which is an improved version of the image 1225. Based on the image 1235 from the syndrome decoder module 1230, the deconvolution module 1240 uses the received PSF from the bit stream 1110 to recover fine image details (e.g., the high-frequency components) in the original acquired image 1010. The deconvolution process performed by the module 1240 is iterative and is subject to an energy minimization criterion to minimize the residual between the input 1235 and an output 1245 of the module 1240.
There are two main sub-steps in the deconvolution process. The first sub-step is to estimate the acquired image 1010 based on the received PSF from the bit stream 1110. In the second sub-step, this estimated image is used to refine the PSF to account for distortions introduced by the quantization and/or de-blocking operations in the intraframe encoder 1040. This deconvolution process generates the output 1245 as an unblurred image representing both the current approximation of the acquired image 1010 and a refined estimate of the PSF received from the bit stream 1110.

In the next iterative step, this current approximation unblurred image 1245 is re-convolved by the reconvolution module 1250 using the refined PSF computed during deconvolution by the deconvolution module 1240. The reconvolution module 1250 generates a blurred image 1255. The blurred image 1255 is then compared against the image 1235, of the first iteration, within the syndrome decoder 1230, to determine the degree of estimation errors in the unblurred image 1245 of the first iteration. Ideally, the blurred image 1255 is very close to the image 1235. Otherwise, the image 1255 must contain incorrectly filled pixel values, and these errors are minimized by the syndrome decoder module 1230. This completes the first decoding iteration of the decoder 1200.

For all subsequent decoding iterations, the syndrome decoder module 1230 ignores the image 1225 (the initial side information) and uses the blurred image 1255 as the side information, together with the previously received syndromes 1130, to further refine the approximation of the acquired image 1010. This decoding process repeats until the difference between the images 1235 and 1255 becomes negligible or a predetermined maximum number of decoding iterations is reached.
The decoder 1200 outputs the current approximation 1245 as the output image 1260, which is the final approximation of the acquired image 1010. In an exemplary implementation, if the decoder 1200 fails to converge (e.g., the difference between images 1235 and 1255 remains large after a fixed number of decoding iterations), the decoder 1200 outputs the initial approximation 1235 as the final output image 1260, as denoted by the dashed line 1265 in Fig. 1.

Having described the system 100 for encoding an input image to form three independent bit streams, and jointly decoding the bit streams to reconstruct the output image, components of the system 100 can now be described in more detail, starting with the PSF generator module 1020.

A preferred configuration 400 of the PSF generator module 1020 is shown in more detail in Fig. 4. The configuration 400 may be a method implemented using software code, for example stored on the HDD 910 and read and executed by the processor 905. The acquired image 1010 forms an input to the PSF generator module 1020. In the first step 410, the processor 905 applies to the acquired image 1010 a low-pass filter with a cut-off frequency determined based on the image context. The image context is determined by a frequency analysis of the acquired image 1010. Based on one or more results of the frequency analysis (e.g., frequency coefficients), the cut-off frequency is then determined. In general, a high cut-off frequency preserves fine image details, whereas a low cut-off frequency achieves better compression efficiency. To improve the rate-distortion performance of the PSF module 400, the low-pass filter 410 has a high initial cut-off frequency so as to remove only a small amount of high-frequency information. The result of low-pass filtering is a blurred image 415 representation that is used to estimate the PSF and the parameters for error correction.
Desirably, the low-pass filtering operation is performed by dividing the acquired image 1010 on a block-by-block basis and processing each block in the discrete cosine transform (DCT) domain. For each DCT block, the low-pass filter 410 eliminates the n highest AC DCT coefficients that contribute a certain percentage x (e.g., x = 5%) of the overall energy of the signal.

In the next step 420, a residual difference between the acquired image 1010 and the low-pass representation, formed by the blurred image 415, is computed. This residual difference is desirably determined on a block-by-block basis. The mean μ and the standard deviation σ of the residual are also determined. The standard deviation σ is a measure of the variability of the residual distribution and is computed by taking the square root of the variance of the residual.

In the following step 430, the computed standard deviation σ is compared against a predetermined threshold value T. This threshold value T determines the desired visual quality of the down-sampled image to be produced by the down-sampler module 1030 and is preferably obtained heuristically. If the computed standard deviation σ is smaller than the threshold T, the processing returns to step 410, and the cut-off frequency of the low-pass filter 410 is reduced. In an exemplary implementation, the cut-off frequency is reduced by the same amount (e.g., a further 5% of the overall energy of the signal) on each occasion the low-pass filtering is invoked. In alternative implementations, the adjustment of the cut-off frequency is not fixed and may vary from that of the previous steps. The process of steps 410, 420 and 430 repeats until the standard deviation σ becomes larger than or equal to the threshold T, whereupon the method 400 proceeds to step 440.
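The loop of steps 410-430 can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: a 1-D moving average stands in for the DCT-domain low-pass filter, and "strength" stands in for the cut-off frequency (wider window means lower cut-off).

```python
import statistics

def blur(signal, strength):
    # Stand-in low-pass filter: a wider moving average corresponds to a
    # lower cut-off frequency (more high-frequency detail removed).
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - strength), min(n, i + strength + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def choose_cutoff(signal, threshold, max_strength=8):
    # Steps 410-430: keep lowering the cut-off until the residual's
    # standard deviation sigma reaches the quality threshold T.
    sigma = 0.0
    for strength in range(1, max_strength + 1):
        blurred = blur(signal, strength)
        residual = [a - b for a, b in zip(signal, blurred)]
        sigma = statistics.pstdev(residual)
        if sigma >= threshold:
            return strength, sigma
    return max_strength, sigma
```

A signal with strong high-frequency content exceeds the threshold at the first (highest) cut-off, while a flat signal never does and the loop stops at the most aggressive filter.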
In step 440, the method 400 determines the characteristics of the point spread function, which may consist of a down-sampling scaling factor and the shape of the blur kernel, based on the standard deviation σ and the final cut-off frequency of the low-pass filter computed in step 410. The output of the step 440 is the PSF represented by the bit stream 1110 in Fig. 1.

In a preferred implementation of step 440, the down-sampling scaling factor is chosen as the average ratio of the cut-off frequencies to the block-based frequency spectrum over the entire image. The shape of the blur kernel is determined according to the average of the block-based standard deviation over the entire image. For example, if the average standard deviation is high (which implies the image contains a lot of texture information), a bi-cubic kernel is selected as the blur kernel of the PSF; otherwise, a bilinear kernel is assigned to the PSF. In an alternative implementation of step 440, and in view of the block-by-block processing of the preceding steps, the point spread function is determined independently for each block.

If the acquired image 1010 consists mainly of smooth intensity regions, the down-scaling factor will be large, and the acquired image 1010 can be encoded rapidly and efficiently using a bilinear kernel. On the other hand, if the acquired image 1010 contains texture information which needs to be preserved, the down-scaling factor will be relatively small, and the acquired image 1010 can be more effectively encoded using a bi-cubic kernel.

In the next step 460, the method 400 extracts bit planes from both the acquired image 1010 and the blurred image 415. Preferably, scanning starts on the most significant bit plane of each of the two images. The most significant bits of each image are concatenated to form a bit stream containing only the most significant bits.
In a second pass, the scanning concatenates the second most significant bits of all image pixels. This process repeats in this manner until the least significant bit plane is completed. This generates two bit streams, one for each bit plane of the two images.

In step 470, the method 400 compares each pair of corresponding bit planes from the two images 1010 and 415 and computes the expected bit error rate (BER) of each bit plane. The BER represents the probability of bitwise difference between the two bit planes. Based on the BER, the method 400 then determines in step 470 the encoding parameters 1025 to be used by the error correction code generator module 230 to encode the bit planes. Notably from Fig. 4, the encoding parameters 1025 are ultimately determined based on the PSF 1110. In a specific implementation, where the error correction code generator 230 is a turbo encoder, the encoding parameters 1025 are the appropriate generator polynomials and/or puncture patterns to optimize the rate-distortion performance. In an alternative implementation, where an LDPC encoder is employed, the encoding parameters 1025 specify the parity-check matrix needed to achieve the desired level of bit error protection. This completes the detailed description of the PSF generator module 1020.

Referring again to Fig. 1, the down-sampler module 1030 aims at reducing the spatial resolution of the acquired image 1010. The down-sampling operation is performed according to the selected blur kernel and down-scaling factor, determined by the PSF generator module 1020. In an exemplary implementation, bilinear and bi-cubic down-sampling methods are employed. Alternatively, other down-sampling methods such as nearest neighbour and quadratic, using various kernels such as Gaussian, Bessel, Hamming, Mitchell or Blackman kernels, can also be used.

The intraframe encoder module 1040 can now be described in more detail.
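The bit-plane extraction of step 460 and the BER computation of step 470 described above can be sketched as follows; this is a minimal illustration with assumed function names, not the patent's code.

```python
def bit_planes(pixels, depth=8):
    # One bit stream per plane, scanning from the most significant
    # bit plane down to the least significant (steps 460 and 220).
    planes = []
    for b in range(depth - 1, -1, -1):
        planes.append([(p >> b) & 1 for p in pixels])
    return planes

def bit_error_rate(plane_a, plane_b):
    # Step 470: probability of bitwise difference between two
    # corresponding bit planes of the original and blurred images.
    diff = sum(a != b for a, b in zip(plane_a, plane_b))
    return diff / len(plane_a)
```

The per-plane BER is what lets the encoder pick a code rate strong enough for each plane individually, since higher-order planes of the original and blurred images usually agree far more often than lower-order ones.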
Intraframe coding refers to the various lossless and lossy compression techniques that exploit spatial correlation in images. Common intraframe compression techniques include baseline mode JPEG, JPEG-LS, JPEG 2000, and H.264 (Intra). In an exemplary implementation, an implementation of lossy JPEG compression is used for the intraframe encoder module 1040. The JPEG quality factor is set to 85 by default and can be re-defined between 0 (low quality) and 100 (high quality) by a user operating the computer system 900. The higher the JPEG quality factor, the smaller the quantization step size, and hence the better the approximation of the original image frame after decompression, but at the cost of a larger compressed file.

The syndrome encoder module 1050 is now described with reference to Fig. 2. A preferred module 200, representing the syndrome encoder module 1050 in Fig. 1, includes a quantizer module 210, a bit plane extractor module 220, and an error correction code generator module 230. The acquired image 1010 and the encoding parameters 1025 from Fig. 1 are input to the syndrome encoder module 200, the encoding parameters 1025 being derived from the PSF 1110 (see Fig. 4 as described above). The module 200 may also be implemented as a method represented by software code, stored on the HDD 910 and executable by the processor 905.

In a first step, the processor 905 executes the quantizer module 210 to reduce the bit depth of the acquired image 1010 to generate the quantized image 215. The exact number of bits to encode one pixel is determined by the encoding parameters 1025 received from the PSF generator module 1020 in Fig. 1. Desirably, a uniform quantizer is employed by performing bitwise right shift operations. The quantization step depends on the standard deviation σ in the encoding parameters 1025 and a just-noticeable difference (JND), known per se in the art of perceptual image compression, of the acquired image 1010.
From the encoding parameters 1025, where the standard deviation is represented as σ and the mean of the input image as μ, the quantization step q of the quantizer 210 may be mathematically represented by

    q = ⌊r_JND · μ⌋  if σ < r_JND · μ,
    q = ⌊σ⌋          otherwise,

where r_JND = 0.1.

The quantized pixel values, represented in a quantized image 215 output from the quantizer 210, are then extracted by a bit plane extractor module 220. The bit plane extractor module 220 forms bit streams from the pixel values in the image 215 on a bit-plane-by-bit-plane basis. The operation of the module 220 is substantially identical to that of step 460 described earlier. Scanning starts on the most significant bit plane of the image 215. The most significant bits of the image 215 are concatenated to form a bit stream containing only the most significant bits. In a second pass, the scanning concatenates the second most significant bits of all image pixels. This process repeats in this manner until the least significant bit plane is completed. This generates one bit stream for each bit plane of the image 215.

In the next step of the method 200, the error correction code (ECC) generator module 230 generates error correction bits for each bit plane individually, starting with the most significant bit plane. The error correction bits support later up-sampling operations during decoding. The error correction bits from each bit plane are concatenated together, along with the encoding parameters 1025, to form the output bit stream 1130.

The encoding parameters 1025 are used in the ECC generator 230 to determine the bit rate for each bit plane. In an exemplary implementation, the error correction code generator 230 uses LDPC codes. The encoding parameters 1025 define the sparse parity-check matrix to be used to achieve the desired level of bit error protection. This method provides rate flexibility, which is essential in adapting to the different statistics of different bit planes.
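The quantization step formula above can be sketched as follows. The original equation is partly garbled in the source, so this reading (JND-scaled mean when σ is below it, σ itself otherwise) is a hedged reconstruction, and the function names are illustrative.

```python
import math

R_JND = 0.1  # just-noticeable-difference ratio quoted in the text

def quantization_step(sigma, mu):
    # Reconstructed reading of the garbled formula: use the JND-scaled
    # mean when the residual deviation is below it, else the deviation.
    if sigma < R_JND * mu:
        return max(1, math.floor(R_JND * mu))
    return max(1, math.floor(sigma))

def quantize(pixels, q):
    # Uniform quantizer; the text notes this reduces to a bitwise
    # right shift when q is a power of two.
    return [p // q for p in pixels]
```

With a power-of-two step the integer division is exactly the right-shift quantizer the text describes, which is why the uniform quantizer can be implemented with shifts.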
The output of the error correction code generator 230 is the syndromes 1130.

In an alternative implementation, the error correction code generator 230 uses turbo codes. The encoding parameters 1025 include the standard deviation σ, which is generated by the PSF generator module 1020, and selected generator polynomials and/or puncture patterns of the turbo code to be used to generate the error correction bits 1130. In this case, the error correction bits 1130 are often referred to as parity bits. This completes the detailed description of the syndrome encoder module 1050.

The intraframe decoder module 1210 performs the inverse operation to the intraframe encoder module 1040. The output 1212 of the intraframe decoder is an approximation of the down-sampled version of the acquired image 1010.

The up-sampler module 1220 performs the inverse operation of the down-sampler module 1030 in the encoder 1000 and operates upon the down-sampled image 1212 from the intraframe decoder module 1210. Desirably, an up-sampling kernel and an up-scaling factor used for up-sampling are obtained from the bit stream 1110. The output 1225 of the module 1220 is an initial or first approximation of the acquired image 1010. In an alternative implementation, the up-sampling kernel may be different from that of the down-sampler module 1030. For example, the nearest neighbour or quadratic up-sampling filters may be employed.

A preferred implementation 300 of the syndrome decoder module 1230 is now described in more detail with reference to Figs. 1 and 3. The syndrome decoder module 300 is seen to include a quantizer module 320, a bit plane extractor module 330, an error correction code decoder module 340, an image reconstructor module 350, and a syndrome checker module 360. The syndrome decoder module 300 receives two inputs. A first input 310 is an approximation of the acquired image 1010 to be used as side information.
For the first decoding iteration, the input image 310 is the output image 1225 of the up-sampler module 1220 in Fig. 1. For all the subsequent iterations, the input image 310 is the blurred image 1255 from the reconvolution module 1250. The second input of the syndrome decoder module 1230 is the syndrome bit stream 1130, which represents the error correction bits and the encoding parameters 1025 from the syndrome encoder module 1050 of Fig. 1.

The syndrome decoder module 300 is preferably implemented using software code, stored on the HDD 910 and executed by the processor 905. In a first decoding step, the processor 905 executes the quantizer module 320 to reduce the bit depth of the image 310 to generate an image 325. Desirably, the quantizer module 320 is substantially identical to the module 210 in Fig. 2. The quantized pixel values are then extracted by a bit plane extractor module 330.

In the bit plane extractor module 330, the processor 905 forms bit streams from the pixel values in the image 325 on a bit-plane-by-bit-plane basis. Again, the bit plane extractor module 330 is desirably substantially identical to the module 220 in Fig. 2. The processing begins from the most significant bit plane and proceeds to the least significant bit plane, generating one bit stream for each bit plane of the image 325.

In the next step, the error correction code decoder module 340 performs the operation of bitwise error correction to correct the estimation errors in the output of the bit plane extractor module 330. This operation is performed on a bit-plane-by-bit-plane basis, starting with the most significant bit plane. In an exemplary implementation, a turbo decoder is employed. The turbo decoder performs iterative decoding on each bit plane separately using belief propagation techniques such as the Soft-Output Viterbi Algorithm (SOVA), the Maximum A-Posteriori (MAP) algorithm, or a variant of MAP.
Alternatively, an error correction code decoder may employ LDPC codes, Reed-Solomon codes, or a combination of these error correction codes.

The next step in the decoding process is the image reconstructor module 350. In the image reconstructor module 350, the processor 905 takes the decoded bit planes from the error correction code decoder module 340, and the decoded bits corresponding to the same spatial location are concatenated together to reconstruct the quantized version 355 of the acquired image 1010. Each element of this quantized image 355 is a coset index and is used to correct the approximation errors in the input image 310.

The syndrome checker module 360 then compares the pixel values of the input image 310 against the coset indices from the image reconstructor module 350 to generate the decoded image 1235. When implementing the syndrome checker module 360, the processor 905 desirably performs the operation on a pixel-by-pixel basis. For a given pixel location (i, j) in the image 310, if the pixel value Y(i,j) is within the coset X(i,j) from the quantized image 355, then the final pixel value Y'(i,j) of the output image 1235 takes the value of Y(i,j). If Y(i,j) lies outside the coset X(i,j), then the syndrome checker module 360 clips the reconstruction towards the boundary of the coset X(i,j) closest to Y(i,j). This process repeats until all pixels in the output image 1235 are determined. This completes the detailed description of the syndrome decoder module 1230.

Before describing the deconvolution module 1240 in detail, some background is desirable to properly frame the operations to be described. Denote the acquired image 1010 input to the down-sampler module 1030 in Fig. 1 as H(i,j), where (i,j) represents the pixel coordinates of the acquired image 1010.
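The pixel-wise clipping rule of the syndrome checker module 360 described above can be sketched as a one-line decision per pixel. This is an illustrative sketch that models a coset as an interval of admissible values; the name and interval representation are assumptions, not the patent's data structure.

```python
def clip_to_coset(y, coset_low, coset_high):
    # Syndrome checker rule: if the side-information pixel Y(i,j) falls
    # outside the decoded coset interval, clip it to the nearest coset
    # boundary; otherwise keep it unchanged.
    if y < coset_low:
        return coset_low
    if y > coset_high:
        return coset_high
    return y
```

Applying this per pixel yields the decoded image 1235: pixels already consistent with the decoded syndromes pass through, and only inconsistent pixels are corrected.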
The operation of the down-sampler module 1030 can then be given by:

    H̄(i,j) = (H(i,j) ⊗ PSF(i,j)) ↓,

where H̄(i,j) is the down-sampled image, PSF(i,j) is the point spread function, which may be calculated as a space-invariant blur kernel (i.e., the blur kernel is fixed over the entire image or a sub-region of the image), and ⊗ and ↓ represent a convolution and a pixel decimation operation, respectively, where the pixel decimation operation functions by deleting every second pixel value.

Given the PSF information extracted from the bit stream 1110, the deconvolution module 1240 estimates the acquired image 1010 from the blurred image 1235. Ideally, if the blurred image 1235 is identical to the down-sampled image H̄(i,j) output from the down-sampler module 1030, then the operation of the deconvolution module 1240 can be expressed mathematically by:

    H(i,j) = ∫∫ (H̄(u,v) ↑) · PSF(u,v) du dv,

where H̄(u,v) and PSF(u,v) are, respectively, the Fourier representations of the down-sampled image H̄(i,j) and the point spread function PSF(i,j), ↑ represents an up-scaling operation, and · represents a dot product operation in the Fourier domain.

However, in practice, the reconstructed blur image 1235 is not identical to H̄(i,j) due to the distortions introduced by the quantization and/or de-blocking operations in the intraframe encoder 1040 and decoding errors in the syndrome decoder module 1230. Therefore, the image 1245, represented by Ĥ, is only an approximation of the acquired image H(i,j) and has to be estimated by blind deconvolution methods, where pixel values of the acquired image (which is not known a priori) are estimated from its blurred version (i.e., the image 1235) based on prior knowledge of the underlying image characteristics.

Fig. 5 shows the flow diagram of an iterative optimization operation 500 which is performed in a preferred implementation of the deconvolution module 1240.
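The down-sampler equation above, convolution followed by deletion of every second sample, can be sketched in one dimension. This is an illustrative sketch with an assumed border-clamping policy; the patent does not specify border handling.

```python
def downsample(signal, psf):
    # H_bar = (H (x) PSF) down-arrow: convolve with the blur kernel,
    # then decimate by deleting every second sample.
    half = len(psf) // 2
    blurred = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(psf):
            # Clamp indices at the borders (an assumption, not from
            # the patent, which leaves border handling unspecified).
            j = min(max(i + k - half, 0), len(signal) - 1)
            acc += w * signal[j]
        blurred.append(acc)
    return blurred[::2]
```

A normalised kernel such as [0.25, 0.5, 0.25] preserves constant regions exactly, which is the behaviour expected of a blur-then-decimate down-sampler.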
The operation 500 takes two inputs: the blurred image 1235 output from the syndrome decoder 1230, and the PSF from the bit stream 1110 in Fig. 1. The operation 500 performs blind deconvolution that alternates between blur kernel refinement and image restoration. The outputs of the operation 500 are the deblurred image and the refined PSF 1245. As before, the operation 500 can be expressed as software code, stored in the HDD 910 and executed by the processor 905.

In step 510, the operation 500 enhances the first approximation by performing deblurring of the image 1235 based on the given PSF 1110 to give a deblurred image 515. In the exemplary implementation, this deblurring operation is performed using a Richardson-Lucy (RL) deconvolution method. Alternatively, other non-blind deconvolution methods such as Wiener filtering and partial differential equation (PDE) based methods may be employed. In yet another implementation, the blurred image 1235 from the syndrome decoder 1230 can be used directly as the initial estimate of the deblurred image 515.

In steps 520 - 540, the deblurred image 515 is refined iteratively using the PSF 1110. The deblurred image 515 is refined by minimizing the following nonlinear energy model:

    E(Ĥ) ∝ ‖PSF ⊗ Ĥ − H̄‖² + λ(Φ(∂x Ĥ) + Φ(∂y Ĥ)),

where Ĥ is the current estimate of the original image H, H̄ is the blurred image, PSF is the blur kernel, ∂x Ĥ and ∂y Ĥ respectively denote the values of the x- and y-direction gradients, and λ is a weight. Φ(x) is an image prior function which represents the prior knowledge of the underlying image characteristics. In the exemplary implementation, Φ(x) is a two-piece-wise continuous function which defines the prior gradient density distribution of the acquired image.
It is given by

    Φ(x) = −k|x|        for x ≤ l_t,
    Φ(x) = −(ax² + b)   for x > l_t,

where x denotes the image gradient field, l_t indicates the point of discontinuity of Φ(x), and k, a, and b are parameters whose values are determined experimentally based on a set of natural images.

In step 520, the image gradient field is optimized based on the current estimate of the deblurred image Ĥ. Preferably, the optimization of the image gradient field is performed by minimizing the following energy function:

    E'(Ψ) = λ(|Φ(Ψx)| + |Φ(Ψy)|) + γ(‖Ψx − ∂x Ĥ‖² + ‖Ψy − ∂y Ĥ‖²),

where Ψ = (Ψx, Ψy) represents an approximation of the image gradient field, i.e., Ψ ≈ ∂Ĥ. The energy function is then decomposed into a sum of sub-energy terms, i.e.,

    E'(Ψ) = Σ_k (E'(ψ_k,x) + E'(ψ_k,y)),

where k is the pixel index, and E'(ψ_k,v), v ∈ {x, y}, is given by:

    E'(ψ_k,v) = λ|Φ(ψ_k,v)| + γ(ψ_k,v − ∂v Ĥ_k)².

Since each of these sub-energy terms consists of only convex functions, and contains only a single variable ψ_k,v, the terms can be solved independently. In the exemplary implementation, the minimization of each sub-energy term E'(ψ_k,v), v ∈ {x, y}, is conducted using the Least Mean Square (LMS) method.

After updating the image gradient field Ψ in step 520, the deblurred image Ĥ is subject to further refinement in step 530 based on the updated image gradient field. In the exemplary implementation, the refinement of the deblurred image Ĥ is performed by minimizing the following energy function:

    E'(Ĥ) = ‖PSF ⊗ Ĥ − H̄‖² + γ(‖Ψx − ∂x Ĥ‖² + ‖Ψy − ∂y Ĥ‖²).

Such a minimization can be performed efficiently in the Fourier domain, where the energy function is given by:

    E'(F(Ĥ)) = ‖F(PSF)·F(Ĥ) − F(H̄)‖² + γ(‖F(Ψx) − F(∂x)·F(Ĥ)‖² + ‖F(Ψy) − F(∂y)·F(Ĥ)‖²),

where F(.) denotes a forward Fourier transform. The optimal deblurred image H* can then be computed by

    H* = F⁻¹(argmin_{F(Ĥ)} E'(F(Ĥ))),

where F⁻¹(.) denotes an inverse Fourier transform.
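For the |x| branch of the prior, the per-pixel sub-energy of step 520 has a well-known closed-form minimizer, soft shrinkage of the observed gradient. The following sketch is not from the patent (which uses LMS); it only illustrates why each sub-energy term can be solved independently, with lam_k standing for the combined weight λ·k.

```python
def soft_threshold(g, lam_k, gamma):
    # Closed-form minimizer of  lam_k*|psi| + gamma*(psi - g)^2,
    # i.e. the |x| branch of the sub-energy E'(psi_k,v): the observed
    # gradient g is shrunk towards zero by lam_k / (2*gamma).
    shift = lam_k / (2.0 * gamma)
    if g > shift:
        return g - shift
    if g < -shift:
        return g + shift
    return 0.0
```

Setting the derivative lam_k·sign(ψ) + 2γ(ψ − g) to zero gives ψ = g ∓ lam_k/(2γ) when |g| exceeds the shift, and ψ = 0 otherwise, so small gradients (likely noise) are suppressed while large ones (likely edges) are kept.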
Since E'(F(Ĥ)) is a sum of quadratic energies of the unknown F(Ĥ), it is a convex function and can be solved by a classic linear minimization approach. Desirably, an LMS optimization method is used to find the minimum of E'(F(Ĥ)).

The two parameters λ and γ used in steps 520 and 530 enable the trade-off between minimizing the estimation error and preserving image structures. Desirably, λ ∈ [0.02, 0.5], and γ = 2^n for all iterations, where n is the number of the present iteration.

In step 540 of the method 500, the processor 905 determines whether the image gradient field Ψ and the deblurred image H* actually converge. Preferably, the convergences of the image gradient field Ψ and the deblurred image H* are considered to be sufficient if the change of the image gradient field ‖ΔΨ‖² between two consecutive iterations is smaller than 10⁻² and the change of the deblurred image ‖ΔH*‖² is smaller than 1%. In this case, the method 500 proceeds to step 550. Otherwise, the method 500 returns to step 520 to further refine the image gradient field Ψ and the deblurred image H*.

In step 550, the blur kernel of the PSF 1110 is refined to take into account the distortions and artefacts introduced by the intraframe encoder 1040, according to the current estimate of the deblurred image H*. Desirably, the blur kernel, represented by PSF, is refined by minimizing the following energy function:

    E(PSF) = ‖PSF ⊗ H* − H̄‖² + ‖PSF‖².

This energy minimization operation can be performed efficiently in the Fourier domain using linear estimation methods, such as LMS.

In step 560, the processor 905 determines whether the refined blur kernel PSF* converges to a stable solution. In the exemplary implementation, if the difference of the blur kernel PSF* is smaller than 10-', the convergence of the kernel is considered to be sufficient. The system 500 outputs the optimal blur kernel PSF* and the deblurred image H*.
Otherwise, the method 500 returns to step 520 to refine both the deblurred image and the blur kernel. This concludes the description of the preferred method 500 for implementing the deconvolution module 1240. It is noted that the method 500 is iterative, and on each iteration of the method 500, the blur kernel of the input PSF 1110 is refined and is applicable to the subsequent deconvolution operations. However, when deconvolution is re-commenced on the next iteration of the decoder 1200, the blur kernel of the input PSF 1110 is used to commence the deconvolution iterations.

The reconvolution module 1250 is next described in detail with reference to Fig. 1. The reconvolution module 1250 performs an image blur operation using the outputs of the deconvolution module 1240. This operation can be expressed precisely by the following:

    H̃(i,j) = H*(i,j) ⊗ PSF*(i,j),

where (i,j) is the pixel coordinates of the blurred image 1255. This blurred image 1255 is ideally very close to the image 1235. However, in practice, the deconvolution module 1240 tends to exaggerate the estimation errors in the image 1235, and hence the images 1235 and 1255 are generally different. The syndrome decoder module 1230 aims at attenuating the difference between these two images 1235 and 1255 and at generating a better approximation of the acquired image 1010 for the subsequent decoding iterations. The output image 1260 is generated when the difference between the images 1235 and 1255 is smaller than a threshold Tr, or when the maximum number of decoding iterations is reached. In an exemplary implementation, the threshold Tr and the maximum number of decoding iterations are set to 10- and 16 respectively.

In addition, when the decoder 1200 fails to converge, i.e., the difference between the images 1235 and 1255 remains relatively large after some decoding iterations, the initial output image 1235 of the syndrome decoder is output as the final output image 1260.
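The overall decoder loop described above, syndrome decode, deconvolve, reconvolve, repeat until the blurred estimate stabilises, can be sketched as a driver function. The three callables are placeholders for the modules 1230, 1240 and 1250; their names and the convergence test are illustrative assumptions.

```python
def iterative_decode(side_info, correct, unblur, reblur,
                     max_iters=16, tol=1e-3):
    # Skeleton of the Fig. 1 decoder loop: correct the side information
    # with the syndromes (module 1230), deconvolve it (module 1240),
    # reconvolve the result (module 1250), and repeat until the blurred
    # estimate stops changing or max_iters is reached.
    blurred = side_info
    unblurred = None
    for _ in range(max_iters):
        corrected = correct(blurred)       # syndrome decoder 1230
        unblurred = unblur(corrected)      # deconvolution 1240
        new_blurred = reblur(unblurred)    # reconvolution 1250
        diff = max(abs(a - b) for a, b in zip(new_blurred, corrected))
        blurred = new_blurred              # becomes next side information
        if diff < tol:
            break
    return unblurred
```

With identity functions for the three modules the loop converges on the first iteration and returns the side information unchanged, which is the expected degenerate behaviour.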
The foregoing describes only some embodiments of the present invention, and modifications and/or changes can be made thereto without departing from the scope and spirit of the invention, the embodiments being illustrative and not restrictive.

In the context of this specification, the word "comprising" means "including principally but not necessarily solely" or "having" or "including", and not "consisting only of". Variations of the word "comprising", such as "comprise" and "comprises", have correspondingly varied meanings.

Claims (20)

1. A method of encoding image data in a distributed video coding system, said method comprising the steps of:
generating a down-sampled image from an input image according to a point spread function;
generating error correction bits based on said point spread function and said input image using a bitwise error correction method to support up-sampling during decoding; and
at least one of storing and transmitting said point spread function, said down-sampled image, and said error correction bits as the encoded image data.
2. The method according to claim 1, wherein said point spread function is determined using a residual difference of said input image and a filtered representation of said input image.
3. The method according to claim 1, wherein said input image is divided into blocks and said point spread function is determined independently for each block.
4. The method according to claim 1, wherein at least one of a down-sampling scaling factor and a blur kernel is determined by said point spread function.
5. The method according to claim 4, wherein the down-sampling scaling factor is any real number greater than zero.
6. The method according to claim 1, further comprising determining encoding parameters from said point spread function, said encoding parameters being input to said bitwise error correction method with said input image to determine the error correction bits.
7. The method according to claim 6, wherein said bitwise error correction method is non-linear and uses turbo codes to form the error correction bits as parity bits.
8. The method according to claim 7, wherein the encoding parameters include generator polynomials and puncturing schemes.
9. The method according to claim 6, wherein said bitwise error correction method is linear and uses linear codes to form the error correction bits as syndromes.
10. The method according to claim 9, wherein said linear codes are selected from the group consisting of LDPC codes and BCH codes.
11. The method according to claim 10, wherein the encoding parameters include information representing a sparse parity-check matrix.
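For claims 9–11, the syndrome formation of a linear code reduces to a parity-check product over GF(2), s = H·x (mod 2). The sketch below uses a tiny hand-made matrix purely for illustration; a real LDPC parity-check matrix would be much larger, sparse, and chosen for decoding performance.

```python
import numpy as np

# Toy sparse parity-check matrix (illustrative only, not from the patent).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def syndrome(bits):
    """Syndrome of a bit vector under H, computed over GF(2)."""
    return (H @ bits) % 2

x = np.array([1, 0, 1, 1, 0, 1], dtype=np.uint8)
s = syndrome(x)  # -> [0, 1, 1]
```

At the decoder, any received word whose syndrome matches `s` is consistent with `x` up to a codeword, which is what lets the syndrome bits correct the up-sampled approximation.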
12. A method of decoding encoded image data in a distributed video coding system, said method comprising the steps of: generating a first approximation image by up-sampling a portion of the encoded image data; and enhancing said approximation image iteratively using a point spread function of the encoded image data, said enhancing comprising correcting said approximation image during each iteration using error correction bits of the encoded image data.
13. A method according to claim 12, wherein said first approximation image is generated by up-sampling the portion using the point spread function.
14. A method according to claim 12, wherein said approximation image is enhanced by performing an image deconvolution operation including minimizing an energy function using said point spread function.
15. A method according to claim 14, wherein said deconvolution operation is iterative, the method comprising refining said point spread function during said deconvolution operation, and using the refined point spread function for subsequent iterations of said deconvolution operation.
16. A method according to claim 14, wherein said energy function includes said point spread function and an image gradient field.
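An energy function of the kind claim 16 names can be sketched as a data-fidelity term (the estimate re-blurred by the PSF, compared with the observed image) plus a regularizer on the image gradient field. This is a generic deconvolution energy under assumed conventions, not the patent's specific formulation; the circular FFT blur and the quadratic gradient penalty are illustrative choices.

```python
import numpy as np

def circular_blur(image, psf_full):
    """Circular convolution via FFT; psf_full has the same shape as image."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf_full)))

def energy(estimate, psf_full, observed, lam=0.01):
    """Illustrative energy: ||estimate * psf - observed||^2 + lam * ||grad||^2.

    The first term ties the deblurred estimate to the observation through
    the point spread function; the second penalizes the image gradient
    field, discouraging ringing in the deconvolved result."""
    data = np.sum((circular_blur(estimate, psf_full) - observed) ** 2)
    gy, gx = np.gradient(estimate)
    return data + lam * np.sum(gx ** 2 + gy ** 2)
```

Minimizing this energy over `estimate` (e.g. by gradient descent) is one standard way to realize the deconvolution operation of claim 14.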
17. An encoder for image data in a distributed video coding system, said encoder comprising: a down sampler configured to generate a down-sampled image from an input image according to a point spread function; a generator for generating error correction bits based on said point spread function and said input image using a bitwise error correction method to support up-sampling during decoding; and at least one of a store and a transmitter for respectively storing and transmitting said point spread function, said down-sampled image, and said error correction bits as encoded image data.
18. A decoder for encoded image data in a distributed video coding system, said decoder comprising: an up sampler configured to generate a first approximation image by up-sampling a portion of the encoded image data; and an enhancer arrangement configured to iteratively enhance said approximation image using a point spread function of the encoded image data, said enhancing comprising correcting said approximation image during each iteration using error correction bits of the encoded image data.
19. A distributed video coding system comprising: at least one encoder according to claim 17; at least one decoder according to claim 18; and at least one of a store and a transmission network interconnecting the at least one encoder and the at least one decoder.
20. At least one of an encoder for a distributed video coding system, a decoder for a distributed video coding system, and a distributed video coding system, substantially as described herein with reference to any one of the embodiments as that embodiment is illustrated in the drawings.

Dated this 14th day of October 2009
CANON KABUSHIKI KAISHA
Patent Attorneys for the Applicant
Spruson & Ferguson
AU2009225320A 2009-10-14 2009-10-14 Method of decoding image using iterative DVC approach Abandoned AU2009225320A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2009225320A AU2009225320A1 (en) 2009-10-14 2009-10-14 Method of decoding image using iterative DVC approach

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2009225320A AU2009225320A1 (en) 2009-10-14 2009-10-14 Method of decoding image using iterative DVC approach

Publications (1)

Publication Number Publication Date
AU2009225320A1 true AU2009225320A1 (en) 2011-04-28

Family

ID=43939870

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2009225320A Abandoned AU2009225320A1 (en) 2009-10-14 2009-10-14 Method of decoding image using iterative DVC approach

Country Status (1)

Country Link
AU (1) AU2009225320A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107274445A (en) * 2017-05-19 2017-10-20 华中科技大学 Image depth estimation method and system
CN107274445B (en) * 2017-05-19 2020-05-19 华中科技大学 Image depth estimation method and system

Similar Documents

Publication Publication Date Title
AU2006204634B2 (en) Runlength encoding of leading ones and zeros
AU2006230691B2 (en) Video Source Coding with Decoder Side Information
KR102165155B1 (en) Adaptive interpolation for spatially scalable video coding
AU2008246243B2 (en) DVC as generic file format for plenoptic camera
US9014278B2 (en) For error correction in distributed video coding
AU2008240343B2 (en) Rate-distortion control in DVC with no feedback
WO2007040765A1 (en) Content adaptive noise reduction filtering for image signals
CN115606179A (en) CNN filter for learning-based downsampling for image and video coding using learned downsampling features
WO2016040255A1 (en) Self-adaptive prediction method for multi-layer codec
US10832383B2 (en) Systems and methods for distortion removal at multiple quality levels
US8243821B2 (en) For spatial Wyner Ziv coding
JP5882450B2 (en) Method and apparatus for adaptive loop filter with constrained filter coefficients
AU2006204632B2 (en) Parallel concatenated code with bypass
CN115552905A (en) Global skip connection based CNN filter for image and video coding
US9407293B2 (en) Wyner ziv coding
US8594196B2 (en) Spatial Wyner Ziv coding
AU2010202963A1 (en) Frame rate up-sampling for multi-view video coding using distributing video coding principles
AU2009225320A1 (en) Method of decoding image using iterative DVC approach
US20100309988A1 (en) Error correction in distributed video coding
Li et al. Video Error‐Resilience Encoding and Decoding Based on Wyner‐Ziv Framework for Underwater Transmission
Geetha et al. Design and implementation of distributed video coding architecture
Okuda et al. Raw image encoding based on polynomial approximation
CN115988201A (en) Method, apparatus, electronic device and storage medium for encoding film grain
AU2006252250A1 (en) Improvement for spatial wyner ziv coding
Sevcenco et al. Combined adaptive and averaging strategies for JPEG-based low bit-rate image coding

Legal Events

Date Code Title Description
MK4 Application lapsed section 142(2)(d) - no continuation fee paid for the application