EP2687011A1 - Method for reconstructing and coding an image block - Google Patents

Method for reconstructing and coding an image block

Info

Publication number
EP2687011A1
Authority
EP
European Patent Office
Prior art keywords
parameter
prediction
current block
neighbouring area
block
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12707349.2A
Other languages
German (de)
French (fr)
Inventor
Edouard Francois
Dominique Thoreau
Fabien Racape
Aurélie Martin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Application filed by Thomson Licensing SAS
Publication of EP2687011A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/102: characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103: Selection of coding mode or of prediction mode
    • H04N 19/105: Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N 19/134: characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/137: Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N 19/146: Data rate or code amount at the encoder output
    • H04N 19/147: Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N 19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N 19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N 19/169: characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17: the unit being an image region, e.g. an object
    • H04N 19/172: the region being a picture, frame or field
    • H04N 19/176: the region being a block, e.g. a macroblock
    • H04N 19/189: characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N 19/19: using optimisation based on Lagrange multipliers
    • H04N 19/50: using predictive coding
    • H04N 19/503: involving temporal prediction
    • H04N 19/51: Motion estimation or motion compensation
    • H04N 19/577: Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N 19/60: using transform coding
    • H04N 19/61: in combination with predictive coding
    • H04N 19/70: characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the invention relates to the general domain of image coding.
  • the invention relates to a method for coding a block of an image and a corresponding method for reconstructing such a block.
  • the majority of coding/decoding methods use prediction between images (inter-image prediction) or prediction in the image (intra-image prediction).
  • a prediction is used to improve the compression of a sequence of images. It consists in generating a prediction image for a current image to be coded and in coding the difference between the current image and the prediction image, also called the residual image.
  • the prediction loses its efficiency when there is a variation in luminosity between the images of the sequence or inside an image. Such a luminosity variation is for example due to a modification of illumination, to fade effects, to flashes, etc.
  • it is also known to calculate weighted prediction parameters for a current block from a causal neighbouring area of the block, i.e. an area comprising pixels coded and reconstructed before the current block.
  • This method is less costly in terms of bitrate as the weighted prediction parameters are not coded explicitly in the stream.
  • Such parameters are called implicit parameters.
  • the purpose of the invention is to overcome at least one of the disadvantages of the prior art.
  • the invention relates to a method for reconstruction of a current block of an image predicted from at least one reference block, the current block being in the form of coded data.
  • the method for reconstruction comprises steps for:
  • the method for reconstruction according to the invention enables a permanent adaptation to the local variations of signal properties, in particular illumination variations.
  • the limits of the explicit coding of illumination variation parameters can thus be compensated. Indeed, they can be locally sub-optimal due to the search for the best bitrate-distortion compromise.
  • the placing into competition of a second, implicit model enables this fault to be corrected in certain cases, without introducing an extra coding cost.
  • the first distortion respectively the second distortion is calculated according to the following steps for:
  • the first causal neighbouring area is different from the second causal neighbouring area.
  • the term "different" does not signify that the two neighbouring areas are necessarily separate.
  • the selection of parameters on a second causal neighbouring area different from the first causal neighbouring area makes it possible to limit the advantage that the implicit model, calculated on the first causal neighbouring area, would otherwise have over the explicit model, whose parameters are adapted not to the first causal neighbouring area but to the current block.
  • the first causal neighbouring area and the second causal neighbouring area are separate.
  • the first causal neighbouring area is determined by selecting pixels belonging to neighbouring blocks of the current block for each of which the decoded explicit parameter is similar to the explicit parameter of the current block.
  • a decoded explicit parameter is similar to the explicit parameter of the current block when the norm of the difference between the explicit parameters is less than a threshold value.
  • the reference block belongs to the same image as the current block.
  • the reference block belongs to an image different from the image to which the current block belongs.
  • the invention also relates to a method for coding a current block of an image sequence by prediction using at least one reference block.
  • the coding method comprises the following steps for: - calculating at least one weighted prediction parameter, called the explicit parameter, using the current block and the reference block,
  • the explicit parameter is systematically coded.
  • FIG. 1 shows a current image Ic to which belongs a current block Be to be coded (respectively to be reconstructed) and a reference image Ir,
  • FIG. 2 shows a method for reconstruction of an image block according to the invention
  • FIG. 3 shows a method for coding of an image block according to the invention
  • FIG. 4 shows an image block coding device according to the invention
  • FIG. 5 shows a device for decoding a stream of coded data for the reconstruction of an image block according to the invention.
  • the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof.
  • the present principles may be implemented as a combination of hardware and software.
  • the software is preferably implemented as an application program tangibly embodied on a program storage device.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s).
  • CPU central processing units
  • RAM random access memory
  • I/O input/output
  • the computer platform also includes an operating system and microinstruction code.
  • various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) that is executed via the operating system.
  • various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.
  • the invention may be realized on any electronic device comprising coding or decoding means correspondingly adapted.
  • the invention may be realized in a television, a mobile videophone, a personal computer, a digital video camera, a navigation system or a car video system.
  • An image sequence is a series of several images. Each image comprises pixels or image points, with each of which is associated at least one item of image data.
  • An item of image data is for example an item of luminance data or an item of chrominance data.
  • motion data is to be understood in the widest sense. It comprises the motion vectors and possibly the reference image indexes enabling a reference image to be identified in the image sequence. It can also comprise an item of information indicating the interpolation type used to determine the prediction block. In fact, in the case where the motion vector associated with a block Be does not have integer coordinates, the image data must be interpolated in the reference image Iref to determine the prediction block.
  • the motion data associated with a block are generally calculated by a motion estimation method, for example by block matching. However, the invention is in no way limited by the method enabling a motion vector to be associated with a block.
  • residual data signifies data obtained after extraction of other data.
  • the extraction is generally a subtraction pixel by pixel of prediction data from source data. However, the extraction is more general and comprises notably a weighted subtraction.
  • residual data is synonymous with the term “residue”.
  • a residual block is a block of pixels with which residual data is associated.
  • prediction data signifies data used to predict other data.
  • a prediction block is a block of pixels with which prediction data is associated.
  • a prediction block is obtained from a block or several blocks of the same image as the image to which belongs the block that it predicts (spatial prediction or intra-image prediction) or from one (mono-directional prediction) or several blocks (bi-directional or bi-predicted prediction) of a different image (temporal prediction or inter-image prediction) of the image to which the block that it predicts belongs.
  • prediction mode specifies the way in which the block is coded.
  • the prediction modes there is the INTRA mode that corresponds to a spatial prediction and the INTER mode that corresponds to a temporal prediction.
  • the prediction mode can specify the way in which the block is partitioned to be coded.
  • the 8x8 INTER prediction mode associated with a block of size 16x16 signifies that the 16x16 block is partitioned into 4 8x8 blocks and predicted by temporal prediction.
  • reconstructed data signifies data obtained after merging of residual data with prediction data.
  • the merging is generally a sum pixel by pixel of prediction data to residual data. However, merging is more general and comprises notably the weighted sum.
  • a reconstructed block is a block of pixels with which reconstructed image data is associated.
  • a neighbouring block or neighbouring template of a current block is a block respectively a template situated in a more or less large neighbourhood of the current block but not necessarily adjacent to this current block.
  • the term coding is to be taken in the widest sense.
  • the coding can possibly but not necessarily comprise the transformation and/or the quantization of image data.
  • FIG. 1 shows a current image Ic in which is located a current block Be with which are associated a first causal neighbouring area Lc1 and a second causal neighbouring area Lc2.
  • with the current block Be is associated a reference block Br of a reference image Ir (temporal prediction or INTER).
  • To this reference block Br is associated a first neighbouring area Lr1 corresponding to the neighbouring area Lc1 of Be and a second neighbouring area Lr2 corresponding to the neighbouring area Lc2 of Be.
  • the reference block Br is identified using a motion vector mv associated with the current block.
  • the neighbouring areas Lr1 and Lr2 occupy the same positions with respect to Br as the neighbouring areas Lc1 and Lc2 with respect to Be.
  • the reference block Br belongs to the same image as the current block (spatial prediction or INTRA).
  • the method for reconstructing (respectively for coding) according to the invention enables the use of the two types of weighted prediction parameters, i.e. implicit parameters and explicit parameters.
  • the method for reconstruction advantageously enables not having to signal in the stream of coded data the type of parameter used.
  • the selection of one or other type of weighted prediction parameters is carried out both via the coding method and via the reconstruction method according to a causal neighbouring area of the current block to be reconstructed (respectively to be coded) and of the corresponding neighbouring area associated with a reference block.
  • explicit parameter is synonymous with the term explicit weighted prediction parameter and the term implicit parameter is synonymous with implicit weighted prediction parameter.
  • the invention relates to a method for reconstruction of a current block Be predicted from at least one reference block Br, the current block being in the form of coded data.
  • the reference block Br belongs to a reference image Ir other than the current image Ic, or to the current image Ic.
  • a weighted prediction parameter wp1 is decoded from coded data.
  • the decoding implements for example the H.264 standard described in the document ISO/IEC 14496-10. However, the invention is in no way limited by the method for coding the explicit parameter wp1.
  • a weighted prediction parameter wp2 is calculated from a first causal neighbouring area Lc1 of the current block Be and a first corresponding neighbouring area Lr1 of the reference block Br.
  • This parameter is determined according to a known method, for example according to that described in the patent application WO/2007/094792 or according to the method described in the patent application WO/2010/086393.
  • wp2 is equal to the average of luminance values associated with the pixels of Lc1 divided by the average of luminance values associated with the pixels of Lr1.
  • the pixels of the first neighbouring area Lc1 are determined from the explicit parameter wp1.
  • note for example wp1_c the explicit parameter of the current block. Only the pixels of neighbouring blocks of Be whose explicit parameters wp1 are similar to that of the current block, wp1_c, are taken into account in Lc1.
  • neighbouring blocks whose explicit parameters correspond to very different illumination variations can in fact be considered as non-coherent with the current block.
  • a weighted prediction parameter wp1 is similar to the weighted prediction parameter wp1_c of said current block when the norm, for example the absolute value, of the difference between the weighted prediction parameters is less than a threshold value ε, i.e. |wp1 - wp1_c| < ε.
  • first and second distortions are calculated.
  • the first distortion D1 results from the prediction of a second causal neighbouring area Lc2 of the current block taking into account the explicit parameter wp1.
  • the second distortion D2 results from the prediction, taking account of the implicit parameter wp2, of the second causal neighbouring area Lc2 of the current block.
  • Ic2 represents the luminance values and/or chrominance values associated with the pixels of Lc2
  • Ir2 represents the luminance and/or chrominance values associated with the pixels of Lr2
  • Ip represents the luminance and/or chrominance values associated with the pixels of the prediction signal.
  • the second neighbouring area Lc2 of the current block is thus predicted using a second corresponding neighbouring area Lr2 of the reference block while taking account of the explicit prediction parameter respectively of the implicit prediction parameter.
  • the first distortion respectively the second distortion is thus calculated between the prediction Ip and the second neighbouring area of the current block.
  • the first causal neighbouring area Lc1 and the second causal neighbouring area Lc2 are separate, i.e. having no pixel in common.
  • the two neighbouring areas are different but not separate.
  • the first and second distortions are compared.
  • a weighted prediction parameter is selected according to the result of the comparison. For example, if a·D2 + b < D1, the parameter wp2 is selected; otherwise wp1 is selected.
  • the parameters a and b are either predefined or determined according to the signal.
  • a and b are determined according to the signal.
  • Ip(Ir2, wpi) = Ic2 + e with e ~ N(0, σ²), where N(0, σ²) is a normal distribution of zero mean and variance σ².
  • D1/σ² and D2/σ² are both distributed according to a χ² distribution with the same number of degrees of freedom n, n corresponding to the number of pixels of the neighbouring area Lc2 (or Lr2).
  • D2/D1 then follows a Fisher distribution with (n, n) degrees of freedom.
  • the parameter b is always zero. The parameter a depends on the tolerated error rate (i.e. the probability of accepting the hypothesis when it is false) and on the number of samples of the neighbouring area Lc2. Typically, for an error rate of 5%: a = 3.44 for n = 8, a = 2.4 for n = 16, a = 1.85 for n = 32, and a = 1.5 for n = 64.
  • the current block is reconstructed according to the weighted prediction parameter selected in step 28. For example, if the explicit parameter is selected then Be is reconstructed by merging a residual block associated with Be with a prediction block Bp, where the prediction block is determined according to the reference block Br and the parameter wp1.
  • the invention relates to a method for coding a current block Be predicted from at least one reference block Br.
  • a weighted prediction parameter wp1 is calculated.
  • the explicit parameter is for example calculated using image data (luminance and/or chrominance) of the current block and image data of the reference block Br.
  • wp1 is equal to the average of luminance values associated with the pixels of Be divided by the average of luminance values associated with the pixels of Br.
  • This parameter, if it is selected, must be coded.
  • the decoder, when it reconstructs a current block Be, does not have available to it the image data of the current block and thus cannot calculate wp1. It is for this reason that it is called an explicit weighted prediction parameter.
  • the parameter wp1 is systematically coded even in the case where it is not selected. It notably enables the neighbouring area Lc1 to be determined.
  • a weighted prediction parameter wp2 is calculated from a first causal neighbouring area Lc1 of the current block Be and a first corresponding neighbouring area Lr1 of the reference block Br.
  • This parameter is determined according to a known method, for example according to that described in the patent application WO/2007/094792 or according to the method described in the patent application WO/2010/086393.
  • the pixels of the first neighbouring area Lc1 are determined from the explicit parameter wp1.
  • note for example wp1_c the explicit parameter of the current block. Only the pixels of neighbouring blocks of Be whose explicit parameters wp1 are similar to that of the current block, wp1_c, are taken into account in Lc1.
  • neighbouring blocks whose explicit parameters correspond to very different illumination variations can in fact be considered as non-coherent with the current block.
  • a weighted prediction parameter wp1 is similar to the weighted prediction parameter wp1_c of said current block when the norm, for example the absolute value, of the difference between the weighted prediction parameters is less than a threshold value ε, i.e. |wp1 - wp1_c| < ε.
  • first and second distortions are calculated.
  • the first distortion D1 results from the prediction of a second causal neighbouring area Lc2 of the current block taking into account the explicit parameter wp1.
  • the second distortion D2 results from the prediction, taking account of the implicit parameter wp2, of the second causal neighbouring area Lc2 of the current block.
  • Ic2 represents the luminance values and/or chrominance values associated with the pixels of Lc2
  • Ir2 represents the luminance and/or chrominance values associated with the pixels of Lr2
  • Ip represents the luminance and/or chrominance values associated with the pixels of the prediction signal.
  • the second neighbouring area Lc2 of the current block is thus predicted using a second corresponding neighbouring area Lr2 of the reference block while taking account of the explicit prediction parameter respectively of the implicit prediction parameter.
  • the first distortion respectively the second distortion is thus calculated between the prediction Ip and the second neighbouring area of the current block.
  • the first causal neighbouring area Lc1 and the second causal neighbouring area Lc2 are separate.
  • the two neighbouring areas are different but not separate.
  • the first and second distortions are compared.
  • a weighted prediction parameter is selected according to the result of the comparison. For example, if a·D2 + b < D1, the parameter wp2 is selected; otherwise wp1 is selected.
  • a and b are either predefined or determined according to the signal.
  • a and b are determined according to the signal. Starting from the hypothesis that the parameters wp1 and wp2 are similar, it is then considered that the prediction signal obtained by either the implicit or the explicit method corresponds to the signal of Lc2 to which is added an uncorrelated Gaussian white noise of the same variance σ², i.e.
  • Ip(Ir2, wpi) = Ic2 + e with e ~ N(0, σ²), where N(0, σ²) is a normal distribution of zero mean and variance σ².
  • D1/σ² and D2/σ² are both distributed according to a χ² distribution with the same number of degrees of freedom n, n corresponding to the number of pixels of the neighbouring area Lc2 (or Lr2).
  • D2/D1 then follows a Fisher distribution with (n, n) degrees of freedom.
  • the parameter b is always zero. The parameter a depends on the tolerated error rate (i.e. the probability of accepting the hypothesis when it is false) and on the number of samples of the neighbouring area Lc2. Typically, for an error rate of 5%: a = 3.44 for n = 8, a = 2.4 for n = 16, a = 1.85 for n = 32, and a = 1.5 for n = 64.
  • the current block Be is coded according to the weighted prediction parameter selected in step 40. For example, if the explicit parameter wp1 is selected then Be is coded by extracting from the current block Be a prediction block Bp. The differential block or residual block thus obtained is then coded.
  • the prediction block Bp is determined according to the reference block Br and the parameter wp1.
  • the invention further relates to a coding device 12 described with reference to figure 4.
  • the coding device 12 receives at input images I belonging to a sequence of images. Each image is divided into blocks of pixels, with each of which is associated at least one item of image data, e.g. luminance and/or chrominance data.
  • the coding device 12 notably implements a coding with temporal prediction.
  • the coding device 12 notably comprises a calculation module ADD1 capable of subtracting a prediction block Bp pixel by pixel from a current block Be to generate a residual image data block, or residual block, denoted res (a minimal sketch of this data path is given at the end of this section). It also comprises a module TQ able to transform then quantize the residual block res into quantized data.
  • the transform T is for example a discrete cosine transform (or DCT).
  • the coding device 12 further comprises an entropy coding module COD able to code the quantized data into a stream F of coded data.
  • the module ITQ implements an inverse quantization Q⁻¹ followed by an inverse transform T⁻¹.
  • the module ITQ is connected to a calculation module ADD2 able to add pixel by pixel the block of data from the ITQ module and the prediction block Bp to generate a block of reconstructed image data that is stored in a memory MEM.
  • the coding device 12 further comprises a motion estimation module ME capable of estimating at least one motion vector between the block Be and a block of a reference image Ir stored in the memory MEM, this image having previously been coded then reconstructed.
  • the motion estimation can be carried out between the current block Be and the original reference image, in which case the memory MEM is not connected to the motion estimation module ME.
  • the motion estimation module searches the reference picture Ir for a motion vector in such a manner as to minimize an error calculated between the current block Be and a block in the reference image Ir identified by means of said motion vector.
  • the motion data are transmitted by the motion estimation module ME to a decision module DECISION able to select a coding mode for the block Be in a predefined set of coding modes.
  • the coding mode retained is for example that which minimizes a bitrate-distortion type criterion.
  • the invention is not restricted to this selection method and the mode retained can be selected according to another criterion for example an a priori type criterion.
  • the coding mode selected by the decision module DECISION as well as the motion data, for example the motion vector(s) in the case of the temporal prediction mode or INTER mode are transmitted to a prediction module PRED.
  • the motion vector(s) and the coding mode selected are also transmitted to the entropy coding module COD to be coded in the stream F. If an INTER prediction mode is retained by the decision module DECISION, the prediction module PRED then determines in the reference image Ir previously reconstructed and stored in the memory MEM, the prediction block Bp from the motion vector determined by the motion estimation module ME and the coding mode determined by the decision module DECISION. If an INTRA prediction mode is retained by the decision module DECISION, the prediction module PRED determines in the current image, from among the blocks previously coded and stored in the memory MEM, the prediction block Bp.
  • the prediction module PRED is able to determine the prediction block Bp taking into account a luminosity variation model defined by weighted prediction parameters representative of a luminosity variation between the images of the sequence or inside an image.
  • the coding device 12 comprises a module for selection of weighted prediction parameters SELECT1.
  • the selection module SELECT1 implements the steps 32 to 40 of the method described in reference to figure 3.
  • the invention further relates to a decoding device 13 described with reference to figure 5.
  • the decoding device 13 receives at the input a stream F of coded data representative of a sequence of images.
  • the stream F is for example transmitted by a coding device 12.
  • the decoding device 13 comprises an entropy decoding module DEC able to generate decoded data, for example coding modes and decoded data relating to the content of images.
  • the decoding device 13 also comprises a motion data reconstruction module.
  • the motion data reconstruction module is the entropy decoding module DEC that decodes a part of the stream F representative of said motion vectors.
  • the motion data reconstruction module is a motion estimation module.
  • This solution for reconstructing motion data via the decoding device 13 is known as "template matching".
  • the decoded data relating to the content of images is then transmitted to a module ITQ able to carry out an inverse quantization followed by an inverse transform.
  • the ITQ module is identical to the ITQ module of the coding device 12 having generated the coded stream F.
  • the ITQ module is connected to a calculation module ADD able to add pixel by pixel the block from the ITQ module and the prediction block Bp to generate a block of reconstructed image data that are stored in a memory MEM.
  • the decoding device 13 also comprises a prediction module PRED identical to the prediction module PRED of the coding device 12.
  • the prediction module PRED determines in a reference image Ir previously reconstructed and stored in the memory MEM, the prediction block Bp from the motion vector MV and the coding mode decoded for the current block Be by the entropy decoding module DEC. If an INTRA prediction mode is decoded, the prediction module PRED determines in the current image, from among the blocks previously reconstructed and stored in the memory MEM, the prediction block Bp.
  • the prediction module PRED is able to determine the prediction block Bp taking into account a luminosity variation model defined by weighted prediction parameters representative of a luminosity variation between the images of the sequence or inside an image.
  • the decoding device 13 comprises a module for selection of weighted prediction parameters SELECT2.
  • the selection module SELECT2 implements the steps 22 to 28 of the method for reconstruction described in reference to figure 2.
  • Step 20 for decoding of wp1 is preferably implemented by the entropy decoding module DEC.
  • a parameter set J1 can thus comprise a multiplicative weighting parameter wp1 and an offset o1.
  • more complex parameter sets J2 can be implemented.
  • the parameter sets are not necessarily the same.
  • the methods for coding and reconstruction are described for a single reference block Br. However, the invention can apply to a current block with which several reference blocks are associated (case of bi-directional prediction for example).
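To make the data path of the coding device described above concrete, here is a minimal sketch of the ADD1/TQ/ITQ/ADD2 loop around figure 4, assuming a whole-block DCT and a simple scalar quantizer (real codecs transform smaller sub-blocks, and the entropy coding module COD is omitted); all names are illustrative, not taken from the patent:

    import numpy as np
    from scipy.fft import dctn, idctn

    def encode_block(bc, bp, qstep):
        # ADD1: residual block = current block minus prediction block
        res = bc - bp
        # TQ: transform (DCT) then scalar quantization
        coeffs = np.round(dctn(res, norm="ortho") / qstep)
        # ITQ: inverse quantization then inverse transform
        rec_res = idctn(coeffs * qstep, norm="ortho")
        # ADD2: reconstructed block, stored in MEM for later predictions
        rec = rec_res + bp
        return coeffs, rec  # coeffs go to the entropy coder COD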

Abstract

A method for reconstruction of a current block predicted from at least one reference block is described. The method comprises the steps for: - decoding (20) at least one weighted prediction parameter, called an explicit parameter (wp1), from coded data, - calculating (22) at least one weighted prediction parameter, called an implicit parameter (wp2), from a first causal neighbouring area (Lc1) of the current block (Be) and a first corresponding neighbouring area (Lr1) of the reference block (Br), - calculating (24) a first distortion (D1) resulting from the prediction of a second causal neighbouring area (Lc2) of the current block with the explicit parameter and a second distortion (D2) resulting from the prediction of said second causal neighbouring area (Lc2) with the implicit parameter, - comparing (26) the first and second distortions, - selecting (28) a parameter between the explicit parameter and the implicit parameter according to the result of the comparison, and - reconstructing (30) the current block using the selected parameter.

Description

METHOD FOR RECONSTRUCTING AND CODING
AN IMAGE BLOCK
1. Domain of the invention
The invention relates to the general domain of image coding.
More specifically, the invention relates to a method for coding a block of an image and a corresponding method for reconstructing such a block.
2. Prior art
The majority of coding/decoding methods use prediction between images (inter-image prediction) or prediction in the image (intra-image prediction). Such a prediction is used to improve the compression of a sequence of images. It consists in generating a prediction image for a current image to be coded and in coding the difference between the current image and the prediction image, also called the residual image. The more the prediction image is correlated with the current image, the lower the number of bits required to code the current image and therefore the more effective the compression. However, the prediction loses its efficiency when there is a variation in luminosity between the images of the sequence or inside an image. Such a luminosity variation is for example due to a modification of illumination, to fade effects, to flashes, etc.
Methods for coding/decoding image sequences are known that take into account an overall variation in luminosity. Hence, within the framework of the standard H.264 described in the document ISO/IEC 14496-10, it is known to use a weighted prediction method in order to improve the compression in the case of a variation in luminosity. The weighted prediction parameters are explicitly transmitted per image slice. The illumination correction is applied identically for all of the blocks of the slice. According to this method, it is possible to transmit weighted prediction parameters for each block. Such parameters are called explicit parameters.
It is also known to calculate such weighted prediction parameters for a current block from a causal neighbouring area of the block, i.e. an area comprising pixels coded and reconstructed before the current block. This method is less costly in terms of bitrate as the weighted prediction parameters are not coded explicitly in the stream. Such parameters are called implicit parameters.
The efficiency of the two methods, i.e. that using parameters explicitly coded in the stream and that using implicit parameters, strongly depends on the content processed, specifically on the local spatio-temporal variations, and on decoding configurations such as the targeted bitrate.
Summary of the invention
The purpose of the invention is to overcome at least one of the disadvantages of the prior art.
The invention relates to a method for reconstruction of a current block of an image predicted from at least one reference block, the current block being in the form of coded data. The method for reconstruction comprises steps for:
- decoding at least one weighted prediction parameter, called an explicit parameter, from coded data,
- calculating at least one weighted prediction parameter, called an implicit parameter, from a first causal neighbouring area of the current block and a first corresponding neighbouring area of the reference block,
- calculating a first distortion resulting from the prediction of a second causal neighbouring area of the current block with the explicit parameter and a second distortion resulting from the prediction of the second causal neighbouring area with the implicit parameter,
- comparing the first and second distortions,
- selecting a parameter between the explicit parameter and the implicit parameter according to the result of the comparison, and
- reconstructing the current block using the selected parameter.
Advantageously, the method for reconstruction according to the invention enables a permanent adaptation to the local variations of signal properties, in particular illumination variations. The limits of the explicit coding of illumination variation parameters can thus be compensated. Indeed, they can be locally sub-optimal due to the search for the best bitrate-distortion compromise. The placing into competition of a second, implicit model enables this fault to be corrected in certain cases, without introducing an extra coding cost.
According to an aspect of the invention, the first distortion, respectively the second distortion, is calculated according to the following steps for:
- predicting the second neighbouring area of the current block using a second corresponding neighbouring area of the reference block while taking account of the explicit prediction parameter respectively of the implicit prediction parameter, and
- calculating the first distortion respectively the second distortion between the prediction and the second neighbouring area of the current block.
According to a particular characteristic of the invention, the first causal neighbouring area is different from the second causal neighbouring area. The term "different" does not signify that the two neighbouring areas are necessarily separate. Selecting the parameters on a second causal neighbouring area different from the first causal neighbouring area makes it possible to limit the advantage that the implicit model, calculated on the first causal neighbouring area, would otherwise have over the explicit model, whose parameters are adapted not to the first causal neighbouring area but to the current block.
According to another particular characteristic of the invention, the first causal neighbouring area and the second causal neighbouring area are separate.
Advantageously, the first causal neighbouring area is determined by selecting pixels belonging to neighbouring blocks of the current block for each of which the decoded explicit parameter is similar to the explicit parameter of the current block. A decoded explicit parameter is similar to the explicit parameter of the current block when the norm of the difference between the explicit parameters is less than a threshold value.
According to a particular characteristic of the invention, the reference block belongs to the same image as the current block.
According to a variant, the reference block belongs to an image different from the image to which the current block belongs.
The invention also relates to a method for coding a current block of an image sequence by prediction using at least one reference block. The coding method comprises the following steps for: - calculating at least one weighted prediction parameter, called the explicit parameter, using the current block and the reference block,
- calculating at least one weighted prediction parameter, called an implicit parameter, from a first causal neighbouring area of the current block and a first corresponding neighbouring area of the reference block,
- calculating a first distortion resulting from the prediction of a second causal neighbouring area of the current block with the explicit parameter and a second distortion resulting from the prediction of the second causal neighbouring area with the implicit parameter,
- comparing the first and second distortions, and
- selecting a parameter between the explicit parameter and the implicit parameter according to the result of the comparison, and
- coding the current block using the selected parameter.
Advantageously, the explicit parameter is systematically coded.
3. List of figures
The invention will be better understood and illustrated by means of embodiments and advantageous implementations, by no means limiting, with reference to the figures in the appendix, wherein:
- figure 1 shows a current image Ic to which belongs a current block Be to be coded (respectively to be reconstructed) and a reference image Ir,
- figure 2 shows a method for reconstruction of an image block according to the invention,
- figure 3 shows a method for coding of an image block according to the invention,
- figure 4 shows an image block coding device according to the invention, and
- figure 5 shows a device for decoding a stream of coded data for the reconstruction of an image block according to the invention.
4. Detailed description of the invention
It is to be understood that the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or a combination thereof. Preferably, the present principles may be implemented as a combination of hardware and software. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage device. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (CPU), a random access memory (RAM), and input/output (I/O) interface(s). The computer platform also includes an operating system and microinstruction code. The various processes and functions described herein may either be part of the microinstruction code or part of the application program (or a combination thereof) that is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform such as an additional data storage device and a printing device.
The invention may be realized on any electronic device comprising coding or decoding means correspondingly adapted. For instance, the invention may be realized in a television, a mobile videophone, a personal computer, a digital video camera, a navigation system or a car video system.
An image sequence is a series of several images. Each image comprises pixels or image points, with each of which is associated at least one item of image data. An item of image data is for example an item of luminance data or an item of chrominance data.
The term "motion data" is to be understood in the widest sense. It comprises the motion vectors and possibly the reference image indexes enabling a reference image to be identified in the image sequence. It can also comprise an item of information indicating the interpolation type used to determine the prediction block. In fact, in the case where the motion vector associated with a block Be does not have integer coordinates, the image data must be interpolated in the reference image Iref to determine the prediction block. The motion data associated with a block are generally calculated by a motion estimation method, for example by block matching. However, the invention is in no way limited by the method enabling a motion vector to be associated with a block.
The term "residual data" signifies data obtained after extraction of other data. The extraction is generally a subtraction pixel by pixel of prediction data from source data. However, the extraction is more general and comprises notably a weighted subtraction. The term "residual data" is synonymous with the term "residue". A residual block is a block of pixels with which residual data is associated.
The term "prediction data" signifies data used to predict other data. A prediction block is a block of pixels with which prediction data is associated. A prediction block is obtained from a block or several blocks of the same image as the image to which belongs the block that it predicts (spatial prediction or intra-image prediction) or from one (mono-directional prediction) or several blocks (bi-directional or bi-predicted prediction) of a different image (temporal prediction or inter-image prediction) of the image to which the block that it predicts belongs.
The term "prediction mode" specifies the way in which the block is coded. Among the prediction modes, there is the INTRA mode that corresponds to a spatial prediction and the INTER mode that corresponds to a temporal prediction. The prediction mode can specify the way in which the block is partitioned to be coded. Thus, the 8x8 INTER prediction mode associated with a block of size 16x16 signifies that the 16x16 block is partitioned into 4 8x8 blocks and predicted by temporal prediction.
The term "reconstructed data" signifies data obtained after merging of residual data with prediction data. The merging is generally a sum pixel by pixel of prediction data to residual data. However, merging is more general and comprises notably the weighted sum. A reconstructed block is a block of pixels with which reconstructed image data is associated.
A neighbouring block or neighbouring template of a current block is a block respectively a template situated in a more or less large neighbourhood of the current block but not necessarily adjacent to this current block.
The term coding is to be taken in the widest sense. The coding can possibly but not necessarily comprise the transformation and/or the quantization of image data.
In figure 1 is shown a current image Ic in which is located a current block Be with which are associated a first causal neighbouring area Lc1 and a second causal neighbouring area Lc2. With the current block Be is associated a reference block Br of a reference image Ir (temporal prediction or INTER). To this reference block Br is associated a first neighbouring area Lr1 corresponding to the neighbouring area Lc1 of Be and a second neighbouring area Lr2 corresponding to the neighbouring area Lc2 of Be. In this figure, the reference block Br is identified using a motion vector mv associated with the current block. The neighbouring areas Lr1 and Lr2 occupy the same positions with respect to Br as the neighbouring areas Lc1 and Lc2 with respect to Be. According to a variant not shown, the reference block Br belongs to the same image as the current block (spatial prediction or INTRA).
In order to not select a priori one prediction method over another, the method for reconstructing (respectively for coding) according to the invention enables the use of the two types of weighted prediction parameters, i.e. implicit parameters and explicit parameters. In addition, the method for reconstruction (respectively for coding) advantageously enables not having to signal in the stream of coded data the type of parameter used. The selection of one or other type of weighted prediction parameters is carried out both via the coding method and via the reconstruction method according to a causal neighbouring area of the current block to be reconstructed (respectively to be coded) and of the corresponding neighbouring area associated with a reference block.
Hereafter the term explicit parameter is synonymous with the term explicit weighted prediction parameter and the term implicit parameter is synonymous with implicit weighted prediction parameter.
In reference to figure 2, the invention relates to a method for reconstruction of a current block Be predicted from at least one reference block Br, the current block being in the form of coded data. According to the invention, the reference block Br belongs to a reference image Ir other than the current image Ic, or to the current image Ic. During a step 20, a weighted prediction parameter wp1, called the explicit parameter, is decoded from coded data. The decoding implements for example the H.264 standard described in the document ISO/IEC 14496-10. However, the invention is in no way limited by the method for coding the explicit parameter wp1.
During a step 22, a weighted prediction parameter wp2, called the implicit parameter, is calculated from a first causal neighbouring area Lc1 of the current block Be and a first corresponding neighbouring area Lr1 of the reference block Br. This parameter is determined according to a known method, for example according to that described in the patent application WO/2007/094792 or according to the method described in the patent application WO/2010/086393. As a simple example, wp2 is equal to the average of luminance values associated with the pixels of Lc1 divided by the average of luminance values associated with the pixels of Lr1.
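For the simple mean-ratio example just given, a minimal sketch follows; lc1 and lr1 are assumed to be numpy arrays holding the luminance samples of Lc1 and Lr1, and the guard against a zero denominator is an added assumption, not part of the patent:

    import numpy as np

    def implicit_parameter(lc1, lr1):
        # wp2 = average luminance over Lc1 / average luminance over Lr1
        denom = float(lr1.mean())
        return float(lc1.mean()) / denom if denom != 0.0 else 1.0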
According to a particular embodiment, the pixels of the first neighbouring area Lc1 (and consequently those of Lr1) are determined from the explicit parameter wp1. Note for example wp1_c the explicit parameter of the current block. Only the pixels of neighbouring blocks of Be whose explicit parameters wp1 are similar to that of the current block, wp1_c, are taken into account in Lc1. Neighbouring blocks whose explicit parameters correspond to very different illumination variations can in fact be considered as non-coherent with the current block. A weighted prediction parameter wp1 is similar to the weighted prediction parameter wp1_c of said current block when the norm, for example the absolute value, of the difference between the weighted prediction parameters is less than a threshold value ε, i.e. |wp1 - wp1_c| < ε.
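A minimal sketch of this filtering step, under the assumption (not from the patent) that the causal neighbourhood is available as a list of (wp1, pixels) pairs, one per neighbouring block:

    import numpy as np

    def build_lc1(neighbours, wp1_c, eps):
        # Keep only the pixels of neighbouring blocks whose decoded explicit
        # parameter is similar to that of the current block: |wp1 - wp1_c| < eps.
        kept = [px.ravel() for wp1, px in neighbours if abs(wp1 - wp1_c) < eps]
        return np.concatenate(kept) if kept else np.empty(0)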
During a step 24, first and second distortions are calculated. The first distortion D1 results from the prediction of a second causal neighbouring area Lc2 of the current block taking into account the explicit parameter wp1. The second distortion D2 results from the prediction, taking account of the implicit parameter wp2, of the second causal neighbouring area Lc2 of the current block.
Purely by way of example, the distortions D1 and D2 are calculated as follows:

Di = Σ (Ic2 - Ip(Ir2, wpi))², with i = 1 or 2,

the sum being taken over the pixels of the second neighbouring area, and where:
- Ic2 represents the luminance and/or chrominance values associated with the pixels of Lc2,
- Ir2 represents the luminance and/or chrominance values associated with the pixels of Lr2,
- Ip represents the luminance and/or chrominance values associated with the pixels of the prediction signal.
The second neighbouring area Lc2 of the current block is thus predicted using a second corresponding neighbouring area Lr2 of the reference block while taking account of the explicit prediction parameter, respectively of the implicit prediction parameter. The first distortion, respectively the second distortion, is thus calculated between the prediction Ip and the second neighbouring area of the current block.
Ip is a function of Ir2 and wpi. Purely as an example, Ip = wpi·Ir2 in the case of a multiplicative weighted prediction parameter. According to a variant, Di = Σ |Ic2 - Ip(Ir2, wpi)|.
According to another variant, only the pixels of Lc2 (and thus Lr2) belonging to neighbouring blocks of the current block for which the implicit parameters of illumination variations are similar to those of the current block are taken into account in the calculation of distortions D1 and D2.
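A minimal sketch of the distortion calculation of step 24, assuming the sum-of-squared-errors form and the purely multiplicative model Ip = wp·Ir2 given as examples above; lc2 and lr2 are assumed to be numpy arrays of the samples of Lc2 and Lr2:

    import numpy as np

    def distortion(lc2, lr2, wp):
        # Predict Lc2 from Lr2 with the weighting parameter, then accumulate
        # the squared prediction error over the pixels of the neighbouring area.
        pred = wp * lr2
        return float(np.sum((lc2 - pred) ** 2))

    # d1 = distortion(lc2, lr2, wp1)   # explicit parameter
    # d2 = distortion(lc2, lr2, wp2)   # implicit parameter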
According to a preferred embodiment, the first causal neighbouring area Lc1 and the second causal neighbouring area Lc2 are separate, i.e. they have no pixel in common.
According to another variant, the two neighbouring areas are different but not separate, i.e. they overlap without being identical.
During a step 26, the first and second distortions are compared.
During a step 28, a weighted prediction parameter is selected according to the result of the comparison. For example, if a·D2 + b < D1, the parameter wp2 is selected; otherwise wp1 is selected.
The parameters a and b are either predefined or determined according to the signal. For example, a and b are predefined as follows: a = 1 and b = 0. According to another example, a and b are determined according to the signal. Starting from the hypothesis that the parameters wp1 and wp2 are similar, the prediction signal obtained by either the implicit or the explicit method is considered to correspond to the signal of Lc2 to which an uncorrelated Gaussian white noise of the same variance σ² is added, i.e. Ip(Ir2, wpi) = Ic2 + e with e ~ N(0, σ²), where N(0, σ²) is a normal distribution of null expectation and of variance σ². In this case, D1/σ² and D2/σ² are both distributed according to a χ² distribution with the same number of degrees of freedom n, n corresponding to the number of pixels of the neighbouring area Lc2 (or Lr2). The ratio D1/D2 then follows a Fisher (F) distribution with (n, n) degrees of freedom. In this example, the parameter b is always null. The parameter a depends on the accepted error rate (i.e. the probability of rejecting the hypothesis although it is true) and on the number of samples of the neighbouring area Lc2. Typically, for an error rate of 5%, the following is obtained:
- for n = 8, a = 3.44
- for n = 16, a = 2.4
- for n = 32, a = 1.85
- for n = 64, a = 1.5
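Under these hypotheses, a is the (1 − error rate) quantile of the F(n, n) distribution, which can be checked numerically; the sketch below (function name assumed) yields values close to the table above, e.g. about 3.44 for n = 8:

```python
from scipy.stats import f

def select_parameter(d1: float, d2: float, n: int,
                     error_rate: float = 0.05) -> str:
    # a: (1 - error_rate) quantile of the Fisher F(n, n) distribution;
    # b is null in this example. The implicit parameter wp2 is retained
    # only when a * D2 + b < D1.
    a = f.ppf(1.0 - error_rate, n, n)  # f.ppf(0.95, 8, 8) ~ 3.44
    b = 0.0
    return "wp2" if a * d2 + b < d1 else "wp1"
```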
During a step 30, the current block is reconstructed according to the weighted prediction parameter selected in step 28. For example, if the explicit parameter is selected, then Bc is reconstructed by merging a residual block associated with Bc and a prediction block Bp, where the prediction block is determined according to the reference block Br and the parameter wp1.
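For the multiplicative model used in the examples above, step 30 reduces to the following sketch (function name and array representation assumed):

```python
import numpy as np

def reconstruct_block(res: np.ndarray, br: np.ndarray,
                      wp: float) -> np.ndarray:
    # The prediction block Bp is the reference block Br weighted by the
    # selected parameter; the current block Bc is the prediction merged
    # with the decoded residual block.
    bp = wp * np.asarray(br, float)
    return np.asarray(res, float) + bp
```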
In reference to figure 3, the invention relates to a method for coding a current block Bc predicted from at least one reference block Br.
During a step 32, a weighted prediction parameter wp1, called the explicit parameter, is calculated. The explicit parameter is for example calculated using the image data (luminance and/or chrominance) of the current block and the image data of the reference block Br. Purely as an example, wp1 is equal to the average of the luminance values associated with the pixels of Bc divided by the average of the luminance values associated with the pixels of Br. This parameter, if it is selected, must be coded. In fact, the decoder, when it reconstructs a current block Bc, does not have the image data of the current block available to it and thus cannot calculate wp1. It is for this reason that it is called the explicit weighted prediction parameter. According to a variant, the parameter wp1 is systematically coded even in the case where it is not selected. Notably, it enables the neighbouring area Lc1 to be determined.
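Mirroring the implicit case, the averaging example for wp1 translates into the following sketch (function name assumed):

```python
import numpy as np

def explicit_parameter(bc: np.ndarray, br: np.ndarray) -> float:
    # wp1: mean luminance of the current block Bc divided by the mean
    # luminance of the reference block Br. It uses the original data of
    # Bc, which the decoder does not have; hence wp1 must be coded.
    return float(np.mean(bc)) / float(np.mean(br))
```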
During a step 34, a weighted prediction parameter wp2, called the implicit parameter, is calculated from a first causal neighbouring area Lc1 of the current block Bc and a first corresponding neighbouring area Lr1 of the reference block Br. This parameter is determined according to a known method, for example that described in patent application WO/2007/094792 or that described in patent application WO/2010/086393.
According to a particular embodiment, the pixels of the first neighbouring area Lc1 (and consequently those of Lr1) are determined from the explicit parameter wp1. Denote, for example, by wp1_c the explicit parameter of the current block. Only the pixels of the neighbouring blocks of Bc whose explicit parameters wp1 are similar to the parameter wp1_c of the current block are taken into account in Lc1. Neighbouring blocks whose explicit parameters correspond to very different illumination variations can in fact be considered as non-coherent with the current block. A weighted prediction parameter wp1 is similar to the weighted prediction parameter wp1_c of said current block when the norm, for example the absolute value, of the difference between the weighted prediction parameters is less than a threshold value ε, i.e. |wp1 − wp1_c| < ε.
During a step 36, first and second distortions are calculated. The first distortion D1 results from the prediction of a second causal neighbouring area Lc2 of the current block taking into account the explicit parameter wp1. The second distortion D2 results from the prediction, taking into account the implicit parameter wp2, of the same second causal neighbouring area Lc2 of the current block.
Purely by way of example, the distortions D1 and D2 are calculated as follows:

Di = Σ (Ic2 − Ip(Ir2, wpi))², with i = 1 or 2, the sum running over the pixels of Lc2,

where Ic2 represents the luminance and/or chrominance values associated with the pixels of Lc2, Ir2 represents the luminance and/or chrominance values associated with the pixels of Lr2, and Ip represents the luminance and/or chrominance values associated with the pixels of the prediction signal.
The second neighbouring area Lc2 of the current block is thus predicted using a second corresponding neighbouring area Lr2 of the reference block, taking into account the explicit prediction parameter and the implicit prediction parameter respectively. The first distortion, respectively the second distortion, is then calculated between the prediction Ip and the second neighbouring area of the current block.
Ip is a function of Ir2 and wpi. Purely as an example, Ip = wpi·Ir2 in the case of a multiplicative weighted prediction parameter. According to a variant, the distortion is calculated with absolute differences: Di = Σ |Ic2 − Ip(Ir2, wpi)|, the sum again running over the pixels of Lc2.
According to another variant, only the pixels of Lc2 (and thus Lr2) belonging to neighbouring blocks of the current block for which the implicit parameters of illumination variations are similar to those of the current block are taken into account in the calculation of distortions D1 and D2.
According to a preferred embodiment, the first causal neighbouring area Lc1 and the second causal neighbouring area Lc2 are separate, i.e. they have no pixel in common.
According to another variant, the two neighbouring areas are different but not separate, i.e. they overlap without being identical.
During a step 38, the first and second distortions are compared.
During a step 40, a weighted prediction parameter is selected according to the result of the comparison. For example, if a·D2 + b < D1, the parameter wp2 is selected; otherwise wp1 is selected.
The parameters a and b are either predefined or determined according to the signal. For example, a and b are predefined as follows: a=1 and b=0.
According to another example, a and b are determined according to the signal. Starting from the hypothesis that the parameters wp1 and wp2 are similar, the prediction signal obtained by either the implicit or the explicit method is considered to correspond to the signal of Lc2 to which an uncorrelated Gaussian white noise of the same variance σ² is added, i.e. Ip(Ir2, wpi) = Ic2 + e with e ~ N(0, σ²), where N(0, σ²) is a normal distribution of null expectation and of variance σ². In this case, D1/σ² and D2/σ² are both distributed according to a χ² distribution with the same number of degrees of freedom n, n corresponding to the number of pixels of the neighbouring area Lc2 (or Lr2). The ratio D1/D2 then follows a Fisher (F) distribution with (n, n) degrees of freedom. In this example, the parameter b is always null. The parameter a depends on the accepted error rate (i.e. the probability of rejecting the hypothesis although it is true) and on the number of samples of the neighbouring area Lc2. Typically, for an error rate of 5%, the following is obtained:
- for n = 8, a = 3.44
- for n = 16, a = 2.4
- for n = 32, a = 1.85
- for n = 64, a = 1.5
During a step 42, the current block Bc is coded according to the weighted prediction parameter selected in step 40. For example, if the explicit parameter wp1 is selected, then Bc is coded by subtracting from the current block Bc a prediction block Bp. The differential block, or residual block, thus obtained is then coded. The prediction block Bp is determined according to the reference block Br and the parameter wp1.

The invention further relates to a coding device 12 described with reference to figure 4. The coding device 12 receives at input images I belonging to a sequence of images. Each image is divided into blocks of pixels, each of which is associated with at least one item of image data, e.g. luminance and/or chrominance data. The coding device 12 notably implements coding with temporal prediction. Only the modules of the coding device 12 relating to coding by temporal prediction, or INTER coding, are shown in figure 4. Other modules, not shown and known to those skilled in the art of video coders, implement INTRA coding with or without spatial prediction. The coding device 12 notably comprises a calculation module ADD1 capable of subtracting a prediction block Bp pixel by pixel from a current block Bc to generate a residual image data block, or residual block, noted res. It also comprises a module TQ able to transform then quantize the residual block res into quantized data. The transform T is for example a discrete cosine transform (DCT). The coding device 12 further comprises an entropy coding module COD able to code the quantized data into a stream F of coded data. It also comprises a module ITQ implementing the inverse operation to the module TQ. The module ITQ implements an inverse quantization Q⁻¹ followed by an inverse transform T⁻¹. The module ITQ is connected to a calculation module ADD2 able to add pixel by pixel the block of data from the ITQ module and the prediction block Bp to generate a block of reconstructed image data that is stored in a memory MEM.
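Purely as an illustration, the ADD1 → TQ → ITQ → ADD2 path can be sketched as follows; the uniform quantization step and all names are illustrative assumptions, not normative choices of the device:

```python
import numpy as np
from scipy.fft import dctn, idctn

def code_block(bc: np.ndarray, bp: np.ndarray, q_step: float = 16.0):
    res = np.asarray(bc, float) - np.asarray(bp, float)  # module ADD1
    levels = np.round(dctn(res, norm="ortho") / q_step)  # module TQ
    rec_res = idctn(levels * q_step, norm="ortho")       # module ITQ
    rec_bc = rec_res + np.asarray(bp, float)             # module ADD2
    return levels, rec_bc  # levels feed the entropy coder COD,
                           # rec_bc is stored in the memory MEM
```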
The coding device 12 further comprises a motion estimation module ME capable of estimating at least one motion vector between the block Bc and a block of a reference image Ir stored in the memory MEM, this image having previously been coded then reconstructed. According to a variant, the motion estimation can be carried out between the current block Bc and the original reference image, in which case the memory MEM is not connected to the motion estimation module ME. According to a method well known to those skilled in the art, the motion estimation module searches the reference image Ir for a motion vector in such a manner as to minimize an error calculated between the current block Bc and a block in the reference image Ir identified by means of said motion vector. The motion data are transmitted by the motion estimation module ME to a decision module DECISION able to select a coding mode for the block Bc from a predefined set of coding modes. The coding mode retained is for example the one that minimizes a bitrate-distortion type criterion. However, the invention is not restricted to this selection method, and the mode retained can be selected according to another criterion, for example an a priori type criterion. The coding mode selected by the decision module DECISION, as well as the motion data, for example the motion vector(s) in the case of the temporal prediction mode or INTER mode, are transmitted to a prediction module PRED. The motion vector(s) and the coding mode selected are also transmitted to the entropy coding module COD to be coded in the stream F. If an INTER prediction mode is retained by the decision module DECISION, the prediction module PRED then determines, in the reference image Ir previously reconstructed and stored in the memory MEM, the prediction block Bp from the motion vector determined by the motion estimation module ME and the coding mode determined by the decision module DECISION. If an INTRA prediction mode is retained by the decision module DECISION, the prediction module PRED determines the prediction block Bp in the current image, from among the blocks previously coded and stored in the memory MEM.
The prediction module PRED is able to determine the prediction block Bp taking into account a luminosity variation model defined by weighted prediction parameters representative of a luminosity variation between the images of the sequence or inside an image. For this purpose, the coding device 12 comprises a module SELECT1 for selecting weighted prediction parameters. The selection module SELECT1 implements the steps 32 to 40 of the method described in reference to figure 3.
The invention further relates to a decoding device 13 described with reference to figure 5. The decoding device 13 receives at the input a stream F of coded data representative of a sequence of images. The stream F is for example transmitted by a coding device 12. The decoding device 13 comprises an entropy decoding module DEC able to generate decoded data, for example coding modes and decoded data relating to the content of images. The decoding device 13 also comprises a motion data reconstruction module. According to a first embodiment, the motion data reconstruction module is the entropy decoding module DEC that decodes a part of the stream F representative of said motion vectors.
According to a variant not shown in figure 5, the motion data reconstruction module is a motion estimation module. This solution for reconstructing motion data via the decoding device 13 is known as "template matching".
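A minimal sketch of such a search follows, assuming an L-shaped causal template (the row above and the column to the left of the current block), a full-search window and in-bounds coordinates; all names and the SAD criterion are assumptions:

```python
import numpy as np

def template_matching(cur: np.ndarray, ref: np.ndarray,
                      y: int, x: int, h: int, w: int,
                      search: int = 8):
    # Causal template of the current block: pixels already
    # reconstructed at the decoder (row above, column to the left).
    tpl = np.concatenate([cur[y - 1, x:x + w],
                          cur[y:y + h, x - 1]]).astype(float)
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = np.concatenate([ref[yy - 1, xx:xx + w],
                                   ref[yy:yy + h, xx - 1]]).astype(float)
            sad = float(np.abs(tpl - cand).sum())
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv  # motion vector recovered without transmitting it
```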
The decoded data relating to the content of the images are then transmitted to a module ITQ able to carry out an inverse quantization followed by an inverse transform. This ITQ module is identical to the ITQ module of the coding device 12 having generated the coded stream F. The ITQ module is connected to a calculation module ADD able to add pixel by pixel the block from the ITQ module and the prediction block Bp to generate a block of reconstructed image data that is stored in a memory MEM. The decoding device 13 also comprises a prediction module PRED identical to the prediction module PRED of the coding device 12. If an INTER prediction mode is decoded, the prediction module PRED determines, in a reference image Ir previously reconstructed and stored in the memory MEM, the prediction block Bp from the motion vector MV and the coding mode decoded for the current block Bc by the entropy decoding module DEC. If an INTRA prediction mode is decoded, the prediction module PRED determines the prediction block Bp in the current image, from among the blocks previously reconstructed and stored in the memory MEM.
The prediction module PRED is able to determine the prediction block Bp taking into account a luminosity variation model defined by weighted prediction parameters representative of a luminosity variation between the images of the sequence or inside an image. For this purpose, the decoding device 13 comprises a module for selection of weighted prediction parameters SELECT2. The selection module SELECT2 implements the steps 22 to 28 of the method for reconstruction described in reference to figure 2. Step 20 for decoding of wp1 is preferably implemented by the entropy decoding module DEC.
The methods for coding and for reconstruction are described for a parameter wp1 and a parameter wp2. However, the invention can apply to a set of parameters J1 and a set of parameters J2. J1 can thus comprise a multiplicative weighted parameter wp1 and an offset o1. The same applies to J2. More complex parameter sets can be implemented. The two parameter sets are not necessarily the same.
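Assuming, for illustration, that J1 comprises the multiplicative parameter wp1 and the offset o1 mentioned above, the prediction model becomes affine; a short sketch with hypothetical names:

```python
import numpy as np

def predict_affine(ir2: np.ndarray, wp: float, o: float) -> np.ndarray:
    # Affine weighted prediction: Ip = wp * Ir2 + o, for a parameter
    # set such as J1 = (wp1, o1) or J2 = (wp2, o2).
    return wp * np.asarray(ir2, float) + o
```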
The methods for coding and reconstruction are described for a single reference block Br. However, the invention can apply to a current block with which several reference blocks are associated (case of bi-directional prediction for example).

Claims

1. Method for reconstructing a current block of an image predicted from at least one reference block, said current block being in the form of coded data, comprising steps for:
- decoding (20) at least one weighted prediction parameter, called an explicit parameter (wp1), from said coded data,
said method being characterized in that it also comprises the following steps:
- calculating (22) at least one weighted prediction parameter, called an implicit parameter (wp2), from a first causal neighbouring area (Lc1) of said current block (Bc) and a first corresponding neighbouring area (Lr1) of said reference block (Br),
- calculating (24) a first distortion (D1) resulting from the prediction of a second causal neighbouring area (Lc2) of said current block with said explicit parameter and a second distortion (D2) resulting from the prediction of said second causal neighbouring area (Lc2) with said implicit parameter, said second causal neighbouring area being different from said first causal neighbouring area,
- comparing (26) said first and second distortions,
- selecting (28) one parameter between said explicit parameter and said implicit parameter according to the result of said comparison, and
- reconstructing (30) said current block using said selected parameter.
2. Method for reconstruction according to claim 1, wherein said first distortion (D1), respectively said second distortion (D2), is calculated according to the following steps:
- predicting said second neighbouring area (Lc2) of the current block using a second corresponding neighbouring area (Lr2) of the reference block while taking account of said explicit prediction parameter, respectively of said implicit prediction parameter, and
- calculating said first distortion respectively said second distortion between said prediction and said second neighbouring area of the current block.
3. Method for reconstruction according to one of the previous claims, wherein said first causal neighbouring area and said second causal neighbouring area are separate.
4. Method for reconstruction according to one of the previous claims, wherein said first causal neighbouring area is determined by selecting pixels belonging to neighbouring blocks of said current block for which a difference between said explicit parameter decoded and said explicit parameter of said current block is lower than a threshold.
5. Method for reconstruction according to one of the previous claims, wherein said reference block belongs to the same image as the current block.
6. Method for reconstruction according to one of the previous claims, wherein said reference block belongs to an image other than the image to which the current block belongs.
7. Method for coding a current block of an image sequence by prediction using at least one reference block, comprising steps for:
- calculating (32) at least one weighted prediction parameter, called the explicit parameter (wp1), using the current block and the reference block,
said method being characterized in that it also comprises the following steps:
- calculating (34) at least one weighted prediction parameter, called an implicit parameter (wp2), from a first causal neighbouring area (Lc1) of said current block (Bc) and a first corresponding neighbouring area (Lr1) of said reference block (Br),
- calculating (36) a first distortion (D1) resulting from the prediction of a second causal neighbouring area (Lc2) of said current block with said explicit parameter and a second distortion (D2) resulting from the prediction of said second causal neighbouring area (Lc2) with said implicit parameter, said second causal neighbouring area being different from said first causal neighbouring area,
- comparing (38) said first and second distortions, and
- selecting (40) a parameter between said explicit parameter and said implicit parameter according to the result of said comparison, and
- coding (42) said current block using said selected parameter.
8. Method for coding according to claim 7, wherein said explicit parameter is coded.
9. Device for reconstructing a current block of an image predicted from at least one reference block, said current block being in the form of coded data, comprising:
- means for decoding at least one weighted prediction parameter, called an explicit parameter, from said coded data,
said device for reconstructing being characterized in that it also comprises:
- means for calculating at least one weighted prediction parameter, called an implicit parameter, from a first causal neighbouring area of said current block and a first corresponding neighbouring area of said reference block,
- means for calculating a first distortion resulting from the prediction of a second causal neighbouring area of said current block with said explicit parameter and a second distortion resulting from the prediction of said second causal neighbouring area with said implicit parameter, said second causal neighbouring area being different from said first causal neighbouring area,
- means for comparing said first and second distortions,
- means for selecting one parameter between said explicit parameter and said implicit parameter according to the result of said comparison, and
- means for reconstructing said current block using said selected parameter.
10. Device for coding a current block of an image sequence by prediction using at least one reference block, comprising:
- means for calculating at least one weighted prediction parameter, called the explicit parameter, using the current block and the reference block,
said device for coding being characterized in that it also comprises:
- means for calculating at least one weighted prediction parameter, called an implicit parameter, from a first causal neighbouring area of said current block and a first corresponding neighbouring area of said reference block,
- means for calculating a first distortion resulting from the prediction of a second causal neighbouring area of said current block with said explicit parameter and a second distortion resulting from the prediction of said second causal neighbouring area with said implicit parameter, said second causal neighbouring area being different from said first causal neighbouring area,
- means for comparing said first and second distortions, and
- means for selecting a parameter between said explicit parameter and said implicit parameter according to the result of said comparison, and
- means for coding said current block using said selected parameter.
EP12707349.2A 2011-03-14 2012-03-08 Method for reconstructing and coding an image block Withdrawn EP2687011A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1152044 2011-03-14
PCT/EP2012/053978 WO2012123321A1 (en) 2011-03-14 2012-03-08 Method for reconstructing and coding an image block

Publications (1)

Publication Number Publication Date
EP2687011A1 true EP2687011A1 (en) 2014-01-22

Family

ID=44131712

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12707349.2A Withdrawn EP2687011A1 (en) 2011-03-14 2012-03-08 Method for reconstructing and coding an image block

Country Status (7)

Country Link
US (1) US20140056348A1 (en)
EP (1) EP2687011A1 (en)
JP (1) JP5938424B2 (en)
KR (1) KR20140026397A (en)
CN (1) CN103430543A (en)
BR (1) BR112013023405A2 (en)
WO (1) WO2012123321A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2948845A1 (en) 2009-07-30 2011-02-04 Thomson Licensing METHOD FOR DECODING A FLOW REPRESENTATIVE OF AN IMAGE SEQUENCE AND METHOD FOR CODING AN IMAGE SEQUENCE
US9883180B2 (en) 2012-10-03 2018-01-30 Avago Technologies General Ip (Singapore) Pte. Ltd. Bounded rate near-lossless and lossless image compression
CN104363449B (en) * 2014-10-31 2017-10-10 华为技术有限公司 Image prediction method and relevant apparatus
CN107005692B (en) * 2014-11-27 2020-11-17 株式会社Kt Method and apparatus for decoding or encoding video signal

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7515637B2 (en) * 2004-05-21 2009-04-07 Broadcom Advanced Compression Group, Llc Video decoding for motion compensation with weighted prediction
CN101023673B (en) * 2004-09-16 2011-06-08 汤姆逊许可证公司 Video codec with weighted prediction utilizing local brightness variation
KR101406156B1 (en) * 2006-02-02 2014-06-13 톰슨 라이센싱 Method and apparatus for adaptive weight selection for motion compensated prediction
CN101385346B (en) 2006-02-17 2012-05-30 汤姆森许可贸易公司 Video frequency encoding device and method for solving local lightness variation
EP2039171B1 (en) * 2006-07-07 2016-10-05 Telefonaktiebolaget LM Ericsson (publ) Weighted prediction for video coding
CN101911708B (en) * 2008-01-10 2016-03-09 汤姆森特许公司 The illumination compensation method and apparatus of intra-predicted video
WO2010035731A1 (en) * 2008-09-24 2010-04-01 ソニー株式会社 Image processing apparatus and image processing method
TWI498003B (en) 2009-02-02 2015-08-21 Thomson Licensing Method for decoding a stream representative of a sequence of pictures, method for coding a sequence of pictures and coded data structure
US8711930B2 (en) * 2009-07-09 2014-04-29 Qualcomm Incorporated Non-zero rounding and prediction mode selection techniques in video encoding

Also Published As

Publication number Publication date
CN103430543A (en) 2013-12-04
JP2014514808A (en) 2014-06-19
WO2012123321A1 (en) 2012-09-20
JP5938424B2 (en) 2016-06-22
BR112013023405A2 (en) 2016-12-13
US20140056348A1 (en) 2014-02-27
KR20140026397A (en) 2014-03-05

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20130924

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20160929

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20170210