WO2008116836A2 - Image processing method and coding device implementing said method - Google Patents

Image processing method and coding device implementing said method

Info

Publication number
WO2008116836A2
WO2008116836A2 (application PCT/EP2008/053399 / EP2008053399W)
Authority
WO
WIPO (PCT)
Prior art keywords
blocks
block
value
sub
sbi
Prior art date
Application number
PCT/EP2008/053399
Other languages
French (fr)
Other versions
WO2008116836A3 (en)
Inventor
Stéphane ALLIE
Xavier Ducloux
Denis Mailleux
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of WO2008116836A2 publication Critical patent/WO2008116836A2/en
Publication of WO2008116836A3 publication Critical patent/WO2008116836A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H04N19/134 Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H04N19/169 Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Adaptive coding in which the coding unit is an image region, e.g. an object
    • H04N19/176 Adaptive coding in which the region is a block, e.g. a macroblock
    • H04N19/189 Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 Adaptive coding specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/198 Adaptive coding including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Transform coding in combination with predictive coding

Definitions

  • The invention relates to the domain of image coding. More specifically, it relates to an image processing method and a coding device implementing said method.
  • A method for coding an image I, or a sequence of images I, in the form of a bitstream S generally comprises, for each image to be coded: a step E1 applying a transform, such as a DCT (Discrete Cosine Transform), to the luminance and chrominance data associated with the pixels of the image; a step E2 determining at least one target quantization step QP* associated with the image I or with a part of this image; a step E3 quantizing the image data using one or more quantization steps; and a step E4 of entropic coding of the quantized data.
  • The image quantization step QP* is generally determined by a bitrate control algorithm as a function of a bitrate setting D, itself determined as a function of the application and associated with the image I. According to one variant, several target quantization steps QP* are determined during step E2 for the image I, each of them associated with a group of macroblocks, for example a macroblock slice.
  • In standard approaches, the determination step E2 of the target quantization step QP* for an image or image slice is followed by a step E2' of local adjustment of the quantization step within said image or image slice, for example at the level of a block of pixels.
  • This local adjustment is based on the recognition that the human eye perceives faults more easily in uniform zones than in heavily textured zones, i.e. zones having a high spatial complexity. Consequently, the uniform zones must be preserved and therefore quantized less heavily than the textured zones, which can be degraded without the degradation being perceived by the eye.
  • From this recognition, the standard approaches modify the target quantization step QP* locally, for example for each block of pixels, from a spatial analysis of the image data of these blocks with the aid of operators adapted to this effect.
  • This spatial analysis highlights the psycho-visual characteristics of each block and enables uniform blocks and heavily textured blocks to be identified.
  • a higher quantization step than the target quantization step QP* is then associated with the heavily textured blocks and conversely a lower quantization step than the target quantization step QP * is associated with the uniform blocks.
  • However, the spatial analysis operators used generally assimilate blocks containing a clear object contour with heavily textured blocks, or assimilate blocks having a significant but continuous luminance variation with textured blocks. Consequently, a local adjustment method of the quantization step based on such operators will attribute a high quantization step both to blocks comprising contours and to blocks having a continuous luminance variation, when on the contrary it should preserve these zones.
  • the invention relates to a method of processing an image divided into blocks of pixels each of which is associated with at least one luminance value.
  • the method includes the following steps:
  - calculate for each block a first value representative of the contrast in the block,
  - calculate for each block a second value representative of the spatial activity of the block,
  - calculate for each block a third value representative of the mean luminance of the block,
  - determine, with a view to a subsequent coding, for each block a quantization step offset according to the first, second and third values.
  • The method according to the invention advantageously enables a distinction to be made between textured zones, zones including a contour, and homogenous zones or zones having a significant but continuous luminance variation, and a quantization step offset to be attributed as a function of this characterisation.
  • the method also comprises a step to calculate for each of the sub-blocks a fourth value representative of the contrast in the sub-block, a step to calculate for each of the sub-blocks a fifth value representative of the spatial activity of the sub-block, and a step to calculate for each of the sub-blocks a sixth value representative of the mean luminance of the sub-block.
  • the first value is calculated, for each of the blocks, from the fourth values of the sub-blocks of the block and of adjacent sub-blocks of the block,
  • the second value is calculated from the fifth values of the sub-blocks of the block and of adjacent sub-blocks of the block
  • the third value is calculated from the sixth values of the sub-blocks of the block.
  • the blocks are macroblocks of 16 by 16 pixels and the sub-blocks are pixel blocks of 8 by 8 pixels.
  • the first value is equal to the maximum value of the fourth values of the sub-blocks of the block and of the adjacent sub-blocks of the block.
  • the second value is equal to the minimum value of the fifth values of the sub-blocks of the block and of the adjacent sub-blocks of the block.
  • the third value is equal to the mean of the sixth values of the sub-blocks of the block.
  • the adjacent sub-blocks of the block have a side in common with the block.
  • the fourth value of a sub-block of size I by J pixels is calculated, according to an equation given as an image in the original filing and omitted in this extract, from the luminance values peli,j associated with the pixels of coordinates (i,j).
  • the fifth value of a sub-block of size I by J pixels is likewise calculated according to an equation omitted in this extract.
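The exact equations for the fourth, fifth and sixth values are given as images in the original filing and do not survive in this extract. The sketch below is therefore only one plausible reading, assuming the contrast value is the largest absolute difference between vertically or horizontally adjacent pixels, the spatial activity is the sum of those differences, and the mean luminance is the plain average; the function name and all three formulas are assumptions, not the patented equations.

```python
# Hypothetical reading of the per-sub-block metrics; the patent's exact
# equations are shown as images in the original filing and are assumed here.

def sub_block_metrics(pel):
    """pel: I x J list of lists of luminance values for one sub-block.
    Returns (MF, MAS, DC): max contrast, spatial activity, mean luminance."""
    I, J = len(pel), len(pel[0])
    diffs = []
    for i in range(I):
        for j in range(J):
            if i + 1 < I:
                diffs.append(abs(pel[i + 1][j] - pel[i][j]))  # vertical contrast
            if j + 1 < J:
                diffs.append(abs(pel[i][j + 1] - pel[i][j]))  # horizontal contrast
    mf = max(diffs)                               # fourth value: maximum contrast
    mas = sum(diffs)                              # fifth value: spatial activity
    dc = sum(sum(row) for row in pel) / (I * J)   # sixth value: mean luminance
    return mf, mas, dc
```

A flat sub-block yields zero contrast and zero activity, while a vertical edge yields a high contrast value, which is the kind of separation the text relies on.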
  • the invention also relates to a coding device capable of implementing the method according to the invention.
  • the coding device comprises a transformation module to transform image data associated with each of the blocks into transform data, an image processing module suitable to determine a quantization step offset for each of the blocks, a quantization module suitable to quantize, for each of the blocks, the transform data according to a predetermined quantization step and the quantization step offset of the block and an entropic coding module suitable to code, for each block, the quantized data in the form of a bitstream.
  • the image processing module comprises: - means for calculating for each one of the blocks a first value representative of the contrast in the block,
  • FIG. 1 illustrates an image coding method according to the prior art
  • FIG. 2 illustrates an image processing method according to a first embodiment of the invention
  • - figure 3 illustrates a step of the image processing method according to the first embodiment of the invention
  • - figure 4 illustrates an image processing method according to a second embodiment of the invention
  • - figure 5 illustrates a step of the image processing method according to the second embodiment of the invention
  • - figure 6 represents a macroblock and the adjacent blocks of said macroblock
  • FIG. 7 illustrates an image coding device according to the invention
  • FIG. 8 illustrates an image sequence coding device according to the invention.
  • the invention relates to an image processing method to locally adjust a target quantization step QP * within an image or an image slice made of pixels, each of which is associated with at least one luminance value.
  • a first embodiment is described with reference to figure 2.
  • The method includes an initialisation step 10 setting to the value 0 the index k of the blocks bk of said image. It then comprises a calculation step 12, for the block bk, of a first value noted MFk representative of the contrast in the block bk. More precisely, this first value is representative of the maximum contrast of block bk in the vertical and horizontal directions.
  • The value MFk is preferentially calculated according to the following equation [omitted in this extract], where:
  - I is the height of block bk in number of pixels,
  - peli,j is the luminance value associated with the pixel of coordinates (i,j) in the block bk.
  • The method also comprises a calculation step 13, for the block bk, of a second value noted MASk representative of the spatial activity in the block bk. The value MASk is preferentially calculated according to the following equation [omitted in this extract].
  • It further comprises a calculation step 14, for the block bk, of a third value noted DCk representative of the mean luminance, also known as the average luminance level, in the block bk. The value DCk is preferentially calculated according to the following equation [omitted in this extract].
  • Steps 12, 13 and 14 of the method can also be performed simultaneously or in an entirely different order to the order described.
  • The combination of the three values determined in steps 12 to 14 advantageously enables the content of block bk to be characterised. In particular, it makes it possible to determine whether the block bk includes a contour, whether it is textured, whether it is homogenous, or whether it includes a zone of significant but continuous luminance variation. Indeed, a homogenous block bk, a block bk comprising a zone of significant but continuous luminance variation, or a block including a clear and isolated contour should be preserved, i.e. quantized with a quantization step lower than the target quantization step QP*, whereas a textured block bk can be quantized with a higher quantization step. Further, the DCk criterion enables the luminance level of the homogenous zones to be characterised: in a homogenous zone, faults are visible except when the mean luminance level of the zone is very high or, conversely, very low.
  • the target quantization step QP* associated with the image or image slice to which the block bk belongs is adjusted at the level of block bk as a function of the values MFk, MASk and DCk calculated during steps 12 to 14. More specifically, during step 15 a quantization offset value noted ΔQPk is determined with respect to QP* for the block bk, i.e. the quantization step later used to quantize the block bk is equal to QP* + ΔQPk.
  • the quantization offset ΔQPk is determined in step 15 in the following manner:
  • the content of block bk is characterised as having a textured content that can be further quantized.
  • the content of block bk is identified as containing a marked contour that must be maintained.
  • the content of block bk is characterised as homogenous (i.e. uniform or low-contrast) with an average luminance level, i.e. between S4 and S5, that must be maintained.
  • the content of block bk is not characterised as being textured, or uniform with an average luminance level, or as containing a marked contour, which is why the value 0 is attributed.
  • the k index is incremented by 1.
  • k is compared to the value N. If k is greater than or equal to N then the method is terminated, otherwise the method restarts at step 12.
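The per-block decision of step 15 can be sketched as follows. The text names the thresholds S1 to S5 and the outcomes (textured, contour, homogenous), but this extract gives neither the threshold values nor the exact way the three criteria are combined, so the numeric defaults, the offset magnitudes DQ1 to DQ3, and the nesting of the conditions below are all illustrative assumptions.

```python
# Illustrative sketch of step 15: map the three block criteria to a
# quantization offset. Thresholds S1..S5 and offsets DQ1..DQ3 are
# placeholder values, not those of the patent.

def quantization_offset(mf, mas, dc,
                        S1=40, S2=100, S3=200, S4=50, S5=200,
                        DQ1=2, DQ2=1, DQ3=1):
    if mf > S2 and mas < S3:
        return -DQ2        # marked, isolated contour: must be preserved
    if mas > S3:
        return +DQ1        # textured content: can be quantized more heavily
    if mas < S1 and S4 <= dc <= S5:
        return -DQ3        # homogenous, mid luminance: must be preserved
    return 0               # not characterised: keep the target step QP*
```

The block is then quantized with QP* + the returned offset, consistent with the text above; only the offsets, never QP* itself, are needed at this stage.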
  • the determination of the value of the quantization offset ΔQPk does not require knowledge of the value of the target quantization step QP*, which is generally determined during coding by a bitrate control method. Consequently, the quantization offset values ΔQPk can be determined prior to the coding method.
  • A macroblock is a block of 16 by 16 pixels. It is itself divided into several non-overlapping sub-blocks of 8 by 8 pixels.
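The 16 by 16 macroblock / 8 by 8 sub-block partition described above can be sketched as follows; the function name and the list-of-lists representation of luminance samples are conveniences of this sketch, not part of the patent.

```python
# Sketch of the macroblock partition: split a 16x16 block of luminance
# values into its four non-overlapping 8x8 sub-blocks, in raster order.

def split_into_sub_blocks(mb, size=8):
    """mb: 16x16 list of lists of luminance values; returns the four
    non-overlapping size x size sub-blocks in raster order."""
    return [[row[x:x + size] for row in mb[y:y + size]]
            for y in range(0, len(mb), size)
            for x in range(0, len(mb[0]), size)]
```

Each returned sub-block can then be fed to the per-sub-block metric computations of steps 22 to 25.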
  • Another embodiment is described with reference to figure 4.
  • the image is divided into N non-overlapping macroblocks, noted MBk, where k is the macroblock index in the image, k varying from 0 to N-1.
  • Each sub-block is noted sbi, where i is the sub-block index in the image, i varying from 0 to M-1.
  • The method according to the second embodiment comprises an initialisation step 20 setting to the value 0 the index k of the macroblocks MBk and the index i of the sub-blocks sbi of said image. It then comprises a calculation step 22, for the sub-block of pixels sbi, of a first value noted MFi representative of the contrast in the sub-block sbi. More precisely, it is representative of the maximum contrast of the sub-block sbi in the vertical and horizontal directions.
  • The value MFi is preferentially calculated according to the following equation [omitted in this extract], where:
  - I is the height of the sub-block sbi in number of pixels,
  - J is the width of the sub-block sbi in number of pixels,
  - peli,j is the luminance value associated with the pixel of coordinates (i,j) in the sub-block sbi,
  - MAX(al) is the maximum value al in the group {a0, a1, ..., aJ-1}.
  • The method then comprises a calculation step 24, for the sub-block sbi, of a second value noted MASi representative of the spatial activity in the sub-block sbi. The value MASi is preferentially calculated according to the following equation [omitted in this extract], computed from differences between luminance values of adjacent pixels (terms of the form peli+1,j).
  • It also includes a calculation step 25, for the sub-block of pixels sbi, of a third value noted DCi representative of the mean luminance in the sub-block sbi. The value DCi is preferentially calculated according to the following equation [omitted in this extract].
  • Steps 22, 24 and 25 of the method can also be performed simultaneously or in an entirely different order to that described.
  • the i index is incremented by 1.
  • i is compared to the value M. If i is greater than or equal to M, the method continues to step 30; otherwise it restarts at step 22.
  • the MFi values calculated for each sub-block sbi of a macroblock MB k are combined to obtain an MBF k value for the macroblock MBk.
  • the value MBFk is the maximum value MFi among the values MFi associated with the sub-blocks sbi of the macroblock MBk as well as with the adjacent sub-blocks sbi of the macroblock MBk, i.e. those that have a common side with one of the sub-blocks sbi of the macroblock MBk.
  • the sub-blocks sbi adjacent to macroblock MB k are greyed out on figure 4. This selection advantageously enables priority to be given to the neighbouring area of a contour.
  • the MASi values calculated for each sub-block sbi of a macroblock MB k are combined to obtain an MBAS k value for the macroblock MB k .
  • The MBASk value is the minimum MASi value among the MASi values associated with the sub-blocks sbi of the macroblock MBk as well as with the sub-blocks sbi adjacent to the macroblock MBk, i.e. the greyed out sub-blocks sbi in figure 4. This selection advantageously enables the favoured zone to be enlarged beyond the uniform zone and, for example, an isolated particularity of spatial activity to be suppressed.
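Steps 30 and 31, maximum of the MFi values and minimum of the MASi values over the macroblock's own sub-blocks plus the adjacent sub-blocks sharing a side with them, can be sketched as follows. The grid addressing of sub-blocks (a macroblock covering a 2 by 2 patch of 8x8 cells) is an assumption of this sketch.

```python
# Sketch of steps 30 and 31: combine sub-block values into macroblock
# values. Sub-blocks are addressed on a 2-D grid of 8x8-pixel cells;
# the grid layout and indexing are assumptions of this sketch.

def macroblock_values(mf_grid, mas_grid, mb_x, mb_y):
    """mf_grid / mas_grid: 2-D lists of per-sub-block MF and MAS values.
    (mb_x, mb_y): grid position of the macroblock's top-left sub-block."""
    H, W = len(mf_grid), len(mf_grid[0])
    # the macroblock's own four sub-blocks
    cells = [(mb_y + dy, mb_x + dx) for dy in (0, 1) for dx in (0, 1)]
    # adjacent sub-blocks sharing a side with one of those four
    for (y, x) in list(cells):
        for (ny, nx) in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in cells:
                cells.append((ny, nx))
    mbf = max(mf_grid[y][x] for (y, x) in cells)    # step 30: maximum MF
    mbas = min(mas_grid[y][x] for (y, x) in cells)  # step 31: minimum MAS
    return mbf, mbas
```

Note that diagonally touching sub-blocks are excluded, matching the "common side" criterion in the text.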
  • the target quantization step QP* associated with the image or image slice to which the macroblock MBk belongs is adjusted according to the values MBFk, MBASk and MBDCk calculated during steps 30 to 32.
  • a quantization offset value ΔQPk with respect to QP* is determined for the macroblock MBk.
  • the quantization step later used to quantize the macroblock MBk is equal to QP* + ΔQPk.
  • the content of macroblock MB k is characterised as having a textured content that can be further quantized.
  • the content of block MB k is identified as containing a marked contour that must be preserved.
  • the content of macroblock MB k is characterised as being a uniform content having an average luminance level, i.e. between S4 and S5, that must be preserved.
  • the contents of macroblock MBk are not characterised as being textured, or uniform with an average luminance level, or as containing a marked contour, which is why the value 0 is attributed.
  • ΔQ3 is chosen equal to ΔQ2, for example equal to 1.
  • the k index is incremented by 1.
  • k is compared to the value N. If k is greater than or equal to N, the method is terminated; otherwise it restarts at step 30.
  • the determination of the value of the quantization offset ΔQPk does not require knowledge of the value of the target quantization step QP*, which is generally determined during coding by a bitrate control method. Consequently, the quantization offset values ΔQPk can be determined prior to the coding method.
  • This embodiment, described for a macroblock MBk divided into sub-blocks sbi, can be applied more generally to any block of pixels bk divided into sub-blocks sbi. For example, it can be applied to blocks bk of size 8 by 8 pixels divided into sub-blocks sbi of size 4 by 4 pixels.
  • The invention also relates to a coding device ENC coding an image I in the form of a bitstream S and suitable to implement the method of the invention. It comprises a transformation module T able to receive the image I. It further comprises an image processing module P able to receive the image I at its input, a quantization module Q whose first input is linked to the output of the transformation module T, and an entropic coding module C whose input is linked to the output of the quantization module Q.
  • the entropic coding module C is able to generate the bitstream S.
  • The transformation module T is suitable to transform image data, e.g. chrominance and luminance data, associated with the pixels of the image I into transform data. It implements, for example, a DCT-type transform.
  • The image processing module P is adapted to implement steps 10 to 18 and/or 20 to 38 of the method according to the invention. In particular, it is adapted to determine, for each block bk or macroblock MBk within the image I, a quantization offset ΔQPk with respect to a target quantization step QP* associated with the image I or with the image slice to which the block bk, respectively the macroblock MBk, belongs.
  • the module Q is adapted to quantize the transform data into quantized data using a quantization step possibly locally adjusted, i.e. equal to QP* + ΔQPk.
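A minimal sketch of module Q's locally adjusted quantization, assuming plain uniform quantization by division; actual codecs such as H.264 quantize with standard-specific scaling matrices and integer arithmetic, which this sketch deliberately ignores.

```python
# Sketch of module Q: quantize a block of transform coefficients with
# the locally adjusted step QP* + dQPk. Uniform division is an
# assumption of this sketch, not the codec's real quantizer.

def quantize_block(coeffs, qp_target, dqp):
    step = qp_target + dqp  # locally adjusted quantization step
    return [[round(c / step) for c in row] for row in coeffs]
```

A positive ΔQPk (textured block) coarsens the result, a negative ΔQPk (contour or homogenous block) refines it, which is exactly the preservation behaviour described above.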
  • the coding module C is adapted to implement an entropic coding of quantized data in order to generate the bit stream S at the output of the ENC device.
  • the invention relates to a coding device ENCV of a sequence of images V in the form of a bitstream SV suitable to implement the method of the invention.
  • The coding device ENCV comprises modules already described with reference to figure 7 that carry the same references, i.e. the modules T and Q. It also comprises a subtractor D able to receive on a first input the images I of the sequence V and on a second input images or part of a prediction image Ic. The input of the transformation module T is linked to the output of the subtractor D. It also includes an image processing module P able to receive at its input the images I from the sequence V. The output of the image processing module P is linked to the input of the quantization module Q.
  • the ENCV coding device includes a motion compensation module MC with an input linked to the output of a motion estimation module ME.
  • The image or part of the prediction image Ic can possibly be obtained by spatial prediction from an image stored in the memory MEM.
  • the coding device ENCV includes a spatial prediction module INTRA.
  • the selection of the prediction mode is performed by the selection module SW.
  • The coding device ENCV also includes an inverse quantization module IQ whose input is linked to the output of the quantization module Q, and an inverse transformation module IT whose input is linked to the output of the inverse quantization module IQ.
  • The ENCV coding device includes an adder A with two inputs, one linked to the output of the inverse transformation module IT and the other linked to the output of the selection module SW.
  • the output of the adder A is linked to a memory MEM suitable to store images at the output of the adder A.
  • The motion estimation module ME is suitable to estimate the motion vectors MV between a current image I and a previously coded image.
  • the input of the motion compensation module MC and possibly the input of the spatial prediction module INTRA are also linked to the output of the memory MEM.
  • The device ENCV comprises an entropic coding module CV whose first input is linked to the output of the quantization module Q and whose second input is linked to the output of the motion estimation module ME.
  • the entropic coding module CV is suitable to code the image data at the output of the quantization module Q and the motion vectors MV in the form of the bitstream SV.
  • The modules ME, MC, INTRA, IT, IQ and CV are well known to those skilled in the art and are not described further. These modules are described in the video coding standards MPEG-2 and H.264.
  • The image processing module P can also be external to the coding device. It is, for example, integrated into a pre-processing device, itself adapted to process images or a sequence of images prior to coding, with a view to facilitating that coding.
  • the image processing module P comprises a means of storing said quantization offset values ΔQPk.
  • the quantization offset values ΔQPk can, for example, be stored in the form of a look-up table associating a quantization offset value ΔQPk with each block bk or macroblock MBk.
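One simple realisation of such storage, with a Python dict standing in for the look-up table; the function name and representation are illustrative conveniences, not part of the patent.

```python
# The offsets dQPk can be computed before coding (they do not depend on
# QP*) and stored in a look-up table keyed by block or macroblock index.

def build_offset_lut(offsets):
    """offsets: iterable of dQPk values, in block index order k = 0..N-1."""
    return {k: dqp for k, dqp in enumerate(offsets)}

lut = build_offset_lut([0, -1, 2, 0])
# at coding time, the quantization step for block k is QP* + lut[k]
```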
  • The coding device ENCV codes a sequence of images in accordance with the video coding standard H.264, also known as MPEG-4 Part 10 or MPEG-4 AVC.


Abstract

The invention relates to a method for processing an image divided into blocks of pixels. It comprises the following steps: - calculate (12, 30) for each one of said blocks a first value (MFk, MBFk) representative of the contrast of the contours in said block, - calculate (13, 32) for each one of said blocks a second value (MASk, MBASk) representative of the spatial activity of said block, - calculate (14, 33) for each one of said blocks a third value (DCk, MBDCk) representative of the mean luminance of said block, - determine (15, 34), with a view to a subsequent coding, for each one of said blocks a quantization step offset (ΔQPk) according to said first, second and third values. The invention also relates to a coding device.

Description

IMAGE PROCESSING METHOD AND CODING DEVICE IMPLEMENTING SAID METHOD
1. Scope of the invention The invention relates to the domain of image coding. More specifically it relates to an image processing method and a coding device implementing said method.
2. Prior art Referring to figure 1 , a method for coding an image I or a sequence of images I in the form of a bitstream S, generally comprises, for each image to be coded, a step E1 to transform, with a transform such as a DCT (Discrete Cosine Transform) of luminance and chrominance data associated with the pixels of the image, a step E2 to determine at least one Target quantization step QP* associated with the image I or with a part of this image, a step E3 of quantization to quantize the image data using one or more quantization steps and a step E4 of entropic coding of quantized data. The image quantization step QP* is generally determined by a bitrate control algorithm as a function of a bitrate setting D determined as a function of the application and associated with the image I. According to one variant several target quantization steps QP* are determined during step E2 for the image I, each of them associated with a group of macroblocks, such as for example a macroblock slice.
According to standard approaches, the determination step E2 of the target quantization step QP* for an image or image slice is followed by a step E2' of local adjustment of the quantization step within said image or said slice of image, for example at the level of a block of pixels. This local adjustment is based on the recognition that the human eye perceives faults more easily on the uniform zones than on the heavily textured zones i.e. having a high spatial complexity. Consequently, the uniform zones must be preserved and therefore quantized less heavily than the textured zones that can be degraded without being perceived by the eye. From this recognition, the standard approaches modify the target quantization step QP* locally, for example for each block of pixels from a spatial analysis of image data of these blocks with the aid of operators adapted to this effect. This spatial analysis highlights the psycho-visual characteristics of each block and enables uniform blocks and heavily textured blocks to be identified. A higher quantization step than the target quantization step QP* is then associated with the heavily textured blocks and conversely a lower quantization step than the target quantization step QP* is associated with the uniform blocks. However, the spatial analysis operators used generally assimilate blocks that have a clear object contour, also called front, with heavily textured blocks, or assimilate blocks having a significant and continuous luminance variation with textured blocks. Consequently, a local adjustment method of the quantization step based on such operators will attribute a high quantization step both to blocks comprising contours and blocks having a continuous luminance variation, when on the contrary it should preserve these zones.
3. Summary of the invention The purpose of the invention is to compensate for at least one disadvantage of the prior art. For this purpose, the invention relates to a method of processing an image divided into blocks of pixels each of which is associated with at least one luminance value. According to the invention, the method includes the following steps: - calculate for each block a first value representative of the contrast in the block,
- calculate for each block a second value representative of the spatial activity of the block,
- calculate for each block a third value representative of the mean luminance of the block,
- determine, with a view to a subsequent coding, for each block a quantization step offset according to the first, second and third values.
The method according to the invention advantageously enables a distinction to be made between textured zones, zones including a contour, homogenous zones and zones having a significant but continuous luminance variation, and to attribute a quantization step offset as a function of this characterisation. According to another embodiment, wherein each block is divided into sub-blocks, the method also comprises a step to calculate for each of the sub-blocks a fourth value representative of the contrast in the sub-block, a step to calculate for each of the sub-blocks a fifth value representative of the spatial activity and a step to calculate for each of the sub-blocks a sixth value representative of the mean luminance of the sub-block. According to this embodiment, the first value is calculated, for each of the blocks, from the fourth values of the sub-blocks of the block and of adjacent sub-blocks of the block, the second value is calculated from the fifth values of the sub-blocks of the block and of adjacent sub-blocks of the block, and the third value is calculated from the sixth values of the sub-blocks of the block.
According to an advantageous characteristic, the blocks are macroblocks of 16 by 16 pixels and the sub-blocks are pixel blocks of 8 by 8 pixels. According to a particular characteristic, the first value is equal to the maximum value of the fourth values of the sub-blocks of the block and of the adjacent sub-blocks of the block.
According to a particular characteristic, the second value is equal to the minimum value of the fifth values of the sub-blocks of the block and of the adjacent sub-blocks of the block.
According to another particular characteristic, the third value is equal to the mean of the sixth values of the sub-blocks of the block.
According to another particular characteristic, the adjacent sub-blocks of the block have a side in common with the block.
According to a particularly advantageous characteristic, the fourth value of a sub-block of size I by J pixels is equal to
MFi = MAX( MAX(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| , MAX(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)| )
where pel(i,j) is the luminance value associated with the pixel of coordinate (i,j) in the sub-block (sbi) and where MAX(j=0..J-1)(aj) is the maximum value aj of the group {a0, a1, ..., aJ-1}. According to a particularly advantageous characteristic, the fifth value of a sub-block of size I by J pixels is equal to
MASi = Σ(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| + Σ(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)|
The invention also relates to a coding device capable of implementing the method according to the invention. The coding device comprises a transformation module to transform image data associated with each of the blocks into transform data, an image processing module suitable to determine a quantization step offset for each of the blocks, a quantization module suitable to quantize, for each of the blocks, the transform data according to a predetermined quantization step and the quantization step offset of the block and an entropic coding module suitable to code, for each block, the quantized data in the form of a bitstream. According to an essential characteristic of the invention, the image processing module comprises: - means for calculating for each one of the blocks a first value representative of the contrast in the block,
- means for calculating for each one of the blocks a second value representative of the spatial activity of the block,
- means for calculating for each one of the blocks a third value representative of the mean luminance of the block, and
- means for determining for each one of said blocks said quantization step offset (ΔQPk) according to said first, second and third values.
4. List of figures

The invention will be better understood and illustrated by means of embodiments and implementations, by no means limiting, with reference to the figures attached in the appendix, wherein:
- figure 1 illustrates an image coding method according to the prior art,
- figure 2 illustrates an image processing method according to a first embodiment of the invention,
- figure 3 illustrates a step of the image processing method according to the first embodiment of the invention,
- figure 4 illustrates an image processing method according to a second embodiment of the invention,
- figure 5 illustrates a step of the image processing method according to the second embodiment of the invention,
- figure 6 represents a macroblock and the adjacent blocks of said macroblock,
- figure 7 illustrates an image coding device according to the invention, and
- figure 8 illustrates an image sequence coding device according to the invention.
5. Detailed description of the invention
The invention relates to an image processing method to locally adjust a target quantization step QP* within an image or an image slice made of pixels, each of which is associated with at least one luminance value. A first embodiment is described with reference to figure 2. According to this embodiment, the image is divided into N non-overlapping blocks bk, where k is the index of the block in the image, varying from 0 to N-1. The method includes an initialisation step 10 at the value 0 of the index k of the blocks bk of said image. It then comprises a calculation step 12 for the block bk of a first value noted as MFk representative of the contrast in the block bk. More precisely, this first value is representative of the maximum contrast of block bk in the vertical direction and the horizontal direction. The value MFk is preferentially calculated according to the following equation:
MFk = MAX( MAX(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| , MAX(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)| )
where: - I is the height of block bk in number of pixels,
- J is the width of block bk in number of pixels,
- pel(i,j) is the luminance value associated with the pixel of coordinate (i,j) in the block bk,
- MAX(j=0..J-1)(aj) is the maximum value aj in the group {a0, a1, ..., aJ-1}.
Generally, I = J = 8. It also includes a calculation step 13 for the pixel block bk of a second value noted as MASk representative of the spatial activity in the block bk. The value of MASk is preferentially calculated according to the following equation:
MASk = Σ(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| + Σ(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)|
It also includes a calculation step 14 for the pixel block bk of a third value noted as DCk representative of the mean luminance also known as average level of luminance in the block bk. The value of DCk is preferentially calculated according to the following equation:
DCk = (1/(I×J)) Σ(i=0..I-1, j=0..J-1) pel(i,j)
Steps 12, 13 and 14 of the method can also be performed simultaneously or in an entirely different order to the order described.
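Steps 12 to 14 can be sketched as follows in Python; the function name and the use of NumPy are illustrative, not part of the patent, and the formulas follow the descriptions of MFk, MASk and DCk given above:

```python
import numpy as np

def block_features(block):
    """Per-block values of steps 12 to 14 for a 2-D array of luminance
    samples: maximum contrast, spatial activity and mean luminance."""
    block = block.astype(np.int32)
    dv = np.abs(np.diff(block, axis=0))  # |pel(i,j) - pel(i+1,j)|, vertical
    dh = np.abs(np.diff(block, axis=1))  # |pel(i,j) - pel(i,j+1)|, horizontal
    mf = max(dv.max(), dh.max())         # MFk: maximum contrast of the block
    mas = int(dv.sum() + dh.sum())       # MASk: spatial activity of the block
    dc = block.mean()                    # DCk: mean luminance of the block
    return mf, mas, dc
```

On a uniform block both MFk and MASk are zero, while a smooth luminance ramp yields a small MFk together with a non-zero MASk, which is what allows the later steps to tell these contents apart.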
The combination of the three values determined in steps 12 to 14 advantageously enables the contents of block bk to be characterized. These values particularly make it possible to determine if the block bk includes a contour, if it is textured, if it is homogenous or if it includes a zone of significant but continuous luminance variation. In fact, a homogenous block bk, a block bk comprising a zone of significant but continuous luminance variation or a block bk including a clear and isolated contour should be preserved, i.e. quantized with a quantization step lower than the target quantization step QP*, whereas a textured block bk can be quantized with a higher quantization step. Further, the DCk criterion enables the luminance level of the homogenous zones to be characterized. Indeed, in a homogenous zone, faults are visible except when the mean luminance level of the zone is very high or, conversely, very low.
At step 15, the target quantization step QP* associated with the image or image slice to which the block bk belongs is adjusted at the level of block bk as a function of the values MFk, MASk and DCk calculated during steps 12 to 14. More specifically, during step 15 a quantization offset value with respect to QP*, noted as ΔQPk, is determined for the block bk, i.e. the quantization step later used to quantize the block bk is equal to QP* + ΔQPk. According to an embodiment described with reference to figure 3, the quantization offset ΔQPk is determined in step 15 in the following manner:
At step 150, MFk is compared to a first threshold S1:
- if (MFk > S1), then at step 151, MASk is compared to a second threshold S2:
  - if (MASk > S2), then at step 152, ΔQPk = ΔQP1, with ΔQP1 a predetermined positive integer;
  - else, at step 153, ΔQPk = ΔQP2, with ΔQP2 a predetermined negative integer;
- else, at step 154, MASk is compared to a third threshold S3 and DCk is compared to a fourth threshold S4 and a fifth threshold S5:
  - if (MASk < S3 AND DCk > S4 AND DCk < S5), then at step 155, ΔQPk = ΔQP3, with ΔQP3 a predetermined negative integer;
  - else, at step 156, MASk is compared to a sixth threshold S6:
    - if (MASk > S6), then at step 157, ΔQPk = ΔQP1;
    - else, at step 158, ΔQPk = 0.
In fact, at step 152 and step 157, the content of block bk is characterised as a textured content that can be further quantized. At step 153, the content of block bk is identified as containing a marked contour that must be preserved. At step 155, the content of block bk is characterised as a homogenous content (i.e. uniform or low contrast) having an average luminance level, i.e. between S4 and S5, that must be preserved. At step 158, the content of block bk is characterised neither as textured, nor as uniform with an average luminance level, nor as containing an isolated contour, which is why the value 0 is attributed. Preferentially, ΔQP3 is chosen equal to ΔQP2, for example ΔQP3 = ΔQP2 = -1.
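The decision cascade of steps 150 to 158 can be written compactly as below. The threshold values and the offset magnitudes used in the defaults are purely illustrative assumptions, since the patent leaves S1 to S6 and ΔQP1 to ΔQP3 as predetermined parameters:

```python
def quant_offset(mf, mas, dc, s1, s2, s3, s4, s5, s6,
                 dqp1=2, dqp2=-1, dqp3=-1):
    """Quantization step offset of step 15 (figure 3); dqp1 is a positive
    integer, dqp2 and dqp3 negative integers (example values assumed)."""
    if mf > s1:                         # strong local contrast (step 150)
        # textured content (step 152) vs. isolated contour (step 153)
        return dqp1 if mas > s2 else dqp2
    if mas < s3 and s4 < dc < s5:       # homogenous, mid luminance (step 154)
        return dqp3                     # preserve the zone (step 155)
    return dqp1 if mas > s6 else 0      # residual texture or neutral (156-158)
```

The quantization step later applied to the block is then QP* + quant_offset(...).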
At step 16, the k index is incremented by 1. At step 18, k is compared to the value N. If k is greater than or equal to N then the method is terminated, otherwise the method restarts at step 12.
According to the method of the invention, the determination of the value of the quantization offset ΔQPk does not require knowledge of the value of the target quantization step QP*, which is generally determined during coding by a bitrate control method. Consequently, the quantization offset values ΔQPk can be determined prior to the coding method.
The method described for one image can be applied to a sequence of images, e.g. a video. However, most video coding standards authorise modification of the quantization step only at the macroblock level. A macroblock is a block of pixels, in general of size 16 by 16 pixels, itself divided into several non-overlapping sub-blocks of 8 by 8 pixels. For this purpose, another embodiment is described with reference to figure 4. The image is divided into N non-overlapping macroblocks, noted as MBk, where k is the macroblock index in the image, k varying from 0 to N-1. Each sub-block is noted as sbi, where i is the sub-block index in the image, i varying from 0 to M-1. The method according to the second embodiment comprises an initialisation step 20 at the value 0 of the k index of the macroblocks MBk and of the i index of the sub-blocks sbi of said image. It then comprises a calculation step 22 for the sub-block of pixels sbi of a first value noted as MFi representative of the contrast in the sub-block sbi. More precisely, it is representative of the maximum contrast of the sub-block sbi in the vertical direction and the horizontal direction. The value MFi is preferentially calculated according to the following equation:
MFi = MAX( MAX(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| , MAX(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)| )
where: - I is the height of the sub-block sbi in number of pixels,
- J is the width of the sub-block sbi in number of pixels,
- pel(i,j) is the luminance value associated with the pixel of coordinate (i,j) in the sub-block sbi,
- MAX(j=0..J-1)(aj) is the maximum value aj in the group {a0, a1, ..., aJ-1}.
It also includes a calculation step 24 for the sub-block of pixels sbi of a second value noted as MASi representative of the spatial activity in the sub-block sbi. The value MASi is preferentially calculated according to the following equation:
MASi = Σ(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| + Σ(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)|
It also includes a calculation step 25 for the sub-block of pixels sbi of a third value noted as DCi representative of the mean luminance in the sub-block sbi. The value DCi is preferentially calculated according to the following equation:
DCi = (1/(I×J)) Σ(i=0..I-1, j=0..J-1) pel(i,j)
Steps 22, 24 and 25 of the method can also be performed simultaneously or in an entirely different order to that described. At step 26, the i index is incremented by 1.
At step 28, i is compared to the value M. If i is greater than or equal to M then the method continues to step 30, otherwise the method restarts at step 22.
At step 30, the MFi values calculated for each sub-block sbi of a macroblock MBk are combined to obtain an MBFk value for the macroblock MBk. The value MBFk is the maximum value among the MFi values associated with the sub-blocks sbi of the macroblock MBk as well as with the sub-blocks sbi adjacent to the macroblock MBk, i.e. those that have a common side with one of the sub-blocks sbi of the macroblock MBk. The sub-blocks sbi adjacent to macroblock MBk are greyed out on figure 6. This selection advantageously enables priority to be given to the neighbouring area of a contour. At step 32, the MASi values calculated for each sub-block sbi of a macroblock MBk are combined to obtain an MBASk value for the macroblock MBk. The MBASk value is the minimum value among the MASi values associated with the sub-blocks sbi of the macroblock MBk as well as with the sub-blocks sbi adjacent to the macroblock MBk, i.e. the greyed-out sub-blocks sbi of figure 6. This selection advantageously enables the favoured zone to be enlarged beyond the uniform zone and, for example, an isolated peak of spatial activity to be suppressed.
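Steps 30 and 32 can be sketched as a neighbourhood reduction over the grid of per-sub-block values; passing np.max gives MBFk and np.min gives MBASk. The function below is an illustrative sketch assuming one value per 8 by 8 sub-block, i.e. two sub-blocks per macroblock in each direction:

```python
import numpy as np

def macroblock_stats(sub_vals, reduce_fn):
    """Combine per-sub-block values (2-D grid, one entry per sub-block)
    into one value per macroblock, including the sub-blocks that share
    a side with the macroblock (steps 30 and 32)."""
    H, W = sub_vals.shape                      # H, W assumed even
    out = np.empty((H // 2, W // 2), dtype=sub_vals.dtype)
    for m in range(H // 2):
        for n in range(W // 2):
            r0, c0 = 2 * m, 2 * n
            vals = [sub_vals[r0:r0 + 2, c0:c0 + 2].ravel()]  # core 2x2
            if r0 > 0:      vals.append(sub_vals[r0 - 1, c0:c0 + 2])  # above
            if r0 + 2 < H:  vals.append(sub_vals[r0 + 2, c0:c0 + 2])  # below
            if c0 > 0:      vals.append(sub_vals[r0:r0 + 2, c0 - 1])  # left
            if c0 + 2 < W:  vals.append(sub_vals[r0:r0 + 2, c0 + 2])  # right
            out[m, n] = reduce_fn(np.concatenate(vals))
    return out
```

The cross-shaped neighbourhood (the four core sub-blocks plus those sharing a side with the macroblock) corresponds to the greyed-out sub-blocks of the figure.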
These two filtering operations over the neighbourhood of the macroblock MBk allow the quantization step offset values in the image to be smoothed, i.e. from one macroblock to the next, to avoid sudden breaks in quantization that could create visible effects. Other filters can be applied; notably, it is possible to extend the neighbourhood to non-adjacent sub-blocks. The DCi values calculated for each sub-block sbi of a macroblock MBk are also combined to obtain a mean value MBDCk for the macroblock MBk. For example, MBDCk is the mean of the DCi values associated with each sub-block sbi of the macroblock MBk. At step 34, the target quantization step QP* associated with the image or image slice to which the macroblock MBk belongs is adjusted according to the values MBFk, MBASk and MBDCk calculated during steps 30 to 32. For this purpose, a quantization offset value with respect to QP*, noted as ΔQPk, is determined for the macroblock MBk. The quantization step later used to quantize the macroblock MBk is equal to QP* + ΔQPk. According to an embodiment described with reference to figure 5, the quantization offset is determined in step 34 in the following manner:
At step 350, MBFk is compared to a first threshold SMB1:
- if (MBFk > SMB1), then at step 351, MBASk is compared to a second threshold SMB2:
  - if (MBASk > SMB2), then at step 352, ΔQPk = ΔQP1, with ΔQP1 a positive integer;
  - else, at step 353, ΔQPk = ΔQP2, with ΔQP2 a negative integer;
- else, at step 354, MBASk is compared to a third threshold SMB3 and MBDCk is compared to a fourth threshold SMB4 and a fifth threshold SMB5:
  - if (MBASk < SMB3 AND MBDCk > SMB4 AND MBDCk < SMB5), then at step 355, ΔQPk = ΔQP3, with ΔQP3 a negative integer;
  - else, at step 356, MBASk is compared to a sixth threshold SMB6:
    - if (MBASk > SMB6), then at step 357, ΔQPk = ΔQP1;
    - else, at step 358, ΔQPk = 0.
In fact, at step 352 and step 357, the content of macroblock MBk is characterised as a textured content that can be further quantized. At step 353, the content of macroblock MBk is identified as containing a marked contour that must be preserved. At step 355, the content of macroblock MBk is characterised as a uniform content having an average luminance level, i.e. between SMB4 and SMB5, that must be preserved. At step 358, the contents of macroblock MBk are characterised neither as textured, nor as uniform with an average luminance level, nor as containing an isolated contour, which is why the value 0 is attributed.
Preferentially, ΔQP3 is chosen equal to ΔQP2, for example ΔQP3 = ΔQP2 = -1. At step 36, the k index is incremented by 1.
At step 38, k is compared to the value N. If k is greater than or equal to N then the method is terminated, otherwise the method restarts at step 30. According to the method of the invention, the determination of the value of the quantization offset ΔQPk does not require knowledge of the value of the target quantization step QP*, which is generally determined during coding by a bitrate control method. Consequently, the quantization offset values ΔQPk can be determined prior to the coding method. This embodiment described for a macroblock MBk divided into sub-blocks sbi can be applied more generally to any block of pixels bk divided into sub-blocks sbi. For example, this embodiment can be applied to blocks bk of size 8 by 8 pixels divided into sub-blocks sbi of size 4 by 4 pixels.
In reference to figure 7, the invention relates to a coding device ENC of an image I in the form of a bitstream S suitable to implement the method of the invention. It comprises a transformation module T able to receive the image I. It further comprises an image processing module P able to receive the image I at its input, a quantization module Q whose first input is linked to the output of the transformation module T, and an entropic coding module C whose input is linked to the output of the quantization module Q. The entropic coding module C is able to generate the bitstream S. The transformation module T is suitable to transform image data, e.g. chrominance and luminance data, associated with the pixels of the image I, into transform data. This module implements, for example, a DCT type transform. The image processing module P is adapted to implement the steps 10 to 18 and/or 20 to 38 of the method according to the invention. In particular, it is adapted to determine for each block bk or macroblock MBk within the image I a quantization offset ΔQPk with respect to a target quantization step QP* associated with the image I or with an image slice to which the block bk, respectively the macroblock MBk, belongs. The module Q is adapted to quantize the transform data into quantized data using a quantization step possibly locally adjusted, i.e. equal to QP* + ΔQPk. The coding module C is adapted to implement an entropic coding of the quantized data in order to generate the bitstream S at the output of the device ENC.
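The quantization performed by the module Q can be illustrated by the simplified sketch below; it applies a plain uniform quantizer with the locally adjusted step QP* + ΔQPk, whereas a real MPEG-2 or H.264 quantizer uses the standard-specific scaling rules:

```python
def quantize_block(coeffs, qp_star, dqp):
    """Quantize the transform coefficients of one block with the step
    QP* + dQPk (simplified uniform quantizer, illustrative only)."""
    qp = max(1, qp_star + dqp)              # locally adjusted step, kept >= 1
    return [int(round(c / qp)) for c in coeffs], qp
```

For a block characterised as textured (positive ΔQPk) the effective step grows and more coefficients collapse to zero, while for a preserved block (negative ΔQPk) the step shrinks and detail survives quantization.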
In reference to figure 8, the invention relates to a coding device ENCV of a sequence of images V in the form of a bitstream SV suitable to implement the method of the invention. The coding device ENCV comprises modules already described in reference to figure 7 that have the same references, i.e. the modules T and Q. It also comprises a subtractor D able to receive on a first input images I of the sequence V and on a second input an image or part of a prediction image Ic. The input of the transformation module T is linked to the output of the subtractor D. It also includes an image processing module P able to receive at its input images I from the sequence V. The output of the image processing module P is linked to the input of the quantization module Q. In particular, the module P is suitable to determine for each block bk or macroblock MBk within an image I a quantization offset ΔQPk with respect to a target quantization step QP* associated with the image I or with an image slice to which the block bk or the macroblock MBk respectively belongs. The target quantization step QP* is generally determined by a bitrate control module, not shown on figure 8. The image or part of the prediction image Ic is generally obtained by motion compensation of an image stored in the memory MEM. For this purpose, the coding device ENCV includes a motion compensation module MC with an input linked to the output of a motion estimation module ME. The image or part of the prediction image Ic can possibly be obtained by spatial prediction of an image stored in the memory MEM. For this purpose, the coding device ENCV includes a spatial prediction module INTRA. The selection of the prediction mode is performed by the selection module SW. The coding device ENCV also includes an inverse quantization module IQ whose input is linked to the output of the quantization module Q, and an inverse transformation module IT whose input is linked to the output of the inverse quantization module IQ.
The coding device ENCV includes an adder A with two inputs, one being linked to the output of the inverse transformation module IT and the other being linked to the output of the selection module SW. The output of the adder A is linked to a memory MEM suitable to store images at the output of the adder A. The motion estimation module ME is suitable to estimate the motion vectors MV between a current image I and an image previously coded. The input of the motion compensation module MC and possibly the input of the spatial prediction module INTRA are also linked to the output of the memory MEM. Finally, the device ENCV comprises an entropic coding module CV whose first input is linked to the output of the quantization module Q and whose second input is linked to the output of the motion estimation module ME. The entropic coding module CV is suitable to code the image data at the output of the quantization module Q and the motion vectors MV in the form of the bitstream SV. The modules ME, MC, INTRA, IT, IQ and CV are well known by those skilled in the art and are not further described. These modules are described in the video coding standards MPEG-2 or H.264.
According to a variant, the image processing module P is external to the coding device. It is for example integrated into a pre-processing device itself adapted to process images or a sequence of images prior to the coding with a view to facilitating the processing. The image processing module P comprises a means of storing said quantization offset values ΔQPk. The quantization offset values ΔQPk can for example be stored in the form of a look-up table associating a quantization offset value ΔQPk with each block bk or macroblock MBk.
According to an advantageous embodiment, the coding device ENCV codes a sequence of images in accordance with the video coding standard H.264, also known as MPEG-4 Part 10 or MPEG-4 AVC.

Claims
1. Method for processing an image divided into blocks of pixels each of which is associated with at least one luminance value characterized in that it comprises the following steps:
- calculate (12, 30) for each one of said blocks a first value (MFk, MBFk) representative of the contrast in the said block, said value being calculated on the basis of values representative of the contrast of blocks adjacent to said block;
- calculate (13, 32) for each one of said blocks a second value (MASk, MBASk) representative of the spatial activity of the said block, said value being calculated on the basis of values representative of the spatial activity of blocks adjacent to said block;
- calculate (14, 33) for each one of said blocks a third value (DCk, MBDCk) representative of the mean luminance of the said block,
- determine (15, 34), with a view to a subsequent coding, for each one of said blocks a quantization step offset (ΔQPk) according to the said first, second and third values.
2. Method according to claim 1, wherein each one of said blocks (bk, MBk) being divided into sub-blocks (sbi), said method comprises a step to calculate (22) for each of the sub-blocks a fourth value (MFi) representative of the contrast in said sub-block (sbi), a step to calculate (24) for each of the sub-blocks a fifth value (MASi) representative of the spatial activity of said sub-block (sbi) and a step to calculate (25) for each of the sub-blocks a sixth value (DCi) representative of the mean luminance of said sub-block (sbi), and wherein said first value (MBFk) is calculated, for each of said blocks (bk, MBk), from said fourth values of said sub-blocks (sbi) of said block (bk, MBk) and of adjacent sub-blocks (sbi) of said block (bk, MBk), wherein the said second value (MBASk) is calculated from said fifth values of the sub-blocks (sbi) of said block (bk, MBk) and of adjacent sub-blocks (sbi) of said block (bk, MBk), and wherein said third value (MBDCk) is calculated from said sixth values of said sub-blocks (sbi) of the said block (bk, MBk).
3. Method according to claim 2, wherein said blocks (MBk, bk) are macroblocks of size 16 by 16 pixels and said sub-blocks (sbi) are blocks of pixels of size 8 by 8 pixels.
4. Method according to claim 2 or 3, wherein said first value (MBFk) is equal to the maximum value of said fourth values (MFi) of the sub-blocks (sbi) of said block (MBk, bk) and of the adjacent sub-blocks (sbi) of said block (MBk, bk).
5. Method according to one of claims 2 to 4, wherein said second value (MBASk) is equal to the minimum value of said fifth values (MASi) of the sub-blocks (sbi) of said block (MBk, bk) and of the adjacent sub-blocks (sbi) of said block (MBk, bk).
6. Method according to one of claims 2 to 5, in which said third value (MBDCk) is equal to the mean of said sixth values (DCi) of sub-blocks (sbi) of said block (MBk, bk).
7. Method according to one of claims 2 to 6 in which said adjacent sub-blocks (sbi) of said block (MBk, bk) are the sub-blocks (sbi) having a side in common with said block (MBk, bk).
8. Method according to one of claims 2 to 7, in which said fourth value (MFi) of a sub-block (sbi) of I by J pixels is equal to
MFi = MAX( MAX(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| , MAX(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)| )
where pel(i,j) is the luminance value associated with the pixel of coordinate (i,j) in the sub-block (sbi) and where MAX(j=0..J-1)(aj) is the maximum value aj of the group {a0, a1, ..., aJ-1}.
9. Method according to one of claims 2 to 8, in which said fifth value (MASi) of a sub-block (sbi) of I by J pixels is equal to
MASi = Σ(i=0..I-2, j=0..J-1) |pel(i,j) - pel(i+1,j)| + Σ(i=0..I-1, j=0..J-2) |pel(i,j) - pel(i,j+1)|
where pel(i,j) is the luminance value associated with the pixel of coordinate (i,j) in the sub-block sbi.
10. Coding device of an image (ENC) or a sequence of images (ENCV), each image being divided into blocks of pixels (bk, MBk) each of which is associated with at least one luminance value, said coding device (ENC, ENCV) comprising a transformation module (T) to transform image data of each one of said blocks (bk, MBk) into transform data, an image processing module (P) suitable to determine for each of said blocks a quantization step offset (ΔQPk), a quantization module (Q) suitable to quantize, for each of said blocks, said transform data according to a predetermined quantization step (QP*) and the quantization step offset (ΔQPk) of said block, and an entropic coding module (C, CV) suitable to code, for each of said blocks, said quantized data in the form of a bit stream (S, SV), said coding device (ENC, ENCV) being characterized in that said image processing module (P) comprises:
- means for calculating for each one of said blocks a first value (MFk, MBFk) representative of contrast in the said block, said value being calculated on the basis of values representative of the contrast of blocks adjacent to said block,
- means for calculating for each one of said blocks a second value (MASk, MBASk) representative of the spatial activity of the block, said value being calculated on the basis of values representative of the spatial activity of blocks adjacent to said block,
- means for calculating for each one of said blocks a third value (DCk, MBDCk) representative of the mean luminance of the block, and
- means for determining for each one of said blocks said quantization step offset (ΔQPk) according to said first, second and third values.
PCT/EP2008/053399 2007-03-23 2008-03-20 Image processing method and coding device implementing said method WO2008116836A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0754015 2007-03-23
FR0754015A FR2914125A1 (en) 2007-03-23 2007-03-23 IMAGE PROCESSING METHOD AND ENCODING DEVICE IMPLEMENTING SAID METHOD.

Publications (2)

Publication Number Publication Date
WO2008116836A2 true WO2008116836A2 (en) 2008-10-02
WO2008116836A3 WO2008116836A3 (en) 2008-11-27

Family

ID=38787403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2008/053399 WO2008116836A2 (en) 2007-03-23 2008-03-20 Image processing method and coding device implementing said method

Country Status (2)

Country Link
FR (1) FR2914125A1 (en)
WO (1) WO2008116836A2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011043793A1 (en) * 2009-10-05 2011-04-14 Thomson Licensing Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding
WO2011050978A1 (en) * 2009-11-02 2011-05-05 Panasonic Corporation Luminance dependent quantization
US8848788B2 (en) 2009-05-16 2014-09-30 Thomson Licensing Method and apparatus for joint quantization parameter adjustment
EP3044960A4 (en) * 2013-09-12 2017-08-02 Magnum Semiconductor, Inc. Methods and apparatuses including an encoding system with temporally adaptive quantization

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2200322A1 (en) * 2008-12-22 2010-06-23 Thomson Licensing Method and device for estimating a bit rate required for encoding a block of an image
FR3035760B1 (en) * 2015-04-29 2018-05-11 Digigram Video & Broadcast SYSTEM AND METHOD FOR ENCODING A VIDEO SEQUENCE

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5038389A (en) * 1987-06-25 1991-08-06 Nec Corporation Encoding of a picture signal in consideration of contrast in each picture and decoding corresponding to the encoding
US5818529A (en) * 1992-04-28 1998-10-06 Mitsubishi Denki Kabushiki Kaisha Variable length coding of video with controlled deletion of codewords
US6614941B1 (en) * 1995-10-30 2003-09-02 Sony Corporation Image activity in video compression

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5038389A (en) * 1987-06-25 1991-08-06 Nec Corporation Encoding of a picture signal in consideration of contrast in each picture and decoding corresponding to the encoding
US5818529A (en) * 1992-04-28 1998-10-06 Mitsubishi Denki Kabushiki Kaisha Variable length coding of video with controlled deletion of codewords
US6614941B1 (en) * 1995-10-30 2003-09-02 Sony Corporation Image activity in video compression

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EL-HESNAWI, M. R.: "Subjective analysis of image coding errors, Chapter 2 - Human Visual System." ELECTRICAL AND ELECTRONIC ENGINEERING, UNIVERSITY OF JOHANNESBURG, ZA, [Online] 31 May 2005 (2005-05-31), pages 5-18, XP002462370 Retrieved from the Internet: URL:http://etd.rau.ac.za/theses/available/etd-05042005-124411/restricted/chapter2.pdf> [retrieved on 2007-12-12] *
MINOO K ET AL: "Perceptual Video Coding with H.264" SIGNALS, SYSTEMS AND COMPUTERS, 2005. CONFERENCE RECORD OF THE THIRTY-NINTH ASILOMAR CONFERENCE ON PACIFIC GROVE, CALIFORNIA OCTOBER 28 - NOVEMBER 1,, PISCATAWAY, NJ, USA,IEEE, 28 October 2005 (2005-10-28), pages 741-745, XP010900101 ISBN: 1-4244-0131-3 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8848788B2 (en) 2009-05-16 2014-09-30 Thomson Licensing Method and apparatus for joint quantization parameter adjustment
WO2011043793A1 (en) * 2009-10-05 2011-04-14 Thomson Licensing Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding
CN102577379A (en) * 2009-10-05 2012-07-11 汤姆逊许可证公司 Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding
KR101873356B1 (en) 2009-10-05 2018-07-02 톰슨 라이센싱 Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding
US10194154B2 (en) 2009-10-05 2019-01-29 Interdigital Madison Patent Holdings Methods and apparatus for embedded quantization parameter adjustment in video encoding and decoding
WO2011050978A1 (en) * 2009-11-02 2011-05-05 Panasonic Corporation Luminance dependent quantization
EP3044960A4 (en) * 2013-09-12 2017-08-02 Magnum Semiconductor, Inc. Methods and apparatuses including an encoding system with temporally adaptive quantization

Also Published As

Publication number Publication date
FR2914125A1 (en) 2008-09-26
WO2008116836A3 (en) 2008-11-27

Similar Documents

Publication Publication Date Title
US8023562B2 (en) Real-time video coding/decoding
CN105959706B (en) Image encoding device and method, and image decoding device and method
EP2124453B1 (en) Method and apparatus for controlling loop filtering or post filtering in block based motion compensated video coding
CN108737835B (en) Image encoding device, image decoding device, and methods thereof
CA2961818C (en) Image decoding and encoding with selectable exclusion of filtering for a block within a largest coding block
US20030053541A1 (en) Adaptive filtering based upon boundary strength
NO322722B1 (en) Video encoding method by reducing block artifacts
US7822125B2 (en) Method for chroma deblocking
KR20180083389A (en) Video coding method and apparatus
WO2008116836A2 (en) Image processing method and coding device implementing said method
AU2011316747A1 (en) Internal bit depth increase in deblocking filters and ordered dither
EP1365590A1 (en) A method for pre-deleting noise of image
CN1285214C (en) Loop filtering method and loop filter
KR20080017136A (en) Fast mode decision method for h.264 encoding
Zhang et al. Transform-domain in-loop filter with block similarity for HEVC
Kim et al. H.264 Intra mode decision for reducing complexity using directional masks and neighboring modes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08718109

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08718109

Country of ref document: EP

Kind code of ref document: A2