WO2011134105A1 - Method of processing an image - Google Patents
Method of processing an image
- Publication number
- WO2011134105A1 (PCT/CN2010/000592)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- blockiness
- image
- pixels
- largest sub
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/181—Segmentation; Edge detection involving edge growing; involving edge linking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/40—Analysis of texture
- G06T7/41—Analysis of texture based on statistical description of texture
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
- H04N19/865—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness with detection of the former encoding block subdivision in decompressed video
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20021—Dividing image into blocks, subimages or windows
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Quality & Reliability (AREA)
- Probability & Statistics with Applications (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The invention relates to a method for processing an image divided into blocks of pixels comprising the steps of: detecting (10), for each block, a largest sub-block whose pixels have an equal luminance value; identifying (12), for each block, if the detected largest sub-block is or is not a natural texture; calculating (14), for each block, a weighted luminance difference between the detected largest sub-block and neighboring pixels; determining (16), for each block, a block blockiness level on the basis of the number of pixels within the detected largest sub-block, of said identification step and of said weighted luminance difference; and processing (20) the image on the basis of the block blockiness levels.
Description
METHOD OF PROCESSING AN IMAGE
1. FIELD OF THE INVENTION
The invention relates to image processing. More precisely, the invention concerns a method for processing an image divided into blocks.
2. BACKGROUND OF THE INVENTION
Blockiness is one of the main artifacts in images encoded by block based codecs. Accurately determining the blockiness level of an image or of image blocks is necessary to evaluate the image quality and consequently helps the processing of the image. As an example, when filtering an image, a stronger filter is applied on blocks with high blockiness levels while lower or no filter is applied on the other blocks, i.e. those with low blockiness levels.
Blockiness can be defined as the discontinuity at the boundaries of adjacent blocks in an image. Therefore, many known methods for determining a blockiness level operate at macroblocks' boundaries. These methods do not appropriately manage blockiness propagation. Indeed, due to motion compensation, blockiness artifacts are propagated from reference images into predicted images. Consequently, blockiness artifacts in the predicted images are not necessarily aligned with macroblock boundaries. In this case, known methods fail to determine an accurate blockiness level. In addition, such known methods do not accurately determine blockiness level when a deblocking filter is applied. Such a deblocking filter is for example used when encoding a video according to H.264 video coding standard. When a deblocking filter is applied, the discontinuity at the macroblock boundaries is decreased. In this case, known methods fail to determine accurate blockiness levels solely based on the difference at the boundaries. Finally, such known methods fail to accurately determine the blockiness level of images with large plain or complicated texture.
3. BRIEF SUMMARY OF THE INVENTION
The object of the invention is to overcome at least one of these drawbacks of the prior art.
To this aim the invention relates to a method for processing an image divided into blocks of pixels comprising the steps of :
- detecting, for each block, a largest sub-block whose pixels have an equal luminance value;
- identifying, for each block, if the detected largest sub-block is or is not a natural texture;
- calculating, for each block, a weighted luminance difference between the detected largest sub-block and neighboring pixels;
- determining, for each block, a block blockiness level on the basis of the number of pixels within the detected largest sub-block, of the identification step and of the weighted luminance difference; and
- processing the image on the basis of the block blockiness levels.
Advantageously, the method further comprises determining an image blockiness level for the image by averaging the block blockiness levels. In this case, the image is processed on the basis of the image blockiness level.
According to an aspect of the invention, the processing step comprises one of the steps belonging to the set comprising:
- encoding step;
- filtering step; and
- distributing step.
According to a particular aspect of the invention, the identification step comprises, when a deblocking filter is applied on the image, identifying the detected largest sub-block as natural texture when the detected largest sub-block reaches at least two opposite block borders.
According to another aspect of the invention, the identification step comprises, when no deblocking filter is applied on the image or when no information is provided on whether a deblocking filter is applied or not, identifying the detected largest sub-block as natural texture when the detected largest sub-block exceeds at least two opposite block borders.
Advantageously, the step of determining a block blockiness level comprises the steps of:
- determining a preliminary block blockiness level on the basis of the number of pixels within the detected largest sub-block and on the basis of the results of the identification step; and
- adjusting the preliminary block blockiness level as a function of the weighted luminance difference, the function depending on a blockiness sensitivity.
According to a specific embodiment, the preliminary block blockiness level BBL for a block is calculated as follows:
BBL = 0     when the detected largest sub-block is a natural texture
BBL = 1     when T2 ≤ N
BBL = N/T2  when T1 ≤ N < T2
BBL = 0     when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block.
According to a variant, the preliminary block blockiness level BBL for a block is calculated as follows:
BBL = 0    when the detected largest sub-block is a natural texture
BBL = 1    when T2 ≤ N
BBL = 0.5  when T1 ≤ N < T2
BBL = 0    when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block.
4. BRIEF DESCRIPTION OF THE DRAWINGS
Other features and advantages of the invention will appear with the following description of some of its embodiments, this description being made in connection with the drawings in which:
- Figure 1 illustrates a processing method according to a first embodiment of the invention;
- Figure 2 illustrates sub-blocks of a current block whose pixels have an equal luminance value;
- Figure 3 illustrates a sub-block of a block whose pixels have an equal luminance value and neighboring pixels;
- Figure 4 illustrates one of the steps of the processing method according to the invention;
- Figure 5 illustrates a processing method according to another embodiment of the invention; and
- Figure 6 illustrates a processing device according to the invention.
5. DETAILED DESCRIPTION OF THE INVENTION
The method of processing an image divided into blocks of pixels is described with reference to figure 1. According to a specific embodiment the blocks are macroblocks.
At step 10, the largest sub-block whose pixels have an equal luminance value is detected within the current block. As an example illustrated on figure 2, the largest sub-block is detected as follows:
- detecting the longest line of pixels with equal luminance value in every horizontal row of the current block and recording the start and end position of the longest line and the corresponding equal luminance value;
- comparing, for each recorded equal luminance value, the recorded equal luminance values of the neighboring rows and merging into one sub-block the adjacent detected longest lines with the same recorded equal luminance values.
The rightmost start position (RSP) and the leftmost end position (LEP) of the horizontal rows are recorded as the new start and end positions of the merged sub-block, separately. The largest sub-block in the current block is the merged sub-block with the maximum number of pixels. On figure 2, the detected sub-block is therefore the sub-block B2 comprising grey pixels. Indeed, the sub-block B1 comprises fewer pixels. If both have the same number of pixels, the first detected one may be kept.
According to a variant, vertical rows and longest columns are considered instead of horizontal rows and longest lines.
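As an illustration only, the detection of step 10 can be sketched in Python as follows. This is a minimal sketch under assumptions: the block is given as a 2-D NumPy array of luminance values, and the function name and return convention are invented for this example, not part of the method as claimed.

```python
import numpy as np

def largest_equal_luminance_subblock(block):
    """Detect the largest sub-block of equal-luminance pixels in a block.

    For every horizontal row the longest run of pixels with equal luminance is
    recorded; runs of adjacent rows with the same luminance value are merged,
    the merged sub-block being bounded by the rightmost start position (RSP)
    and the leftmost end position (LEP) of the merged runs.
    Returns (value, top_row, bottom_row, start_col, end_col), end_col exclusive.
    """
    rows, cols = block.shape
    runs = []  # per row: (luminance value, start, end) of the longest run
    for r in range(rows):
        best = (block[r, 0], 0, 1)
        start = 0
        for c in range(1, cols + 1):
            if c == cols or block[r, c] != block[r, start]:
                if c - start > best[2] - best[1]:
                    best = (block[r, start], start, c)
                start = c
        runs.append(best)

    best_sub = None
    r = 0
    while r < rows:
        value, rsp, lep = runs[r]
        r2 = r
        while r2 + 1 < rows and runs[r2 + 1][0] == value:
            r2 += 1
            rsp = max(rsp, runs[r2][1])   # rightmost start position (RSP)
            lep = min(lep, runs[r2][2])   # leftmost end position (LEP)
        n_pixels = (r2 - r + 1) * max(lep - rsp, 0)
        if best_sub is None or n_pixels > best_sub[0]:
            best_sub = (n_pixels, value, r, r2, rsp, lep)  # ties keep the first one
        r = r2 + 1

    _, value, top, bottom, start_col, end_col = best_sub
    return value, top, bottom, start_col, end_col
```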
At step 12, the detected largest sub-block is identified as a natural texture or a non natural texture. When a deblocking filter is applied (such as for images encoded according to H.264 using default deblocking filter or for images encoded according to MPEG2 with a de-blocking filter applied as postprocessing), the detected largest sub-block is identified as a natural texture when the detected largest sub-block reaches at least two opposite block borders of the current block (e.g. top and bottom borders and/or left and right borders) and as a non natural texture otherwise.
When no deblocking filter is applied (such as for images encoded according to MPEG2, for images encoded according to H.264 with the deblocking filter disabled, or for images for which no information is provided on whether a deblocking filter is applied or not, i.e. when it is not known whether a deblocking filter is used), the detected largest sub-block is identified as a natural texture when the detected largest sub-block exceeds at least two opposite block borders of the current block (e.g. top and bottom borders and/or left and right borders) and as a non natural texture otherwise. Here 'exceeds' means that the largest sub-block not only reaches the borders of the current block, but also has the same luminance value as at least one line of pixels of the neighboring blocks next to the borders.
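The identification of step 12 can likewise be sketched as below. This is a hedged sketch: the bounding-box representation reuses the convention of the previous sketch, the function and parameter names are invented, and the 'exceeds' test is interpreted as requiring one extra line of equal-luminance pixels beyond both of the two opposite borders.

```python
import numpy as np

def is_natural_texture(image, block_row, block_col, block_size, sub, deblocking_applied):
    """Identify the detected largest sub-block as natural / non natural texture.

    `sub` = (value, top, bottom, start_col, end_col) in block-local coordinates
    (bottom inclusive, end_col exclusive), as returned by the previous sketch.
    """
    value, top, bottom, start, end = sub
    reaches_tb = top == 0 and bottom == block_size - 1   # top and bottom borders
    reaches_lr = start == 0 and end == block_size        # left and right borders

    if deblocking_applied:
        # a deblocking filter was applied: reaching two opposite borders is enough
        return reaches_tb or reaches_lr

    # no deblocking filter (or unknown): the sub-block must also share its
    # luminance value with at least one line of pixels of the neighboring blocks
    h, w = image.shape

    def same_line(r0, r1, c0, c1):
        return np.all(image[r0:r1, c0:c1] == value)

    exceeds_tb = (reaches_tb
                  and block_row - 1 >= 0
                  and block_row + block_size < h
                  and same_line(block_row - 1, block_row,
                                block_col + start, block_col + end)
                  and same_line(block_row + block_size, block_row + block_size + 1,
                                block_col + start, block_col + end))
    exceeds_lr = (reaches_lr
                  and block_col - 1 >= 0
                  and block_col + block_size < w
                  and same_line(block_row + top, block_row + bottom + 1,
                                block_col - 1, block_col)
                  and same_line(block_row + top, block_row + bottom + 1,
                                block_col + block_size, block_col + block_size + 1))
    return exceeds_tb or exceeds_lr
```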
At step 14, a weighted luminance difference d between the detected largest sub-block and its neighboring pixels is computed. Several luminance masking methods can be used for this purpose. As an example, for 8-bit grey-scale images, d may be computed as d = w × |μe − μn|, where μe is the average luminance of the detected largest sub-block, μn is the average luminance of the neighboring pixels and w is a weight related to texture and luminance masking. w is computed from the following quantities:
- ζ, the selected average luminance value for which the highest weight should be given to the blockiness;
- λ, a parameter related to ζ; and
- μn and σn, the average luminance and standard deviation of the neighboring pixels.
Users can optionally set the value of ζ in the range from 70 to 90. The default value is 81. λ is derived from ζ. μn and σn can be calculated as explained below.
Firstly, the neighboring pixels are defined. As shown in figure 3, after detecting the largest sub-block, e.g. the 4x5 block of black pixels on figure 3, the neighboring pixels are defined for example as three lines of pixels to the left, right, top, and bottom. The neighboring pixels may be defined differently, for example as 4 lines of pixels instead of 3.
Secondly, the average luminance and standard deviation in the 4 neighboring blocks are calculated separately. They are referred to as μleft, μright, μtop, μbottom, and σleft, σright, σtop, σbottom respectively. As an example, μleft and σleft are calculated as follows:
μleft = (1/Nleft) Σi pi
σleft = sqrt( (1/Nleft) Σi (pi − μleft)² )
where Nleft is the number of pixels in the left neighboring block and pi is the luminance value of the i-th pixel. μright, μtop, μbottom, and σright, σtop, σbottom can be calculated in the same way. Finally, the overall average luminance value μn and standard deviation σn of the neighboring pixels are calculated from these four neighboring regions.
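A possible sketch of the neighboring-pixel statistics and of the weighted difference d of step 14 is given below. The exact expression of the weight w (a function of ζ, λ, μn and σn) is not reproduced here, so it is left to the caller; the pooling of the four regions into μn and σn, and all names, are assumptions of this sketch.

```python
import numpy as np

def neighbor_statistics(image, top, bottom, left, right, lines=3):
    """Mean / standard deviation of the pixels neighboring the detected sub-block.

    The sub-block occupies rows top..bottom-1 and columns left..right-1 of the
    image; the neighboring pixels are `lines` lines of pixels on each side,
    clipped at the image borders.
    """
    h, w = image.shape
    regions = {
        "left":   image[top:bottom, max(left - lines, 0):left],
        "right":  image[top:bottom, right:min(right + lines, w)],
        "top":    image[max(top - lines, 0):top, left:right],
        "bottom": image[bottom:min(bottom + lines, h), left:right],
    }
    mu = {name: float(reg.mean()) for name, reg in regions.items() if reg.size}
    sigma = {name: float(reg.std()) for name, reg in regions.items() if reg.size}
    # overall statistics of the neighboring pixels, pooled over the four regions
    pooled = np.concatenate([reg.ravel() for reg in regions.values() if reg.size])
    return mu, sigma, float(pooled.mean()), float(pooled.std())

def weighted_luminance_difference(mu_e, mu_n, weight):
    """d = w * |mu_e - mu_n| for an 8-bit grey-scale image; the weight w
    (a function of zeta, lambda, mu_n and sigma_n) is supplied by the caller."""
    return weight * abs(mu_e - mu_n)
```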
At step 16, a block blockiness level is determined for the current block. The step 16 is detailed on figure 4.
At step 160, a preliminary block blockiness level BBL is calculated for the current block on the basis of the number of pixels within the detected largest sub-block and on the basis of the results of the identification step 12. As an example, preliminary block blockiness level BBL is calculated as follows:
BBL = 0     when the detected largest sub-block is a natural texture
BBL = 1     when T2 ≤ N
BBL = N/T2  when T1 ≤ N < T2
BBL = 0     when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block. The default value of (T1, T2) is (30, 80). For images with very complicated texture, T1 and T2 can be adjusted a little lower, but not lower than (20, 70). For images with large plain texture, they can be adjusted a little higher, but not higher than (50, 100). The case N < T1 refers to the case where the size of the detected largest sub-block is small. In this case the block blockiness level is set to zero.
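The first example of step 160 may be sketched as follows. Assumptions of this sketch: the handling of the exact boundary cases N = T1 and N = T2, and the function and parameter names; the coarser variant described next would simply return 0.5 instead of N/T2 for the intermediate case.

```python
def preliminary_bbl(n_pixels, is_natural, t1=30, t2=80):
    """Preliminary block blockiness level, first example of step 160.

    (t1, t2) defaults to (30, 80); it may be lowered down to (20, 70) for very
    complicated textures or raised up to (50, 100) for large plain textures.
    """
    if is_natural:
        return 0.0
    if n_pixels >= t2:
        return 1.0
    if n_pixels >= t1:
        return n_pixels / t2
    return 0.0  # small detected sub-block: blockiness level set to zero
```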
Another example to calculate the preliminary block blockiness level BBL is as below:
BBL = 0    when the detected largest sub-block is a natural texture
BBL = 1    when T2 ≤ N
BBL = 0.5  when T1 ≤ N < T2
BBL = 0    when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block. The thresholds (T1, T2) can be set in the same way as in the above example. According to other embodiments, more thresholds can be set to divide the block blockiness level into finer granularity.
At step 162, the preliminary block blockiness level BBL is adjusted depending on a blockiness sensitivity, on the basis of the weighted luminance difference computed at step 14. Such blockiness sensitivity may be user defined based on application requirements. As an example, 5 levels of blockiness sensitivity are defined as follows:
- Most sensitive level L5:
BL5 = BBL
L5 is mainly used in applications dealing with high quality videos.
- Intermediate sensitive level L3:
BL3 = BBL            when 2 ≤ d
BL3 = (d − 1) × BBL  when 1 ≤ d < 2
BL3 = 0              when d < 1
- Least sensitive level L1:
BL1 = BBL*, where BBL* is calculated as BBL but with the thresholds (T1, T2) set to (50, 100), i.e. to their upper values. Apart from the value of (T1, T2), BBL* is not different from BBL.
L1 is mainly used in applications dealing with low quality videos.
L5 is used for generating high quality videos. It marks all possible blockiness areas, even those where the blockiness can only be noticed by very careful checking; for such areas the value BL5 is therefore different from 0. The blockiness may be ignored under some conditions (such as display on low quality equipment), but as long as it can possibly be noticed, it is marked out. Levels L4 to L2 have intermediate sensitivity between L5 and L1; the content creator can optionally select them for different quality requirements. L1 is used for generating low quality videos, such as some videos shared on the internet. It only marks the areas with very strong blockiness, which strongly influence the video quality. Some areas with noticeable blockiness may not be marked out, i.e. have BL1 = 0, if the blockiness does not influence the video quality very much.
According to a variant, only a subset of the above blockiness sensitivity levels is defined. According to another variant, more blockiness sensitivity levels are defined.
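Step 162 can then be sketched for the three sensitivity levels spelled out above (L5, L3 and L1); the string-based selection of the level and the inlined recomputation for L1 are assumptions of this sketch, and L4 and L2 would fall in between.

```python
def adjust_bbl(bbl, d, sensitivity, n_pixels=0, is_natural=False):
    """Adjust the preliminary blockiness level BBL according to the sensitivity."""
    if sensitivity == "L5":       # most sensitive: keep every marked area
        return bbl
    if sensitivity == "L3":       # intermediate: fade out when d is small
        if d >= 2:
            return bbl
        if d >= 1:
            return (d - 1) * bbl
        return 0.0
    if sensitivity == "L1":       # least sensitive: BBL recomputed with (T1, T2) = (50, 100)
        if is_natural or n_pixels < 50:
            return 0.0
        return 1.0 if n_pixels >= 100 else n_pixels / 100.0
    raise ValueError("unsupported sensitivity level: " + sensitivity)
```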
If all blocks of the image have been considered then the method continues to step 20, otherwise the next block in the image is considered as the new current block and the method goes back to step 10.
At step 20, the image is processed on the basis of the block blockiness levels computed at step 16. According to an advantageous embodiment, the image is filtered on a block basis, the blocks having a high blockiness level being filtered more strongly than the blocks having a low blockiness level. According to a variant, the image is encoded with encoding parameters adapted on the basis of the blockiness level. As an example, the quantization parameter is adapted on a block level. To get a stable quality, the blocks having a high blockiness level are encoded with a lower quantization parameter than the blocks having a low blockiness level. According to another embodiment, the image is distributed over a channel. Knowing the image quality at the user end thanks to the block blockiness levels, the video distributor adjusts the image coding parameters (e.g. quantization parameter, deblocking filter parameters) and possibly the channel bandwidth on the basis of these block blockiness levels. In addition, the distributors can charge the end user differently according to the received video quality level, evaluated by the block blockiness levels or by a video blockiness level. Advantageously, the video blockiness level is determined on the basis of the block blockiness levels determined for the blocks of all the images of the video. As an example, the video blockiness level is determined as the average of these block blockiness levels BBL. According to a variant, the video blockiness level VBL is determined as a function of these block blockiness levels BBL, considering spatial and temporal masking.
According to another embodiment illustrated on figure 5, an image blockiness level IBL is determined at step 18 on the basis of the block blockiness levels. As an example, the image blockiness level is determined as the average of the block blockiness levels. According to a variant, the image blockiness level IBL is determined as the sum of the weighted block blockiness levels. The weight of a block is determined based on region of interest (ROI) information: a block located in a region of higher interest has a higher weight than a block located in a region of lower interest. The image is thus processed at step 20 on the basis of this image blockiness level IBL.
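A minimal sketch of step 18, covering both the plain average and the ROI-weighted variant; the normalisation of the ROI weights and the function name are assumptions of this sketch.

```python
import numpy as np

def image_blockiness_level(block_levels, roi_weights=None):
    """Image blockiness level IBL: plain average of the block blockiness levels,
    or a weighted sum when region-of-interest weights are provided."""
    levels = np.asarray(block_levels, dtype=float)
    if roi_weights is None:
        return float(levels.mean())
    weights = np.asarray(roi_weights, dtype=float)
    weights = weights / weights.sum()          # normalize the ROI weights
    return float((weights * levels).sum())
```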
According to another variant, the method of processing illustrated on figures 1 and 4 further comprises a step 19 of computing a blockiness map from the block blockiness levels computed at step 16. Block blockiness levels are first scaled to (0, 255) as follows:
Scaled Blockiness = BLi × 255, i = 1, 2, 3, 4, 5
where BLi is the blockiness of level Li.
The blockiness map is an 8-bit grey-scale picture in which the pixels of the block corresponding to each analysed block are set to the scaled blockiness value. In the blockiness map the size of the corresponding block may be scaled, i.e., if the block size for blockiness calculation is 16x16, the size of the corresponding block in the blockiness map may be 16x16, 4x4, 32x32, ...
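Step 19 may be sketched as follows; the assumptions of this sketch are that the block blockiness levels are given as a 2-D array with one entry in [0, 1] per analysed block, and that the map block size is a free parameter.

```python
import numpy as np

def blockiness_map(block_levels, map_block_size=16):
    """8-bit grey-scale blockiness map.

    `block_levels` is a 2-D array with one blockiness level in [0, 1] per
    analysed block; each level is scaled to (0, 255) and spread over a
    map_block_size x map_block_size square of the map.
    """
    scaled = np.clip(np.asarray(block_levels, dtype=float) * 255.0, 0, 255)
    scaled = scaled.astype(np.uint8)
    return np.kron(scaled, np.ones((map_block_size, map_block_size), dtype=np.uint8))
```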
Figure 6 represents an exemplary architecture of an image processing device 2 according to the invention. The processing device 2 comprises the following elements that are linked together by a data and address bus 24:
- a microprocessor 21 (or CPU), which is, for example, a DSP (or Digital Signal Processor);
- a ROM (or Read Only Memory) 22;
- a RAM (or Random Access Memory) 23;
- a user interface 25;
- a reception module 26 for reception of images;
- possibly a module 27 for transmission of processed images (i.e. the output of the processing method) to an application and/or a display.
Each of these elements of figure 6 is well known by those skilled in the art and won't be disclosed further. In each of the mentioned memories, the word « register » used in the specification can correspond to an area of small capacity (some bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data). ROM 22 comprises a program "prog" 220. Algorithms of the processing method according to the invention are stored in the ROM 22. When switched on, the CPU 21 uploads the program 220 into the RAM and executes the corresponding instructions. RAM 23 comprises:
- in a register 230, the program executed by the CPU 21 and uploaded after switch on of the processing device 2;
- input data in a register 231 ;
- processed data at different stages of the processing method in a register 232;
- T1 and T2 in registers 233 and 234 respectively;
- ζ in register 235 ; and
- other variables used for processing the image in a register 236 (e.g. filter parameters, encoding parameters, and/or monitoring parameters, etc.).
According to a variant of the invention, the digital part of the processing device 2 is implemented in a pure hardware configuration (e.g. in one or several FPGA, ASIC or VLSI with corresponding memory) or in a configuration using both VLSI and DSP.
Claims
1. Method for processing an image divided into blocks of pixels comprising the steps of :
- detecting (10), for each block, a largest sub-block whose pixels have an equal luminance value;
- identifying (12), for each block, if the detected largest sub-block is or is not a natural texture;
characterized in that said method further comprises the steps of:
- calculating (14), for each block, a weighted luminance difference between the detected largest sub-block and neighboring pixels;
- determining (16), for each block, a block blockiness level on the basis of the number of pixels within the detected largest sub-block, of said identification step and of said weighted luminance difference; and
- processing (18) said image on the basis of said block blockiness levels.
2. Method according to claim 1 , said method further comprises determining (17) an image blockiness level for said image by averaging the block blockiness levels and wherein said image is processed on the basis of said image blockiness level.
3. Method according to claim 1 or 2, wherein said processing step comprises one of the steps belonging to the set comprising:
- encoding step;
- filtering step; and
- distributing step.
4. Method according to any of claims 1 to 3, wherein the identification step comprises, when a deblocking filter is applied on said image, identifying the detected largest sub-block as natural texture when the detected largest sub-block reaches at least two opposite block borders.
5. Method according to any of claims 1 to 3, wherein the identification step comprises, when no deblocking filter is applied on said image or when no information is provided on if deblocking filter is applied or not, identifying the detected largest sub-block as natural texture when the detected largest sub-block exceeds at least two opposite block borders.
6. Method according to any of claims 1 to 5, wherein the step of determining (16) a block blockiness level comprises the steps of:
- determining (160) a preliminary block blockiness level on the basis of the number of pixels within the detected largest sub-block and on the basis of the results of the identification step (12); and
- adjusting (162) said preliminary block blockiness level as a function of the weighted luminance difference, said function depending on a blockiness sensitivity.
7. Method according to claim 6, wherein the preliminary block blockiness level BBL for a block is calculated as follows:
BBL = 0     when the detected largest sub-block is a natural texture
BBL = 1     when T2 ≤ N
BBL = N/T2  when T1 ≤ N < T2
BBL = 0     when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block.
8. Method according to claim 6, wherein the preliminary block blockiness level BBL for a block is calculated as follows:
BBL = 0    when the detected largest sub-block is a natural texture
BBL = 1    when T2 ≤ N
BBL = 0.5  when T1 ≤ N < T2
BBL = 0    when N < T1
where T1 and T2 are threshold values and N is the number of pixels within the detected largest sub-block.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2010/000592 WO2011134105A1 (en) | 2010-04-29 | 2010-04-29 | Method of processing an image |
US13/642,622 US9076220B2 (en) | 2010-04-29 | 2010-04-29 | Method of processing an image based on the determination of blockiness level |
EP10850438.2A EP2564591A4 (en) | 2010-04-29 | 2010-04-29 | Method of processing an image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2010/000592 WO2011134105A1 (en) | 2010-04-29 | 2010-04-29 | Method of processing an image |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011134105A1 true WO2011134105A1 (en) | 2011-11-03 |
Family
ID=44860728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2010/000592 WO2011134105A1 (en) | 2010-04-29 | 2010-04-29 | Method of processing an image |
Country Status (3)
Country | Link |
---|---|
US (1) | US9076220B2 (en) |
EP (1) | EP2564591A4 (en) |
WO (1) | WO2011134105A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3239929B1 (en) * | 2016-04-27 | 2019-06-12 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method and program |
CN110728676B (en) * | 2019-07-22 | 2022-03-15 | 中南大学 | Texture feature measurement method based on sliding window algorithm |
US11334979B2 (en) * | 2020-05-08 | 2022-05-17 | Istreamplanet Co., Llc | System and method to detect macroblocking in images |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1742281A (en) * | 2002-12-27 | 2006-03-01 | 摩托罗拉公司 | Video deblocking method and apparatus |
US20080143739A1 (en) * | 2006-12-13 | 2008-06-19 | Harris Jerry G | Method and System for Dynamic, Luminance-Based Color Contrasting in a Region of Interest in a Graphic Image |
CN101605257A (en) * | 2008-06-11 | 2009-12-16 | 北京中创信测科技股份有限公司 | A kind of blocking effect analytical method and system |
CN101682767A (en) * | 2007-04-09 | 2010-03-24 | 特克特朗尼克公司 | Systems and methods for measuring loss of detail in a video codec block |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TW395133B (en) | 1998-07-15 | 2000-06-21 | Koninkl Philips Electronics Nv | Detection of a watermark in a compressed video signal |
WO2000026860A1 (en) | 1998-10-29 | 2000-05-11 | Koninklijke Philips Electronics N.V. | Watermark detection |
US6417861B1 (en) * | 1999-02-17 | 2002-07-09 | Sun Microsystems, Inc. | Graphics system with programmable sample positions |
WO2003034335A2 (en) | 2001-10-11 | 2003-04-24 | Koninklijke Philips Electronics N.V. | Method and apparatus for discriminating between different regions of an image |
US7567721B2 (en) | 2002-01-22 | 2009-07-28 | Digimarc Corporation | Digital watermarking of low bit rate video |
JP4344504B2 (en) | 2002-04-17 | 2009-10-14 | 株式会社日立製作所 | Method for detecting digital watermark information |
CN100469128C (en) | 2003-10-10 | 2009-03-11 | 皇家飞利浦电子股份有限公司 | Detection of a watermark in a digital signal |
US20070223693A1 (en) | 2004-06-08 | 2007-09-27 | Koninklijke Philips Electronics, N.V. | Compensating Watermark Irregularities Caused By Moved Objects |
CA2674164A1 (en) | 2006-12-28 | 2008-07-17 | Thomson Licensing | Detecting block artifacts in coded images and video |
JP2008258807A (en) | 2007-04-03 | 2008-10-23 | Toshiba Corp | Electronic watermark detector, video reproducing device, video duplicating device and electronic watermark detection program |
US8111929B2 (en) * | 2008-02-29 | 2012-02-07 | Interra Systems Inc. | Method and system to detect and quantify blockiness in video files |
WO2010021691A1 (en) | 2008-08-19 | 2010-02-25 | Thomson Licensing | Luminance evaluation |
WO2010021700A1 (en) | 2008-08-19 | 2010-02-25 | Thomson Licensing | A propagation map |
EP2534839A4 (en) | 2010-02-11 | 2014-06-11 | Thomson Licensing | Method for processing image |
-
2010
- 2010-04-29 EP EP10850438.2A patent/EP2564591A4/en not_active Withdrawn
- 2010-04-29 US US13/642,622 patent/US9076220B2/en not_active Expired - Fee Related
- 2010-04-29 WO PCT/CN2010/000592 patent/WO2011134105A1/en active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1742281A (en) * | 2002-12-27 | 2006-03-01 | 摩托罗拉公司 | Video deblocking method and apparatus |
US20080143739A1 (en) * | 2006-12-13 | 2008-06-19 | Harris Jerry G | Method and System for Dynamic, Luminance-Based Color Contrasting in a Region of Interest in a Graphic Image |
CN101682767A (en) * | 2007-04-09 | 2010-03-24 | 特克特朗尼克公司 | Systems and methods for measuring loss of detail in a video codec block |
CN101605257A (en) * | 2008-06-11 | 2009-12-16 | 北京中创信测科技股份有限公司 | A kind of blocking effect analytical method and system |
Non-Patent Citations (1)
Title |
---|
See also references of EP2564591A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP2564591A1 (en) | 2013-03-06 |
US20130039420A1 (en) | 2013-02-14 |
US9076220B2 (en) | 2015-07-07 |
EP2564591A4 (en) | 2014-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Wang et al. | VideoSet: A large-scale compressed video quality dataset based on JND measurement | |
JP5270573B2 (en) | Method and apparatus for detecting block artifacts | |
TWI804478B (en) | Method of encoding an image including a privacy mask | |
US8315475B2 (en) | Method and apparatus for detecting image blocking artifacts | |
US10134121B2 (en) | Method and system of controlling a quality measure | |
KR101761928B1 (en) | Blur measurement in a block-based compressed image | |
US7957467B2 (en) | Content-adaptive block artifact removal in spatial domain | |
US20140321552A1 (en) | Optimization of Deblocking Filter Parameters | |
US10013772B2 (en) | Method of controlling a quality measure and system thereof | |
JP2015501603A (en) | Adaptive false contour generation prevention in hierarchical coding of images with extended dynamic range | |
EP2564591A1 (en) | Method of processing an image | |
US20100322304A1 (en) | Multi-source filter and filtering method based on h.264 de-blocking | |
Nur Yilmaz | A no reference depth perception assessment metric for 3D video | |
US8724899B2 (en) | Method of processing an image and corresponding device | |
US10652538B2 (en) | Video encoding method and system | |
Yu et al. | A perceptual quality metric based rate-quality optimization of h. 265/hevc | |
CN117201792A (en) | Video encoding method, video encoding device, electronic equipment and computer readable storage medium | |
EP3151189A1 (en) | Method for determining a modifiable block | |
JP2005122571A (en) | Image processor, image processing method, program and storage medium | |
CN112004090A (en) | Target boundary determining method, computer device and storage medium | |
Abate | Detection and measurement of artefacts in digital video frames |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10850438 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010850438 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13642622 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |