US20070140590A1 - Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal - Google Patents


Info

Publication number
US20070140590A1
Authority
US
United States
Prior art keywords
block
difference
pair
pixel
extracted
Prior art date
Legal status
Abandoned
Application number
US11/634,157
Inventor
Shunichi Kimura
Current Assignee
Fujifilm Business Innovation Corp
Original Assignee
Fuji Xerox Co Ltd
Application filed by Fuji Xerox Co Ltd filed Critical Fuji Xerox Co Ltd
Assigned to FUJI XEROX CO., LTD. Assignors: KIMURA, SHUNICHI
Publication of US20070140590A1
Related by priority: US13/064,321 (published as US8135232B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41 Bandwidth or redundancy reduction
    • H04N1/411 Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
    • H04N1/413 Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
    • H04N1/417 Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information using predictive or differential encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40 Picture signal circuits
    • H04N1/409 Edge or detail enhancement; Noise or error suppression
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41 Bandwidth or redundancy reduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/004 Diagnosis, testing or measuring for television systems or their details for digital television systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the invention relates to an image evaluation apparatus for judging image quality of an input image. Particularly, the invention relates to an image evaluation apparatus for estimating block distortion caused by JPEG coding.
  • Degree of deterioration of image quality in decoded image data varies according to the degree of compression (compression ratio, quantization level, etc.) of the image data. Therefore, image processing for correcting such deterioration of image quality may be changed in accordance with the degree of compression.
  • an image evaluation apparatus includes a pixel extraction unit, an intra-pair difference calculation unit, an inter-pair difference calculation unit and an evaluation unit.
  • the pixel extraction unit extracts, from an input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks.
  • the intra-pair difference calculation unit calculates a difference between the extracted pair of block-boundary pixels as a first difference.
  • the intra-pair difference calculation unit calculates a difference between the extracted pair of non-block-boundary pixels as a second difference.
  • the inter-pair difference calculation unit calculates a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region.
  • the evaluation unit evaluates an amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region.
  • FIG. 1 is a diagram showing the hardware configuration of an image processing apparatus 2 to which an image evaluation method according to an exemplary embodiment of the invention is applied, with a controller 20 as its center;
  • FIG. 2 is a diagram showing the functional configuration of an image processing program 5 executed by the controller 20 (FIG. 1);
  • FIGS. 3A and 3B are views showing an extracted pixel region
  • FIG. 4 is a view showing an example of the extracted pixel region, with 32 × 32 pixels, in an input image
  • FIG. 5 is a flow chart of an image processing process
  • FIG. 6 is a graph showing an experimental result of an image evaluation process according to the exemplary embodiment
  • FIG. 7 is a graph of a standard deviation in a flat region
  • FIGS. 8A and 8B are graphs showing experimental results in Modification 1;
  • FIGS. 9A and 9B are views showing extracted pixel regions each extracted in only one direction
  • FIGS. 10A and 10B are graphs showing experimental results in the case where pixels are extracted only transversely
  • FIGS. 11A and 11B are views showing extracted pixel regions each sampled at predetermined intervals
  • FIGS. 12A to 12C are graphs showing experimental results in the case of line sampling
  • FIGS. 13A and 13B are graphs showing experimental results in the case of block sampling
  • FIGS. 14A to 14F are views showing pairs of block-boundary pixels and pairs of non-block-boundary pixels in Modification 3;
  • FIGS. 15A to 15C are views for explaining methods of judging the flat region in Modification 4.
  • FIG. 16 is a view for exemplifying an image on which a clipping is performed
  • FIG. 17 is a view for exemplifying an image in which block-boundary positions are shifted only in the longitudinal direction;
  • FIG. 18 is a view for exemplifying an image in which block-boundary positions are shifted in both of the transverse direction and the longitudinal direction;
  • FIG. 19 is a view for explaining a method for calculating block distortion by changing a shift
  • FIG. 20 is a graph showing an experimental result relating to a modified example 5.
  • FIG. 21 is a diagram showing the functional configuration of the image processing apparatus 2 of the modified example 5.
  • FIG. 22 is a flowchart of the operation of the image processing apparatus 2 of the modified example 5.
  • “a difference between adjacent pixels in the case where a block boundary is between the adjacent pixels” is regarded as a target signal and “a difference between adjacent pixels in the case where no block boundary is between the adjacent pixels” is regarded as a background signal.
  • the exemplary embodiment limits the pixel positions from which the background signal is acquired and the pixel positions from which the target signal is acquired, to ones which are close to each other.
  • the image processing apparatus 2 performs the following processing. Two pixels in a block boundary (i.e. a pair of block-boundary pixels) are extracted and a difference between the two pixels is calculated. Then, two pixels near to the extracted pixels but in a region other than the block boundary are regarded as a pair of non-block-boundary pixels. A pair of such non-block-boundary pixels or pairs of such non-block-boundary pixels are extracted and the difference between each pair of non-block-boundary pixels is calculated.
  • the maximum pixel value and the minimum pixel value among the pairs of block-boundary pixels and the pairs of non-block-boundary pixels are obtained, and a difference between the maximum pixel value and the minimum pixel value is calculated.
  • if this difference is large, the image region is judged to be a random image region. Such an image region is not used for the calculation of distortion (calculation of the average).
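The flatness test described above can be sketched in Python. This is an illustrative sketch; the function names and the threshold value are assumptions, since the patent only states that a threshold is prepared in advance:

```python
def flatness(pixels):
    # H(I): spread between the maximum and minimum gradation values
    # in the extracted pixel region I.
    return max(pixels) - min(pixels)

def is_flat(pixels, th=32):
    # Regions whose spread reaches the threshold are judged to be
    # random image regions and excluded from the distortion average.
    # The threshold value 32 is an arbitrary placeholder.
    return flatness(pixels) < th
```

With this check, only regions whose gradation values stay within a narrow band contribute to the distortion statistics.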
  • FIG. 1 is a diagram showing the hardware configuration of the image processing apparatus 2 to which the image evaluation method according to the exemplary embodiment of the invention is applied, with a controller 20 as its center.
  • the image processing apparatus 2 includes a controller 20, a communication device 220, a storage device 240, and a user interface device (UI device) 230.
  • the controller 20 includes a CPU 212 and a memory 214.
  • the storage device 240 includes an HDD and a CD device.
  • the UI device 230 includes an LCD or CRT display device, and a keyboard or touch panel.
  • the image processing apparatus 2 is provided inside a printer 10.
  • the image processing apparatus 2 acquires image data through the communication device 220 or the storage device 240 and corrects deterioration of image quality caused by a coding process on the basis of the acquired image data.
  • FIG. 2 is a diagram showing the functional configuration of an image processing program 5 executed by the controller 20 (FIG. 1).
  • the image processing program 5 includes an image evaluation unit 500 and an image correction unit 600.
  • the image evaluation unit 500 implements the image evaluation method according to the exemplary embodiment of the invention.
  • the image correction unit 600 corrects image data on the basis of a result of evaluation made by the image evaluation unit 500 .
  • the image evaluation unit 500 includes a pixel-pair extraction section 510, a flat-region judgment section 520, a block-boundary difference acquisition section 530, a non-block-boundary difference acquisition section 540, an extracted-region distortion calculation section 550, and a total image distortion calculation section 560.
  • the pixel-pair extraction section 510 divides the input image data into 8 × 8 pixel blocks and extracts pairs of block-boundary pixels and pairs of non-block-boundary pixels from the divided pixel blocks.
  • specifically, the pixel-pair extraction section 510 extracts eight pairs of block-boundary pixels from each pair of adjacent 8 × 8 pixel blocks.
  • the pixel-pair extraction section 510 of this example extracts, from adjacent two blocks, four pixels a, b, c and d as a pair of block-boundary pixels and pairs of non-block-boundary pixels. That is, the pixel-pair extraction section 510 extracts the pair of block-boundary pixels and the pairs of non-block-boundary pixels simultaneously.
  • the pair of block-boundary pixels consists of the pixels b and c.
  • one pair of non-block-boundary pixels consists of the pixels a and b.
  • another pair of non-block-boundary pixels consists of the pixels c and d.
  • the pixels b and c are opposed to each other across the boundary between the adjacent two blocks. On the other hand, there is no boundary between the pixels a and b or between the pixels c and d.
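For a horizontal run of pixels crossing a vertical block boundary, the layout of a, b, c and d can be illustrated as follows (a sketch; the function name and indexing convention are assumptions, not from the patent):

```python
BLOCK = 8  # JPEG coding block size

def extract_region(row, boundary_col):
    # Pixels b and c straddle the boundary between columns
    # boundary_col-1 and boundary_col; a and d are their outer
    # neighbors, one inside each of the two adjacent blocks.
    a = row[boundary_col - 2]
    b = row[boundary_col - 1]
    c = row[boundary_col]
    d = row[boundary_col + 1]
    return a, b, c, d
```

For a 16-pixel row spanning two 8-pixel blocks, the boundary lies at column 8, so the extracted pixels are those at columns 6 through 9.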
  • FIG. 4 shows the pairs of block-boundary pixels and the pairs of non-block-boundary pixels that are extracted in the case where the input image has a size of 32 × 32.
  • let I denote an extracted pixel region including the pixels a, b, c and d.
  • let E(I) be an amount of block distortion of the extracted pixel region I. A method of obtaining E(I) will be described below.
  • the non-block-boundary difference acquisition section 540 calculates a difference N(I) between a pair of non-block-boundary pixels according to the following expression.
  • N(I) = {abs(a − b) + abs(c − d)}/2
  • N(I) is a function for calculating an average of absolute values of differences between the pairs of non-block-boundary pixels in the extracted pixel region I.
  • the block-boundary difference acquisition section 530 calculates a difference B(I) between the pair of block-boundary pixels (the absolute difference abs(b − c)), and the extracted-region distortion calculation section 550 calculates the block distortion E(I) of the extracted region I according to the following expression.
  • E(I) = B(I) − N(I)
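Putting the two expressions together gives the following sketch (the definition of B(I) as abs(b − c) is inferred from the description of the first difference; it is not quoted verbatim in this excerpt):

```python
def B(a, b, c, d):
    # Block-boundary difference: pixels b and c straddle the boundary.
    return abs(b - c)

def N(a, b, c, d):
    # Average of the absolute differences of the two non-boundary pairs.
    return (abs(a - b) + abs(c - d)) / 2

def E(a, b, c, d):
    # Block distortion of the extracted pixel region I.
    return B(a, b, c, d) - N(a, b, c, d)
```

A step between b and c that is not matched by comparable steps inside the blocks yields a large E(I); a smooth gradient across all four pixels yields a value near zero.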
  • a threshold TH1 is prepared in advance.
  • the total image distortion calculation section 560 selects extracted regions I having flatness H(I) smaller than the threshold TH1 and calculates the average (mean(E)) and the standard deviation (std(E)) using the selected extracted regions I. This is for the purpose of calculating block distortion using only flat regions.
  • sqrt(x) is a function for obtaining square root of x.
  • the index I denotes an extracted pixel region.
  • S denotes a running sum of the amounts of block distortion E(I).
  • S2 denotes a running sum of the squares of E(I).
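The average and standard deviation can then be recovered from NumI, S and S2 without storing the individual E(I) values, using the standard running-sum identities (a sketch; the function name is an assumption):

```python
import math

def mean_std_from_sums(S, S2, num):
    # mean(E) = S/NumI; std(E) = sqrt(S2/NumI - mean(E)^2),
    # where S and S2 are accumulated over the selected flat regions.
    mean = S / num
    var = S2 / num - mean * mean
    return mean, math.sqrt(max(var, 0.0))
```

The max(var, 0.0) guard only absorbs tiny negative values caused by floating-point rounding.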
  • pixel-pair extraction section 510 extracts a pixel region I
  • the flat-region judgment section 520 calculates H(I) based on the extracted pixel region I (step S 102 ).
  • the total image distortion calculation section 560 determines whether or not H(I) calculated by the flat-region judgment section 520 is smaller than TH1 (step S103).
  • if the total image distortion calculation section 560 determines that H(I) is smaller than TH1 (Yes in step S103), it increments NumI by 1, calculates E(I) in the above-described manner, adds E(I) to S and also adds E(I) × E(I) to S2 (step S104). If the total image distortion calculation section 560 determines that H(I) is equal to or larger than TH1 (No in step S103), the process jumps to step S105. In step S105, the total image distortion calculation section 560 increments I by 1. Then, the total image distortion calculation section 560 determines whether or not I is equal to MaxI (step S106).
  • mean(E), i.e., an average of E(I)
  • std(E), i.e., a standard deviation of E(I)
  • the image evaluation unit 500 decides that it is impossible to obtain block distortion BN of input image data.
  • since the image evaluation process performed by the image processing apparatus 2 detects that the images No. 15 and No. 16 are small in block distortion, the image evaluation process operates well.
  • FIG. 6 is a graph showing block distortions detected from the images No. 1 to No. 33 by the image processing apparatus 2 .
  • block distortions of the images No. 15 and No. 16 are particularly smaller than those of the other images. This agrees with the evaluation result based on eye observation. That is, it is obvious that the image processing apparatus 2 according to this exemplary embodiment can evaluate block distortion appropriately.
  • the amount of block distortion BN of the input image data is calculated in such a manner that the average of E(I) (mean(E)) is divided by the standard deviation of E(I) (std(E)).
  • the reason why the average of E(I) is divided by the standard deviation of E(I) is to normalize the average of E(I). That is, in an image having E(I) varying widely, it is conceived that the value of mean(E) is insignificant. On the other hand, in an image having E(I) varying narrowly, it is conceived that the value of mean(E) is significant.
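This normalization can be sketched as follows (the handling of cases where BN cannot be obtained is an assumption; the patent only states that the evaluation is sometimes impossible):

```python
import statistics

def total_block_distortion(e_values):
    # BN = mean(E) / std(E): a widely varying E(I) (large std) makes
    # the mean less significant and so lowers the evaluation value.
    if len(e_values) < 2:
        return None  # evaluation impossible
    mean = statistics.fmean(e_values)
    std = statistics.pstdev(e_values)
    if std == 0.0:
        return None  # evaluation impossible
    return mean / std
```

Two images with the same mean(E) thus receive different BN values when one has much noisier per-region distortions than the other.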
  • the pixel-pair extraction section 510 extracts only transverse pixel pairs as shown in FIG. 9A . That is, the pixel-pair extraction section 510 of the modification 2 extracts pixel regions I each of which contains a boundary between adjacent two blocks and has pixels arranged in the same row.
  • FIG. 10 shows an experimental result in this case.
  • FIG. 10A shows the experimental result of the modification 2 of the exemplary embodiment (plotted in the same manner as FIG. 6).
  • block distortion can be detected with sufficient accuracy even if only the transverse pixel pairs are used.
  • the pixel-pair extraction section 510 may extract only longitudinal pixel pairs as shown in FIG. 9B . That is, the pixel-pair extraction section 510 may extract pixel regions I each of which contains a boundary between adjacent two blocks and has pixels arranged in the same column.
  • FIG. 10B shows another experimental result of the modification 2 of the exemplary embodiment (plotted in the same manner as FIG. 6). As is obvious from reference to FIG. 10B, block distortion can be detected with sufficient accuracy even if only the longitudinal pixel pairs are used.
  • the pixel-pair extraction section 510 may extract pixel pairs (pixel regions I) so as to sample the pixel pairs (pixel regions I) at intervals of several lines (several rows) as shown in FIG. 11A .
  • FIG. 11A is equivalent to the case where the pixel-pair extraction section 510 samples the pixel pairs (pixel regions I) only transversely and Nr is equal to 2.
  • FIGS. 12A to 12C show experimental results in this case.
  • FIG. 12A is equivalent to the case where the pixel-pair extraction section 510 extracts all transverse pixel pairs (pixel regions I).
  • FIG. 12B is equivalent to the case where the pixel-pair extraction section 510 samples transverse pixel pairs (pixel regions I) at intervals of two pixels.
  • FIG. 12C is equivalent to the case where the pixel-pair extraction section 510 samples transverse pixel pairs (pixel regions I) at intervals of four pixels.
  • the pixel-pair extraction section 510 may sample processing blocks as shown in FIG. 11B. Specifically, the pixel-pair extraction section 510 may sample only one block from Nb × Nb blocks. Incidentally, FIG. 11B is equivalent to the case where the pixel-pair extraction section 510 performs transverse sampling, Nr is equal to 4 and Nb is equal to 2.
  • FIGS. 13A and 13B show experimental results in this case.
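The line sampling and block sampling of this modification can be combined in one sketch (the function name and parameter conventions are assumptions; nr = nb = 1 reproduces exhaustive transverse extraction):

```python
def sampled_boundary_positions(width, height, block=8, nr=1, nb=1):
    # Line sampling: take every nr-th row.
    # Block sampling: take one boundary column out of every nb
    # horizontally adjacent block boundaries.
    positions = []
    for row in range(0, height, nr):
        for col in range(block, width, block * nb):
            # (row, col) marks the boundary between columns col-1 and col.
            positions.append((row, col))
    return positions
```

Each returned position can then be fed to the pair-extraction step; larger nr and nb trade accuracy for less computation, which is the point of this modification.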
  • although the exemplary embodiment has shown the case where the pixels a, b, c and d shown in FIG. 3B are extracted as a pair of block-boundary pixels and pairs of non-block-boundary pixels (pixel region I), the pattern of the pairs of non-block-boundary pixels is not limited to the exemplary embodiment. Other patterns of pairs of non-block-boundary pixels corresponding to a pair of block-boundary pixels b and c will be described below.
  • the pixel-pair extraction section 510 may use pixels a and b as a pair of non-block-boundary pixels.
  • the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels (pixels b and c) and a single pair of non-block-boundary pixels (pixels a and b). In this case, it is not necessary to acquire an average of differences between non-block-boundary pixels.
  • the pixel-pair extraction section 510 may use pixels a and d as a pair of non-block-boundary pixels. It is not necessary that the pair of non-block-boundary pixels contain the pixel b. In other words, the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels (b and c) and a single pair of non-block-boundary pixels (pixels a and d). Although FIG. 14B shows the case where pixels d and b are not adjacent to each other, these pixels may be adjacent to each other. In that case, the pixel-pair extraction section 510 does not treat a pair of pixels d and b as a pair of non-block-boundary pixels.
  • the pixel-pair extraction section 510 may extract plural pairs of non-block-boundary pixels (a pair of pixels a 1 and d 1 and a pair of pixels a 2 and d 2 ), which are separate from a pair of block-boundary pixels (a pair of pixels b and c).
  • the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels and plural pairs of non-block-boundary pixels, which are separate from the pair of block-boundary pixels.
  • the pixel-pair extraction section 510 may regard pixels a to e (a pixel region including the pixels a to e) as one including three pairs of non-block-boundary pixels: “a and d”, “d and e” and “e and b”.
  • the pixel-pair extraction section 510 may extract a pair of block-boundary pixels and a pair of non-block-boundary pixels from different lines (rows), respectively.
  • the pixel-pair extraction section 510 may extract a pixel region including plural pixels of different lines (rows).
  • a relative positional relation between the pair of block-boundary pixels is the same as that between the pair of non-block-boundary pixels. For example, as shown in FIG. 14E , since pixels b and c are transversely adjacent to each other, pixels a and d are transversely adjacent to each other.
  • the relative positional relation between a pair of pixels (e.g., pixels b and c) may be rotated by 90 degrees from that between the other pair of pixels (e.g., pixels a and d).
  • the flat-region judgment section 520 sets the difference between a maximum pixel value and a minimum pixel value in the extracted pixel region I, as H(I).
  • Another method may be, however, used as a method for judging the flat region (non-edge region).
  • the difference between a maximum pixel value and a minimum pixel value in a region having pairs of non-block-boundary pixels may be set as H(I).
  • the flat-region judgment section 520 may calculate the difference between the maximum pixel value and the minimum pixel value in the pixel regions contained in a single block, as H(I). Since two blocks are adjacent to the boundary, two such differences between the maximum pixel value and the minimum pixel value can be calculated. The larger of the two differences may be set as H(I).
  • H(I) is obtained as follows.
  • H(I) may be, not the largest value of the differences, but the average of the differences, as follows.
  • the variance or standard deviation of pixel values in the extracted pixel region I may be set as H(I).
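The alternative flatness measures of this modification can be sketched together (the function name and return structure are assumptions):

```python
import statistics

def flatness_variants(block1, block2):
    # block1/block2: pixel values of the two blocks adjacent to the
    # boundary. Variants of H(I): the larger of the per-block spreads,
    # the average of the spreads, or the variance of all pixel values.
    s1 = max(block1) - min(block1)
    s2 = max(block2) - min(block2)
    return {
        "max_of_spreads": max(s1, s2),
        "mean_of_spreads": (s1 + s2) / 2,
        "variance": statistics.pvariance(block1 + block2),
    }
```

Whichever variant is chosen, a region is used for the distortion average only when its H(I) falls below the threshold TH1.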
  • in the exemplary embodiment, the degree of distortion in units of blocks each having 8 pixels by 8 pixels is calculated as an amount of block distortion, based on the assumption that the block positions of the JPEG coding process are known.
  • Examples of the case where the block positions are unknown may include the case where a clipping process is performed as shown in FIG. 16 and the case where a rotation process is performed.
  • FIG. 17 is a view exemplifying an image in which blocks of the coding process are shifted in the longitudinal direction. First, description will be given of the case where the block boundary is shifted in the longitudinal direction as shown in FIG. 17.
  • the image processing apparatus 2 calculates a block distortion based on a gradation difference among four pixels (a, b, c, d) arranged in the transverse direction over a block boundary (e.g. absolute values of differences between respective pairs of adjacent pixels, or an amount of block distortion of a pixel region which includes the four pixels (a, b, c, d) and includes a block boundary). Therefore, a shift of the block positions in the longitudinal direction of the image does not substantially affect the calculation of the amount of block distortion. For example, as shown in FIG. 17, even if the block positions are shifted in the longitudinal direction, the positions of the four pixels used in the calculation are merely shifted in the longitudinal direction and still include the block boundary. Therefore, the block distortion can be detected normally.
  • FIG. 18 exemplifies an image in which blocks of the coding process are also shifted in the transverse direction.
  • the block boundary is shifted in the transverse direction as shown in FIG. 18 .
  • in this case, the four pixels (a, b, c, d) arranged in the transverse direction may not include a block boundary. Therefore, block distortion may not be calculated correctly by the above-described method.
  • the image processing apparatus 2 of the modification 5 has the configuration shown in FIG. 21 .
  • FIG. 21 is similar to FIG. 1, but differs in that specific components of the CPU 212 are shown in FIG. 21.
  • the CPU 212 includes a block setting unit 212a, a difference evaluation unit 212b, a control unit 212c and an evaluation value generating unit 212d.
  • the control unit 212c controls the block setting unit 212a, the difference evaluation unit 212b and the evaluation value generating unit 212d.
  • the image processing apparatus 2 of the modification 5 calculates amounts of block distortion of plural types of blocks, which are different in phase or size, of an image. Then, the image processing apparatus 2 evaluates an amount of block distortion of an input image based on the calculated plural amounts of block distortion. In this example, the image processing apparatus 2 calculates amounts of block distortion while shifting a start position (in total 8 positions) in the transverse direction, and adopts the maximum value.
  • the difference evaluation unit 212b calculates the block distortion E(I, 1) of each extracted pixel region I. Then, the difference evaluation unit 212b calculates the amount of block distortion BN(F, 1) of an entire image F based on the block distortions E(I, 1) by the method described in the exemplary embodiment, and stores the calculated amount BN(F, 1). In the image F, the block positions of the coding process are unknown; thus, as shown in FIG. 18, the extracted pixel regions may not actually contain block boundaries. Even in that case, this does not matter to the image processing apparatus 2.
  • the block setting unit 212 a shifts the start point of block distortion calculation by one pixel in the right direction.
  • the difference evaluation unit 212 b calculates the block distortion E(I, 2) of each extracted region I.
  • the difference evaluation unit 212 b calculates and stores an amount of block distortion BN (F, 2) of the entire image F in a similar manner.
  • the image processing apparatus 2 calculates amounts of block distortion BN(F, i) of the entire image F while the block setting unit 212a shifts the start position in the right direction one pixel at a time.
  • the maximum value of the thus calculated BN(F, i) is expressed as BN(F).
  • the evaluation value generating unit 212 d judges that the block boundary is located in a position, which gives the maximum value BN(F).
  • a position of each block can be expressed as a “phase.” That is, the shift of the start position of the block distortion calculation can be expressed as a phase. For example, if the shift is equal to the block size (that is, 8 pixels), the phase is equal to 2π radians. Also, if the shift is equal to a half of the block size (that is, 4 pixels), the phase is equal to π radians.
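The correspondence between a start-position shift and a phase can be written directly from the example (the function name is an assumption):

```python
import math

def shift_to_phase(shift, block=8):
    # A shift equal to the full block size corresponds to 2*pi radians;
    # half the block size corresponds to pi. A shift of 8 wraps to 0,
    # since 2*pi and 0 denote the same phase.
    return 2.0 * math.pi * (shift % block) / block
```

This is why only eight distinct start positions need to be examined for an 8-pixel block grid.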
  • one of the calculated amounts of block distortion should match the true one.
  • the calculated amount of block distortion becomes larger as the JPEG block distortion becomes larger.
  • it is rare that factors other than JPEG block distortion cause a large boundary difference appearing at eight-pixel intervals. From these properties, it is reasonable that, if the maximum of the eight calculated values is larger than a certain level, the maximum value is regarded as the amount of block distortion.
  • if the maximum of the eight calculated values is smaller than a certain level, it is difficult to judge precisely whether or not that maximum is an amount of block distortion. However, even if such a maximum value is used as an amount of block distortion, no problem arises because the value is relatively small. Accordingly, the maximum of the eight calculated values may be used as the amount of block distortion regardless of its magnitude.
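Adopting the maximum over the eight start positions can be sketched as follows (the handling of positions where BN could not be obtained is an assumption):

```python
def adopt_maximum(bn_per_shift):
    # bn_per_shift: BN(F, i) for shifts i = 0..7, with None where the
    # evaluation was impossible. Returns (BN(F), estimated shift),
    # the shift being the assumed block-boundary phase.
    best_shift, best_value = None, None
    for shift, value in enumerate(bn_per_shift):
        if value is None:
            continue
        if best_value is None or value > best_value:
            best_shift, best_value = shift, value
    return best_value, best_shift
```

The shift that yields the maximum also serves as the estimate of where the coding-block boundary actually lies.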
  • in an example of the modification 5, edge portions in the surroundings of each of the 33 test images are clipped by three pixels. Then, an amount of block distortion is calculated by the above-described method of adopting the maximum value. As shown in FIG. 20, the amount of block distortion can be calculated with no large difference between the clipped case and the non-clipped case.
  • the image processing apparatus 2 may estimate the block-boundary positions of the coding process by calculating the amount of block distortion with plural phases and comparing the calculated amounts of block distortion with each other.
  • the amount of block distortion is a feature value that takes a large value in a highly compressed JPEG image. Therefore, the maximum value often stands out among the eight values calculated from the eight positions. On the other hand, if the maximum of the eight values does not stand out, there is a possibility that the maximum is a value obtained by coincidence from the structure of the image rather than an amount of block distortion. Therefore, the image processing apparatus 2 may calculate the amounts of block distortion with the plural phases, compare them with each other and judge whether or not it is necessary to remove the block distortion.
  • the image processing apparatus 2 calculates amounts of block distortion of blocks having plural sizes and compares the amounts of block distortion with each other, to thereby calculate the true amount of block distortion.
  • the difference evaluation unit 212b determines whether or not H(I, i) is smaller than TH1 (step S203). If the difference evaluation unit 212b determines that H(I, i) is smaller than TH1 (Yes in step S203), it increments NumI by 1, calculates E(I, i) in the above-described manner, adds E(I, i) to S and also adds E(I, i) × E(I, i) to S2 (step S204). If the difference evaluation unit 212b determines that H(I, i) is equal to or larger than TH1 (No in step S203), the process jumps to step S205.
  • in step S205, the difference evaluation unit 212b increments I by 1. Then, the difference evaluation unit 212b determines whether or not I is equal to MaxI (step S206). If the difference evaluation unit 212b determines that I is not equal to MaxI, that is, less than MaxI (No in step S206), the process returns to step S202.
  • mean(E, i), i.e., an average of E(I, i)
  • std(E, i), i.e., a standard deviation of E(I, i)
  • The control unit 212 c decides that it is impossible to obtain the block distortion BN(F, i).
  • The flat-region judgment section 520 is not essential to the exemplary embodiment.
  • The total image distortion calculation section 560 may calculate the median of E(I), the mode of E(I) or the sum of E(I) (the sum can be used only when the size of the image is constant and the flat pixel region judgment is not made) instead of the average of E(I).

Abstract

An image evaluation apparatus includes a pixel extraction unit, an intra-pair difference calculation unit, an inter-pair difference calculation unit and an evaluation unit. The pixel extraction unit extracts, from an input image, a pixel region including a pair of block-boundary pixels in a boundary position of coding blocks and a pair of non-block-boundary pixels in a position other than the boundary position. The intra-pair difference calculation unit calculates a difference between the extracted pair of block-boundary pixels as a first difference, and a difference between the extracted pair of non-block-boundary pixels as a second difference. The inter-pair difference calculation unit calculates a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region. The evaluation unit evaluates an amount of block distortion of the input image based on the calculated amount of block distortion of the extracted pixel region.

Description

    TECHNICAL FIELD
  • The invention relates to an image evaluation apparatus for judging image quality of an input image. Particularly, the invention relates to an image evaluation apparatus for estimating block distortion caused by JPEG coding.
  • DESCRIPTION OF THE RELATED ART
  • The degree of deterioration of image quality in decoded image data varies according to the degree of compression (compression ratio, quantization level, etc.) applied to the image data. Therefore, image processing for correcting such deterioration of image quality may be changed in accordance with the degree of compression.
  • SUMMARY
  • According to an aspect of the invention, an image evaluation apparatus includes a pixel extraction unit, an intra-pair difference calculation unit, an inter-pair difference calculation unit and an evaluation unit. The pixel extraction unit extracts, from an input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks. The intra-pair difference calculation unit calculates a difference between the extracted pair of block-boundary pixels as a first difference. The intra-pair difference calculation unit also calculates a difference between the extracted pair of non-block-boundary pixels as a second difference. The inter-pair difference calculation unit calculates a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region. The evaluation unit evaluates an amount of block distortion of the input image on the basis of the calculated amount of block distortion of the extracted pixel region.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will be described in detail based on the following figures, wherein:
  • FIG. 1 is a diagram showing the hardware configuration of an image processing apparatus 2 to which an image evaluation method according to an exemplary embodiment of the invention is applied, with a controller 20 as its center;
  • FIG. 2 is a diagram showing the functional configuration of an image processing program 5 executed by the controller 20 (FIG. 1);
  • FIGS. 3A and 3B are views showing an extracted pixel region;
  • FIG. 4 is a view showing an example of the extracted pixel region with 32×32 pixels, in an input image;
  • FIG. 5 is a flow chart of an image processing process;
  • FIG. 6 is a graph showing an experimental result of an image evaluation process according to the exemplary embodiment;
  • FIG. 7 is a graph of a standard deviation in a flat region;
  • FIGS. 8A and 8B are graphs showing experimental results in Modification 1;
  • FIGS. 9A and 9B are views showing extracted pixel regions each extracted in only one direction;
  • FIGS. 10A and 10B are graphs showing experimental results in the case where pixels are extracted only transversely;
  • FIGS. 11A and 11B are views showing extracted pixel regions each sampled at predetermined intervals;
  • FIGS. 12A to 12C are graphs showing experimental results in the case of line sampling;
  • FIGS. 13A and 13B are graphs showing experimental results in the case of block sampling;
  • FIGS. 14A to 14F are views showing pairs of block-boundary pixels and pairs of non-block-boundary pixels in Modification 3;
  • FIGS. 15A to 15C are views for explaining methods of judging the flat region in Modification 4;
  • FIG. 16 is a view for exemplifying an image on which a clipping is performed;
  • FIG. 17 is a view for exemplifying an image in which block-boundary positions are shifted only in the longitudinal direction;
  • FIG. 18 is a view for exemplifying an image in which block-boundary positions are shifted in both of the transverse direction and the longitudinal direction;
  • FIG. 19 is a view for explaining a method for calculating block distortion by changing a shift;
  • FIG. 20 is a graph showing an experimental result relating to a modified example 5;
  • FIG. 21 is a diagram showing the functional configuration of the image processing apparatus 2 of the modified example 5; and
  • FIG. 22 is a flowchart of the operation of the image processing apparatus 2 of the modified example 5.
  • DETAILED DESCRIPTION
  • In an image processing apparatus 2 according to an exemplary embodiment, “a difference between adjacent pixels in the case where a block boundary is between the adjacent pixels” is regarded as a target signal, and “a difference between adjacent pixels in the case where no block boundary is between the adjacent pixels” is regarded as a background signal. Also, the exemplary embodiment limits the pixel positions from which the background signal is acquired and the pixel positions from which the target signal is acquired to ones that are close to each other.
  • More specifically, the image processing apparatus 2 performs the following processing. Two pixels straddling a block boundary (i.e. a pair of block-boundary pixels) are extracted and a difference between the two pixels is calculated. Then, two pixels near the extracted pixels but in a region other than the block boundary are regarded as a pair of non-block-boundary pixels. One or more pairs of such non-block-boundary pixels are extracted and the difference between each pair of non-block-boundary pixels is calculated.
  • Then, an average of the differences between the pairs of non-block-boundary pixels is calculated. Then, differences between the differences between the pairs of block-boundary pixels and the average of the differences between the pairs of non-block-boundary pixels are calculated. Finally, an average of absolute values of the differences is regarded as block distortion.
  • Incidentally, for calculation of the average, the maximum pixel value and the minimum pixel value among the pairs of block-boundary pixels and the pairs of non-block-boundary pixels are obtained, and a difference between the maximum pixel value and the minimum pixel value is calculated. When the difference between the maximum pixel value and the minimum pixel value is large, an image region is judged to be a random image region. Such an image region is not used for calculation of distortion (calculation of the average).
  • [Hardware Configuration]
  • The hardware configuration of the image processing apparatus 2 (image evaluation apparatus) according to this exemplary embodiment will be described first.
  • FIG. 1 is a diagram showing the hardware configuration of the image processing apparatus 2 to which the image evaluation method according to the exemplary embodiment of the invention is applied, with a controller 20 as its center.
  • As shown in FIG. 1, the image processing apparatus 2 includes a controller 20, a communication device 220, a storage device 240, and a user interface device (UI device) 230. The controller 20 includes a CPU 212, and a memory 214. The storage device 240 includes an HDD, and a CD device. The UI device 230 includes an LCD or CRT display device, and a keyboard or touch panel.
  • For example, the image processing apparatus 2 is provided in the inside of a printer 10. The image processing apparatus 2 acquires image data through the communication device 220 or the storage device 240 and corrects deterioration of image quality caused by a coding process on the basis of the acquired image data.
  • [Image Processing Program]
  • FIG. 2 is a diagram showing the functional configuration of an image processing program 5 executed by the controller 20 (FIG. 1).
  • As shown in FIG. 2, the image processing program 5 includes an image evaluation unit 500, and an image correction unit 600. The image evaluation unit 500 implements the image evaluation method according to the exemplary embodiment of the invention. The image correction unit 600 corrects image data on the basis of a result of evaluation made by the image evaluation unit 500.
  • The image evaluation unit 500 includes a pixel-pair extraction section 510, a flat-region judgment section 520, a block-boundary difference acquisition section 530, a non-block-boundary difference acquisition section 540, an extracted-region distortion calculation section 550, and a total image distortion calculation section 560.
  • When image data is input to the pixel-pair extraction section 510, the pixel-pair extraction section 510 divides the input image data into 8×8 pixel blocks and extracts pairs of block-boundary pixels and pairs of non-block-boundary pixels from the divided pixel blocks.
  • For example, as shown in FIG. 3A, the pixel-pair extraction section 510 divides the input image into 8×8 pixel blocks and extracts eight pairs of block-boundary pixels from each pair of adjacent 8×8 pixel blocks.
  • As shown in FIG. 3B, the pixel-pair extraction section 510 of this example extracts, from two adjacent blocks, four pixels a, b, c and d as a pair of block-boundary pixels and pairs of non-block-boundary pixels. That is, the pixel-pair extraction section 510 extracts the pair of block-boundary pixels and the pairs of non-block-boundary pixels simultaneously. In the example shown in FIG. 3B, the pair of block-boundary pixels consists of the pixels b and c. One pair of non-block-boundary pixels consists of the pixels a and b. Another pair of non-block-boundary pixels consists of the pixels c and d. The pixels b and c are opposed to each other across the boundary between the two adjacent blocks. On the other hand, there is no boundary between the pixels a and b or between the pixels c and d.
  • FIG. 4 shows pairs of block-boundary pixels and pairs of non-block-boundary pixels, which are extracted in the case where the input image has a size of 32×32. When the input image has a size of 32×32, pixel regions shown in FIG. 4 are extracted.
  • Incidentally, it is not necessary to extract the pixel regions at once. For processing, four pixels a, b, c and d as shown in FIG. 3B may be extracted at once. After a process of extracting such four pixels is completed, a process of extracting next four pixels may be performed.
  • For the sake of convenience of description, the pixel regions each containing four pixels are numbered by I (I=1, 2, . . . , MaxI). Each four-pixel region is regarded as an extracted pixel region I. In the following description, a, b, c and d represent pixel values of pixels a, b, c and d. Let E(I) be an amount of block distortion of the extracted pixel region I. A method of obtaining E(I) will be described below.
  • The flat-region judgment section 520 (FIG. 2) calculates flatness H(I) of the extracted pixel region I according to the following expression:
    H(I)=max(a,b,c,d)−min(a,b,c,d)
    where max(x0, x1, . . . ) is a function for calculating the maximum value in x0, x1, . . . , and min(x0, x1, . . . ) is a function for calculating the minimum value in x0, x1, . . . .
  • The block-boundary difference acquisition section 530 (FIG. 2) calculates a block-boundary difference B(I) according to the following expression:
    B(I)=abs(b−c)
    where abs(x) is a function for calculating an absolute value of x.
  • The non-block-boundary difference acquisition section 540 calculates a difference N(I) between a pair of non-block-boundary pixels according to the following expression.
    N(I)={abs(a−b)+abs(c−d)}/2
    In other words, N(I) is the average of the absolute values of the differences between the pairs of non-block-boundary pixels in the extracted pixel region I.
  • The extracted-region distortion calculation section 550 calculates the block distortion E(I) of the extracted region I according to the following expression.
    E(I)=B(I)−N(I)
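The quantities H(I), B(I), N(I) and E(I) defined above can be sketched as follows for a single four-pixel region (a, b, c, d) of FIG. 3B. This is a minimal illustration, not the patent's reference implementation, and the function name is ours.

```python
# Illustrative sketch of the per-region quantities defined above.
# (a, b, c, d) are the pixel values of FIG. 3B: b and c straddle the
# block boundary; (a, b) and (c, d) are the non-block-boundary pairs.

def region_metrics(a, b, c, d):
    H = max(a, b, c, d) - min(a, b, c, d)   # flatness H(I)
    B = abs(b - c)                          # block-boundary difference B(I)
    N = (abs(a - b) + abs(c - d)) / 2       # non-block-boundary difference N(I)
    E = B - N                               # block distortion E(I) = B(I) - N(I)
    return H, B, N, E
```

For a region whose only jump is at the boundary, e.g. region_metrics(10, 10, 14, 14), the whole boundary difference B(I)=4 is attributed to distortion, giving E(I)=4; for a uniform gradient such as (0, 4, 8, 12), the boundary jump is fully explained by its neighbors and E(I)=0.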
  • The total image distortion calculation section 560 calculates block distortion BN of the whole image according to the following expression:
    BN=mean(E)/std(E)
    where mean(E) is a function for calculating an average of E(I), and std(E) is a function for calculating standard deviation of E(I).
  • A threshold TH1 is prepared in advance. The total image distortion calculation section 560 selects extracted regions I having flatness H(I) smaller than the threshold TH1 and calculates the average (mean(E)) and the standard deviation (std(E)) using the selected extracted regions I. This is for the purpose of calculating block distortion with only flat regions.
  • [Operation]
  • The operation of the image evaluation unit 500 follows the flow chart shown in FIG. 5. In FIG. 5, sqrt(x) is a function for obtaining the square root of x.
  • In step S101, variables are initialized. Specifically, NumI=0, I=1, S=0 and S2=0. NumI denotes the number of extracted pixel regions having flatness H(I) smaller than TH1. The index I denotes an extracted pixel region. S denotes a sum of the amounts of block distortion E(I). S2 denotes a sum of squares of the amounts of block distortion. Then, the pixel-pair extraction section 510 extracts a pixel region I, and the flat-region judgment section 520 calculates H(I) based on the extracted pixel region I (step S102). The total image distortion calculation section 560 determines whether or not H(I) calculated by the flat-region judgment section 520 is smaller than TH1 (step S103). If the total image distortion calculation section 560 determines that H(I) is smaller than TH1 (Yes in step S103), the total image distortion calculation section 560 increments NumI by 1, calculates E(I) in the above-described manner, adds E(I) to S and also adds E(I)×E(I) to S2 (step S104). If the total image distortion calculation section 560 determines that H(I) is equal to or larger than TH1 (No in step S103), the process jumps to step S105. In the step S105, the total image distortion calculation section 560 increments I by 1. Then, the total image distortion calculation section 560 determines whether or not I is equal to MaxI (step S106). If the total image distortion calculation section 560 determines that I is not equal to MaxI, that is, is less than MaxI (No in step S106), the process returns to the step S102. If the total image distortion calculation section 560 determines that I is equal to MaxI (Yes in step S106), the total image distortion calculation section 560 calculates mean(E) (i.e., an average of E(I)) by dividing S by NumI, and calculates std(E) (i.e., a standard deviation of E(I)) by using the following expression:
    std(E)=sqrt(S2/NumI−mean(E)×mean(E))
    Then, the total image distortion calculation section 560 calculates a block distortion BN of the entire image by dividing mean(E) by std(E) (step S107).
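The flow of steps S101 to S107 can be sketched as follows, under the assumption that the extracted pixel regions are supplied as a list of (a, b, c, d) tuples; the function name, the TH1 argument and the guard against a zero standard deviation are our additions.

```python
import math

def block_distortion_BN(regions, TH1):
    """Sketch of FIG. 5 (steps S101-S107) for a list of (a, b, c, d) regions."""
    NumI, S, S2 = 0, 0.0, 0.0                      # step S101: initialization
    for a, b, c, d in regions:                     # loop over regions I
        H = max(a, b, c, d) - min(a, b, c, d)      # step S102: flatness H(I)
        if H < TH1:                                # step S103: use flat regions only
            E = abs(b - c) - (abs(a - b) + abs(c - d)) / 2   # E(I) = B(I) - N(I)
            NumI += 1                              # step S104: accumulate sums
            S += E
            S2 += E * E
    if NumI == 0:
        return None                                # BN cannot be obtained
    mean_E = S / NumI                              # step S107
    std_E = math.sqrt(S2 / NumI - mean_E ** 2)
    return mean_E / std_E if std_E > 0 else mean_E
```

A non-flat region (large H) is simply skipped, so an image with no flat region yields no BN value, matching the NumI=0 case below.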
  • If H(I)≧TH1 in all regions I (i.e., NumI=0), the image evaluation unit 500 decides that it is impossible to obtain block distortion BN of input image data.
  • [Experimental Result]
  • An experimental result of the image evaluation process made by the image processing apparatus 2 will be described below.
  • Thirty-three images (No. 1 to No. 33) are used in the experiment.
  • When the experimental images No. 1 to No. 33 are checked visually, block distortions of the images No. 15 and No. 16 appear to be smaller than those of the other images.
  • Therefore, if the image evaluation process performed by the image processing apparatus 2 detects that the images No. 15 and No. 16 are small in block distortion, the image evaluation process can be said to operate well.
  • FIG. 6 is a graph showing block distortions detected from the images No. 1 to No. 33 by the image processing apparatus 2.
  • As is obvious from FIG. 6, block distortions of the images No. 15 and No. 16 are particularly smaller than those of the other images. This agrees with the evaluation result based on eye observation. That is, it is obvious that the image processing apparatus 2 according to this exemplary embodiment can evaluate block distortion appropriately.
  • [Modification 1]
  • In the above exemplary embodiment, the amount of block distortion BN of the input image data is calculated in such a manner that the average of E(I) (mean(E)) is divided by the standard deviation of E(I) (std(E)). The reason why the average of E(I) is divided by the standard deviation of E(I) is to normalize the average of E(I). That is, in an image having E(I) varying widely, it is conceived that the value of mean(E) is insignificant. On the other hand, in an image having E(I) varying narrowly, it is conceived that the value of mean(E) is significant.
  • However, in the case where the calculation of the amount of block distortion BN of the input image data is performed only for the flat pixel regions as described in the exemplary embodiment, the value of the standard deviation varies little among images, as shown in FIG. 7. It is therefore conceivable that the necessity of dividing mean(E) by std(E) is low.
  • Accordingly, in the modification 1, the amount of block distortion of the whole image is calculated as follows.
    BN=mean(E)
  • FIGS. 8A and 8B are graphs for comparing an amount of block distortion calculated as BN=mean(E)/std(E) with an amount of block distortion calculated as BN=mean(E). FIG. 8A shows a result calculated as BN=mean(E)/std(E). FIG. 8B shows a result calculated as BN=mean(E).
  • As is obvious from reference to FIGS. 8A and 8B, almost the same result can be obtained even when BN=mean(E) is used.
  • [Modification 2]
  • Although the exemplary embodiment has shown the case where all pairs of block-boundary pixels are extracted as shown in FIG. 4, it is not necessary to extract all pairs of block-boundary pixels.
  • For example, in the modification 2, the pixel-pair extraction section 510 extracts only transverse pixel pairs as shown in FIG. 9A. That is, the pixel-pair extraction section 510 of the modification 2 extracts pixel regions I each of which contains a boundary between adjacent two blocks and has pixels arranged in the same row.
  • FIG. 10A shows an experimental result in this case, presented in the same format as FIG. 6. As is obvious from reference to FIG. 10A, block distortion can be detected with sufficient accuracy even if only the transverse pixel pairs are used.
  • Incidentally, the pixel-pair extraction section 510 may extract only longitudinal pixel pairs as shown in FIG. 9B. That is, the pixel-pair extraction section 510 may extract pixel regions I each of which contains a boundary between adjacent two blocks and has pixels arranged in the same column. FIG. 10B shows another experimental result of the modification 2, again in the same format as FIG. 6. As is obvious from reference to FIG. 10B, block distortion can be detected with sufficient accuracy even if only the longitudinal pixel pairs are used.
  • The pixel-pair extraction section 510 may extract pixel pairs (pixel regions I) so as to sample the pixel pairs (pixel regions I) at intervals of several lines (several rows) as shown in FIG. 11A. On the assumption that the pixel-pair extraction section 510 samples pixel pairs (pixel regions I) at intervals of Nr lines (rows), FIG. 11A is equivalent to the case where the pixel-pair extraction section 510 samples the pixel pairs (pixel regions I) only transversely and Nr is equal to 2.
  • FIGS. 12A to 12C show experimental results in this case. FIG. 12A is equivalent to the case where the pixel-pair extraction section 510 extracts all transverse pixel pairs (pixel regions I). FIG. 12B is equivalent to the case where the pixel-pair extraction section 510 samples transverse pixel pairs (pixel regions I) at intervals of two pixels. FIG. 12C is equivalent to the case where the pixel-pair extraction section 510 samples transverse pixel pairs (pixel regions I) at intervals of four pixels.
  • As is obvious from the graphs shown in FIGS. 12A to 12C, performance deteriorates little in spite of the line sampling.
  • The pixel-pair extraction section 510 may sample processing blocks as shown in FIG. 11B. Specifically, the pixel-pair extraction section 510 may sample only one block from Nb×Nb blocks. Incidentally, FIG. 11B is equivalent to the case where the pixel-pair extraction section 510 performs transverse sampling, Nr is equal to 4 and Nb is equal to 2.
  • FIGS. 13A and 13B show experimental results in this case. FIG. 13A is equivalent to the case where sampling is performed with Nr=2 and Nb=1 (that is, sampling is not performed for each block). FIG. 13B is equivalent to the case where sampling is performed with Nr=4 and Nb=2.
  • As is obvious from the graphs shown in FIGS. 13A and 13B, performance deteriorates little in spite of the block sampling.
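The transverse extraction with line and block sampling of Modification 2 can be sketched as follows, assuming the image is a 2-D list of pixel rows with 8×8 coding blocks whose width is a multiple of 8 and at least 16; for simplicity this sketch applies the block sampling along one dimension only (every Nb-th block boundary in a row), and the names follow Nr and Nb in the text.

```python
def sample_regions(image, Nr=1, Nb=1, block=8):
    """Extract transverse (a, b, c, d) regions with line/block sampling.

    Assumes len(image[0]) is a multiple of `block` and >= 2 * block, so
    every boundary x leaves room for the pixel at x + 1."""
    regions = []
    height, width = len(image), len(image[0])
    for y in range(0, height, Nr):                  # line sampling: every Nr-th row
        row = image[y]
        for x in range(block, width, block * Nb):   # every Nb-th vertical boundary
            # b = row[x - 1] and c = row[x] straddle the block boundary at x
            regions.append((row[x - 2], row[x - 1], row[x], row[x + 1]))
    return regions
```

With Nr=1 and Nb=1 this reduces to extracting all transverse pairs as in FIG. 4; raising Nr or Nb thins the samples as in FIGS. 11A and 11B.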
  • [Modification 3]
  • Although the exemplary embodiment has shown the case where pixels a, b, c and d shown in FIG. 3B are extracted as a pair of block-boundary pixels and a pair of non-block-boundary pixels (pixel region I), the pattern of the pair of non-block-boundary pixels is not limited to the exemplary embodiment. Another pattern of a pair of non-block-boundary pixels corresponding to a pair of block-boundary pixels b and c will be described below.
  • As shown in FIG. 14A, the pixel-pair extraction section 510 may use pixels a and b as a pair of non-block-boundary pixels. In other words, the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels (pixels b and c) and a single pair of non-block-boundary pixels (pixels a and b). In this case, it is not necessary to acquire an average of differences between non-block-boundary pixels.
  • As shown in FIG. 14B, the pixel-pair extraction section 510 may use pixels a and d as a pair of non-block-boundary pixels. It is not necessary that the pair of non-block-boundary pixels contain the pixel b. In other words, the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels (b and c) and a single pair of non-block-boundary pixels (pixels a and d). Although FIG. 14B shows the case where pixels d and b are not adjacent to each other, these pixels may be adjacent to each other. In that case, the pixel-pair extraction section 510 does not treat a pair of pixels d and b as a pair of non-block-boundary pixels.
  • As shown in FIG. 14C, the pixel-pair extraction section 510 may extract plural pairs of non-block-boundary pixels (a pair of pixels a1 and d1 and a pair of pixels a2 and d2), which are separate from a pair of block-boundary pixels (a pair of pixels b and c). In other words, the pixel-pair extraction section 510 may extract a pixel region including a single pair of block-boundary pixels and plural pairs of non-block-boundary pixels, which are separate from the pair of block-boundary pixels. As shown in FIG. 14D, the pixel-pair extraction section 510 may regard pixels a to e (a pixel region including the pixels a to e) as including three pairs of non-block-boundary pixels “a and d”, “d and e” and “e and b”.
  • As shown in FIG. 14E, the pixel-pair extraction section 510 may extract a pair of block-boundary pixels and a pair of non-block-boundary pixels from different lines (rows), respectively. In other words, the pixel-pair extraction section 510 may extract a pixel region including plural pixels of different lines (rows). In this case, a relative positional relation between the pair of block-boundary pixels is the same as that between the pair of non-block-boundary pixels. For example, as shown in FIG. 14E, since pixels b and c are transversely adjacent to each other, pixels a and d are transversely adjacent to each other.
  • Alternatively, as shown in FIG. 14F, the relative positional relation between a pair of pixels (e.g., pixels b and c) may rotate by 90 degrees from that between the other pair of pixels (e.g., pixels a and d).
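As one illustration of these variants, the FIG. 14A pattern, where a single non-block-boundary pair (a, b) accompanies the boundary pair (b, c), needs no averaging; the function name here is ours.

```python
def region_distortion_single_pair(a, b, c):
    """E(I) for the FIG. 14A pattern: boundary pair (b, c) and a
    single non-block-boundary pair (a, b)."""
    B = abs(b - c)   # first difference, across the block boundary
    N = abs(a - b)   # second difference, single pair (no average needed)
    return B - N
```

As in the main embodiment, a boundary jump not mirrored by the neighboring pair yields positive E(I), while a uniform gradient yields zero.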
  • [Modification 4]
  • In the exemplary embodiment, the flat-region judgment section 520 sets the difference between a maximum pixel value and a minimum pixel value in the extracted pixel region I, as H(I).
  • Another method may be, however, used as a method for judging the flat region (non-edge region). For example, the difference between a maximum pixel value and a minimum pixel value in a region having pairs of non-block-boundary pixels may be set as H(I).
  • In the case shown in FIG. 15A, H(I) is obtained as follows.
    H(I)=max(a1,d1,a2,d2)−min(a1,d1,a2,d2)
  • The flat-region judgment section 520 may calculate the difference between the maximum pixel value and the minimum pixel value in pixel regions contained in a single block, as H(I). Since two blocks are adjacent to the boundary, two kinds of differences between the maximum pixel value and the minimum pixel value can be calculated. The largest value in the two kinds of differences may be set as H(I).
  • For example, in the case shown in FIG. 15B, H(I) is obtained as follows. H(I)=max[{max(a1,d1)−min(a1,d1)}, {max(a2,d2)−min(a2,d2)}]=max{abs(a1−d1), abs(a2−d2)}
  • Alternatively, in the case shown in FIG. 15C, H(I) is obtained as follows. H ( I ) = max [ { max ( a , b ) - min ( a , b ) } , { max ( c , d ) - min ( c , d ) } ] = max { abs ( a - b ) , abs ( c - d ) }
  • Alternatively, H(I) may be not the largest value of the differences but the average of the differences, as follows. H(I)=[{max(a,b)−min(a,b)}+{max(c,d)−min(c,d)}]/2={abs(a−b)+abs(c−d)}/2
  • Alternatively, the variance or standard deviation of pixel values in the extracted pixel region I may be set as H(I).
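The alternative flatness measures of this modification can be sketched as follows; the pixel names follow FIGS. 15B and 15C, the function names are ours, and the use of the population standard deviation in the last variant is our assumption.

```python
import statistics

def H_max_per_block(a1, d1, a2, d2):
    """FIG. 15B: the larger of the two per-block ranges."""
    return max(abs(a1 - d1), abs(a2 - d2))

def H_pair_average(a, b, c, d):
    """Averaged variant: mean of the per-pair ranges of FIG. 15C."""
    return (abs(a - b) + abs(c - d)) / 2

def H_std(pixels):
    """Standard-deviation variant over all pixels of the extracted region."""
    return statistics.pstdev(pixels)
```

Whichever variant is chosen, H(I) is then compared against TH1 exactly as in the main embodiment.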
  • [Modification 5]
  • In the above exemplary embodiment and the modifications of the exemplary embodiment, degree of distortion in units of blocks each having 8 pixels by 8 pixels is calculated as an amount of block distortion based on the assumption that block positions of the JPEG coding process are known.
  • In a modification 5, description will be given on a method for calculating an amount of block distortion while detecting the boundary positions of the 8×8 blocks in the case where the block positions of the JPEG coding process are unknown.
  • Examples of the case where the block positions are unknown may include the case where a clipping process is performed as shown in FIG. 16 and the case where a rotation process is performed.
  • FIG. 17 is a view for exemplifying an image in which blocks of the coding process are shifted in the longitudinal direction. At first, description will be given on the case where the block boundary is shifted in the longitudinal direction as shown in FIG. 17.
  • As described in the exemplary embodiment and the modifications of the exemplary embodiment, the image processing apparatus 2 calculates block distortion based on a gradation difference among four pixels (a, b, c, d) arranged in the transverse direction across a block boundary (e.g. absolute values of differences between respective pairs of adjacent pixels, or an amount of block distortion of a pixel region that includes the four pixels (a, b, c, d) and includes a block boundary). Therefore, a shift of the block positions in the longitudinal direction of the image does not substantially affect the calculation of the amount of block distortion. For example, as shown in FIG. 17, even if the block positions are shifted in the longitudinal direction, the positions of the four pixels to be used in the calculation are merely shifted in the longitudinal direction and still include the block boundary. Therefore, the block distortion can be detected normally.
  • FIG. 18 exemplifies an image in which blocks of the coding process are also shifted in the transverse direction. Next, description will be given on the case where the block boundary is shifted in the transverse direction as shown in FIG. 18. In this case, the four pixels (a, b, c, d) arranged in the transverse direction may not include a block boundary. Therefore, block distortion may not be calculated by the above-described method.
  • For example, as shown in FIG. 18, in the case where the block positions are shifted in the transverse direction, the four pixels to be used in the calculation are also shifted in the transverse direction, so that the gradation difference at the block boundary cannot be detected.
  • The image processing apparatus 2 of the modification 5 has the configuration shown in FIG. 21. FIG. 21 is similar to FIG. 1, but is different in that specific components of the CPU 212 are shown in FIG. 21. The CPU 212 includes a block setting unit 212 a, a difference evaluation unit 212 b, a control unit 212 c and an evaluation value generating unit 212 d. The control unit 212 c controls the block setting unit 212 a, the difference evaluation unit 212 b and the evaluation value generating unit 212 d.
  • The image processing apparatus 2 of the modification 5 calculates amounts of block distortion of plural types of blocks, which are different in phase or size, of an image. Then, the image processing apparatus 2 evaluates an amount of block distortion of an input image based on the calculated plural amounts of block distortion. In this example, the image processing apparatus 2 calculates amounts of block distortion while shifting a start position (in total 8 positions) in the transverse direction, and adopts the maximum value.
  • For example, as shown in FIG. 19, the difference evaluation unit 212 b calculates the block distortion E(I, 1) of each extracted pixel region I. Then, the difference evaluation unit 212 b calculates the amount of block distortion BN(F, 1) of an entire image F based on the block distortions E(I, 1) by the method described in the exemplary embodiment. The difference evaluation unit 212 b stores the calculated amount of block distortion BN(F, 1) of the entire image F. In the image F, the block positions of the coding process are unknown. At this time, as shown in FIG. 18, the block boundaries may not be captured. Even in that case, no problem arises for the image processing apparatus 2.
  • Then, as shown in FIG. 19, the block setting unit 212 a shifts the start point of the block distortion calculation by one pixel in the right direction. Then, the difference evaluation unit 212 b calculates the block distortion E(I, 2) of each extracted region I. The difference evaluation unit 212 b calculates and stores an amount of block distortion BN(F, 2) of the entire image F in a similar manner. Subsequently, the image processing apparatus 2 calculates amounts of block distortion BN(F, i) of the entire image F while the block setting unit 212 a shifts the start position in the right direction one pixel at a time.
  • In this manner, the image processing apparatus 2 of this modification calculates the amounts of block distortion BN(F, i) (i=1, 2, . . . , 8) of the entire image F, eight in total, while shifting the start position one pixel at a time. Here, the maximum of the thus calculated values BN(F, i) is expressed as BN(F). The evaluation value generating unit 212 d judges that the block boundary is located at the position that gives the maximum value BN(F).
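The shift-and-maximum procedure can be sketched as follows. This is a minimal illustration; `distortion_at` stands in for the per-phase calculation of BN(F, i) described above and is passed in rather than implemented here.

```python
def image_distortion(image, block_size=8, distortion_at=None):
    """BN(F): evaluate block distortion while shifting the start position
    one pixel at a time (block_size positions in total) and adopt the
    maximum.  `distortion_at(image, i)` computes BN(F, i) for start
    position i; the position giving the maximum is taken to be the
    block-boundary phase."""
    values = [distortion_at(image, i) for i in range(1, block_size + 1)]
    best = max(values)
    return best, values.index(best) + 1  # BN(F) and the boundary phase i

# Toy per-phase values: the boundary is judged to be wherever BN(F, i) peaks.
toy = lambda image, i: [2, 5, 1, 8, 3, 0, 4, 7][i - 1]
print(image_distortion(None, 8, toy))  # -> (8, 4)
```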
  • Next, the reason why the maximum value of the amounts of block distortion, which are calculated with plural phases, is used as the amount of block distortion of the input image will be described. Here, the term “phase” is defined as follows. Assume a hypothetical sinusoidal wave having the block size (in this exemplary embodiment, the block size=8) as one cycle; the amplitude of the hypothetical sinusoidal wave is not considered. Since the cycle of the hypothetical sinusoidal wave is equal to the block size (e.g. 8 pixels), if the “phase” changes, the position of the hypothetical sinusoidal wave changes. Furthermore, it is assumed that the positions of the blocks and the position of the hypothetical sinusoidal wave are fixed. At this time, the position of each block can be expressed as a “phase.” Also, the shift of the start position of the block distortion calculation can be expressed as a “phase.” For example, if the shift is equal to the block size (that is, 8 pixels), the “phase” is equal to 2π radians. Also, if the shift is equal to half of the block size (that is, 4 pixels), the “phase” is equal to π radians.
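The mapping from a pixel shift to a phase follows directly from the definition above: one full block size corresponds to one full cycle of 2π radians. A minimal sketch (the function name is illustrative):

```python
import math

def shift_to_phase(shift, block_size=8):
    """Express a shift of the start position as a phase of the
    hypothetical sinusoidal wave whose cycle equals the block size:
    a shift of block_size pixels corresponds to 2*pi radians."""
    return 2 * math.pi * shift / block_size

print(round(shift_to_phase(4), 4))  # half a block -> 3.1416 (pi radians)
print(round(shift_to_phase(8), 4))  # a full block -> 6.2832 (2*pi radians)
```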
  • At first, it is obvious that if the amounts of block distortion (eight in total) are calculated while the start position is shifted one pixel at a time, one of the calculated amounts of block distortion matches the true one. Also, as described in the exemplary embodiment and its modifications, when different images are compared, JPEG block distortion is larger as the amount of block distortion is larger. Also, it is rare that factors other than JPEG block distortion cause an amount of block distortion, that is, a large boundary difference appearing at eight-pixel intervals. Given these properties, it is reasonable to consider that if the maximum of the eight calculated values is larger than a certain level, the maximum value is exactly the amount of block distortion. If the maximum of the eight calculated values is smaller than a certain level, it is difficult to judge precisely whether or not the maximum value is an amount of block distortion; however, even if such a maximum value is used as the amount of block distortion, no problem arises because the maximum value is relatively small. Accordingly, the maximum of the eight calculated values can be used as the amount of block distortion regardless of its magnitude.
  • In an example of the modification 5, the surrounding edges of each of the 33 images used are clipped by three pixels, and the amount of block distortion is calculated by the above-described method of adopting the maximum value. As shown in FIG. 20, the amounts of block distortion calculated for the clipped and unclipped cases show no large difference.
  • The fact that the amount of block distortion can be calculated from eight positions in total means that boundary positions of the block distortion can be specified simultaneously. Therefore, the image processing apparatus 2 may estimate the block-boundary positions of the coding process by calculating the amount of block distortion with plural phases and comparing the calculated amounts of block distortion with each other.
  • Also, as described above, the amount of block distortion is a feature value that takes a large value in a highly compressed JPEG image. Therefore, the maximum value often stands out among the eight values calculated from the eight positions. On the other hand, if the maximum of the eight values does not stand out, there is a possibility that the maximum value is obtained by coincidence from the structure of the image rather than from block distortion. Therefore, the image processing apparatus 2 may calculate the amounts of block distortion with the plural phases, compare them with each other and judge whether or not it is necessary to remove the block distortion.
  • Also, in the case where a size of an image region corresponding to the block of the coding process changes due to enlargement or reduction of an image, the image processing apparatus 2 calculates amounts of block distortion of blocks having plural sizes and compares the amounts of block distortion with each other, to thereby calculate the true amount of block distortion.
  • The operation of the image processing apparatus 2 of the modification 5 will be described with reference to a flowchart shown in FIG. 22.
  • In step S201, variables are initialized; specifically, I=1, i=1, NumI=0, S=0 and S2=0. It is noted that the index “I” and the index “i” are different variables. NumI denotes the number of extracted pixel regions whose flatness H(I, i) is smaller than TH1. The index “I” denotes an extracted pixel region. The index “i” denotes a start point of the block distortion calculation. S denotes a running sum of the amounts of block distortion E(I, i), and S2 denotes a running sum of their squares. Then, the difference evaluation unit 212 b extracts a pixel region I from the start position i, and calculates H(I, i) based on the extracted pixel region I (step S202). The difference evaluation unit 212 b determines whether or not H(I, i) is smaller than TH1 (step S203). If the difference evaluation unit 212 b determines that H(I, i) is smaller than TH1 (Yes in step S203), the difference evaluation unit 212 b increments NumI by 1, calculates E(I, i) in the above-described manner, adds E(I, i) to S and also adds E(I, i)×E(I, i) to S2 (step S204). If the difference evaluation unit 212 b determines that H(I, i) is equal to or larger than TH1 (No in step S203), the process jumps to step S205. In the step S205, the difference evaluation unit 212 b increments I by 1. Then, the difference evaluation unit 212 b determines whether or not I is equal to MaxI (step S206). If the difference evaluation unit 212 b determines that I is less than MaxI (No in step S206), the process returns to the step S202. If the difference evaluation unit 212 b determines that I is equal to MaxI (Yes in step S206), the difference evaluation unit 212 b calculates mean(E, i) (i.e., an average of E(I, i)) by dividing S by NumI, and calculates std(E, i) (i.e., a standard deviation of E(I, i)) by using the following expression: std(E, i)=√(S2/NumI−(mean(E, i))²)
    Then, the evaluation value generating unit 212 d calculates a block distortion BN (F, i) of the entire image by dividing mean(E, i) by std(E, i) (step S207).
  • If H(I, i)≧TH1 in all regions I (i.e., NumI=0), the control unit 212 c decides that it is impossible to obtain block distortion BN (F, i).
  • Then, the difference evaluation unit 212 b determines whether or not the index i=Maxi (step S208). If the difference evaluation unit 212 b determines that the index i is less than Maxi (No at step S208), the block setting unit 212 a updates the variables (step S209). Specifically, the block setting unit 212 a increments the index i by one, and resets the index I, NumI, S and S2 (i.e. sets I=1, NumI=0, S=0 and S2=0). Then, the process returns to the step S202, and the steps S202 to S208 are repeated while the start point of the block distortion calculation is shifted by one pixel in the right direction as shown in FIG. 19.
  • If the difference evaluation unit 212 b determines that the index i is equal to Maxi (Yes at step S208), the evaluation value generating unit 212 d selects the maximum value BN(F) from among BN(F, i) (i=1 to Maxi) as an amount of block distortion of the entire image F (step S210).
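The per-phase computation of steps S202 to S207 can be sketched as follows. The function and callback names are illustrative assumptions; `flatness` stands in for the calculation of H(I, i) and `distortion` for the calculation of E(I, i), which are described in the exemplary embodiment.

```python
import math

def phase_distortion(regions, flatness, distortion, th1):
    """BN(F, i) for one start position i (steps S202-S207).

    Regions whose flatness H is not below TH1 are skipped (step S203);
    for the rest, running sums S and S2 of E and E*E are accumulated
    (step S204).  The mean of E is then divided by its standard
    deviation, std = sqrt(S2/NumI - mean**2) (step S207).  Returns
    None when no region passes the flatness test (NumI = 0)."""
    num_i, s, s2 = 0, 0.0, 0.0
    for region in regions:
        if flatness(region) < th1:        # step S203
            e = distortion(region)        # step S204
            num_i += 1
            s += e
            s2 += e * e
    if num_i == 0:                        # block distortion cannot be obtained
        return None
    mean = s / num_i
    std = math.sqrt(s2 / num_i - mean * mean)
    return mean / std                     # step S207

# Toy example: regions are plain numbers, flatness and distortion are identity.
print(round(phase_distortion([1, 2, 3, 4], lambda r: r, lambda r: r, 10), 3))  # -> 2.236
```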
  • [Other Modifications]
  • It is noted that the flat-region judgment section 520 is not essential to the exemplary embodiment.
  • Although the exemplary embodiment has shown the case where the total image distortion calculation section 560 calculates an average of E(I), the total image distortion calculation section 560 may instead calculate the median of E(I), the mode of E(I) or the sum of E(I) (the sum is usable only when the size of the image is constant and no flat-pixel-region judgment is made).
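The aggregation alternatives named above can be sketched with the standard library; the function name and dispatch-table design are illustrative, not from the patent.

```python
import statistics

def total_distortion(e_values, how="mean"):
    """Aggregate the per-region distortions E(I) into a single value.
    Besides the average, the median, mode, or plain sum may be used
    (the sum only when the image size is constant and no flat-region
    judgment is made, so that the number of terms is fixed)."""
    aggregators = {
        "mean": statistics.mean,
        "median": statistics.median,
        "mode": statistics.mode,
        "sum": sum,
    }
    return aggregators[how](e_values)

e = [1, 2, 2, 5]
print(total_distortion(e), total_distortion(e, "median"),
      total_distortion(e, "mode"), total_distortion(e, "sum"))
# -> 2.5 2.0 2 10
```

The median and mode are less sensitive than the mean to a few regions with accidentally large E(I), which is why they are plausible substitutes.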
  • The foregoing description of the exemplary embodiments of the invention has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations will be apparent to practitioners skilled in the art. The exemplary embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to understand the invention for various embodiments and with the various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the following claims and their equivalents.

Claims (21)

1. An image evaluation apparatus comprising:
a pixel extraction unit that extracts, from an input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks;
an intra-pair difference calculation unit that calculates a difference between the extracted pair of block-boundary pixels as a first difference, the intra-pair difference calculation unit that calculates a difference between the extracted pair of non-block-boundary pixels as a second difference;
an inter-pair difference calculation unit that calculates a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region; and
an evaluation unit that evaluates an amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region.
2. The apparatus according to claim 1, wherein the pair of block-boundary pixels are opposite to each other across a boundary between adjacent coding blocks.
3. The apparatus according to claim 1, wherein:
the pixel extraction unit extracts the pixel region including the pair of block-boundary pixels and a plurality of pairs of non-block-boundary pixels,
the intra-pair difference calculation unit calculates a difference between each extracted pair of non-block-boundary pixels, and
the intra-pair difference calculation unit sets an average of the calculated differences between the extracted pairs of non-block-boundary pixels as the second difference.
4. The apparatus according to claim 1, wherein:
the pixel extraction unit extracts a plurality of pixel regions;
the intra-pair difference calculation unit calculates the first difference and the second difference, for each extracted pixel region,
the inter-pair difference calculation unit calculates the amount of block distortion of each extracted pixel region, and
the evaluation unit calculates an average of the amounts of block distortion of the extracted pixel regions, as the amount of block distortion of the input image.
5. The apparatus according to claim 1, wherein:
the pixel extraction unit extracts a plurality of pixel regions;
the intra-pair difference calculation unit calculates the first difference and the second difference, for each extracted pixel region,
the inter-pair difference calculation unit calculates the amount of block distortion of each extracted pixel region,
the evaluation unit divides an average of the amounts of block distortion of the extracted pixel regions by a standard deviation of the amounts of block distortion of the extracted pixel regions to obtain the amount of block distortion of the input image.
6. The apparatus according to claim 1, further comprising:
a flat-pixel-region judgment unit that judges whether or not the extracted pixel region contains an edge, on a basis of an amount of gradation change in the input image, wherein:
when the flat-pixel-region judgment unit judges that the extracted pixel region does not contain the edge, the intra-pair difference calculation unit calculates the first difference and the second difference.
7. The apparatus according to claim 6, wherein the flat-pixel-region judgment unit uses a difference between a maximum pixel value and a minimum pixel value in the extracted pixel region to judge whether or not the extracted pixel region contains the edge.
8. The apparatus according to claim 6, wherein:
the pair of block-boundary pixels are opposite to each other across a boundary between adjacent coding blocks,
the flat-pixel-region judgment unit calculates a difference between a maximum pixel value and minimum pixel value in the extracted pixel region in one of the adjacent coding blocks,
the flat-pixel-region judgment unit calculates a difference between a maximum pixel value and minimum pixel value in the extracted pixel region in the other of the adjacent coding blocks, and
the flat-pixel-region judgment unit uses the largest one of the calculated differences to judge whether or not the extracted pixel region contains the edge.
9. The apparatus according to claim 6, wherein:
the pair of block-boundary pixels are opposite to each other across a boundary between adjacent coding blocks,
the flat-pixel-region judgment unit calculates a difference between a maximum pixel value and minimum pixel value in the extracted pixel region in one of the adjacent coding blocks,
the flat-pixel-region judgment unit calculates a difference between a maximum pixel value and minimum pixel value in the extracted pixel region in the other of the adjacent coding blocks, and
the flat-pixel-region judgment unit uses an average of the calculated differences to judge whether or not the extracted pixel region contains the edge.
10. The apparatus according to claim 1, wherein the pixel extraction unit extracts the pixel region so that a relative position between the pair of block-boundary pixels is the same as a relative position between the pair of non-block-boundary pixels.
11. The apparatus according to claim 1, wherein the pixel extraction unit extracts the pixel region so that a relative position between the pair of block-boundary pixels rotates by at least 90 degrees from a relative position between the pair of non-block-boundary pixels.
12. The apparatus according to claim 1, wherein the pixel extraction unit extracts the pixel region including four pixels, which consist of the pair of block-boundary pixels and two pixels adjacent to the pair of block-boundary pixels in a right and left direction or in an upper and lower direction.
13. The apparatus according to claim 1, wherein the pixel extraction unit extracts the pixel region including the pair of block-boundary pixels, which are arranged in a transverse direction or a longitudinal direction.
14. The apparatus according to claim 1, wherein the pixel extraction unit extracts a plurality of pixel regions from a part of rows or columns of the input image.
15. The apparatus according to claim 1, wherein the pixel extraction unit extracts a plurality of pixel regions from a part of block boundaries in the input image.
16. An image evaluation apparatus comprising:
a block setting unit that sets a block, which has a predetermined size, in an input image;
a difference evaluation unit that evaluates a gradation difference of the input image based on the block set by the block setting unit;
a control unit that controls the block setting unit and the difference evaluation unit so as to evaluate the gradation difference of the input image with respect to plural types of blocks, which are different in phases in the image or in size; and
an evaluation value generating unit that generates an evaluation value of the input image based on the gradation differences, which are evaluated by the difference evaluation unit with respect to the respective plural types of blocks.
17. The apparatus according to claim 16, wherein the evaluation value generating unit selects a maximum value from among the gradation differences, which are calculated with respect to the respective plural types of blocks, as the evaluation value of the input image.
18. The apparatus according to claim 16, wherein:
the difference evaluation unit comprises:
a pixel extraction unit that extracts, from the input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks;
an intra-pair difference calculation unit that calculates a difference between the extracted pair of block-boundary pixels as a first difference, the intra-pair difference calculation unit that calculates a difference between the extracted pair of non-block-boundary pixels as a second difference;
an inter-pair difference calculation unit that calculates a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region; and
an evaluation unit that evaluates an amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region, and
the evaluation value generating unit generates the evaluation value of the input image based on the amounts of block distortion.
19. An image evaluation method comprising:
extracting, from an input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks;
calculating a difference between the extracted pair of block-boundary pixels as a first difference;
calculating a difference between the extracted pair of non-block-boundary pixels as a second difference;
calculating a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region; and
evaluating an amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region.
20. A computer readable medium storing a program causing a computer to execute a process for evaluating an amount of block distortion of an input image, the process comprising:
extracting, from the input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks;
calculating a difference between the extracted pair of block-boundary pixels as a first difference;
calculating a difference between the extracted pair of non-block-boundary pixels as a second difference;
calculating a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region; and
evaluating the amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region.
21. A computer data signal embodied in a carrier wave for enabling a computer to perform a process for evaluating an amount of block distortion of an input image, the process comprising:
extracting, from the input image, a pixel region including (i) a pair of block-boundary pixels in a boundary position of coding blocks and (ii) a pair of non-block-boundary pixels in a position other than the boundary position of the coding blocks;
calculating a difference between the extracted pair of block-boundary pixels as a first difference;
calculating a difference between the extracted pair of non-block-boundary pixels as a second difference;
calculating a difference between the first difference and the second difference as an amount of block distortion of the extracted pixel region; and
evaluating the amount of block distortion of the input image on a basis of the calculated amount of block distortion of the extracted pixel region.
US11/634,157 2005-12-16 2006-12-06 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal Abandoned US20070140590A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/064,321 US8135232B2 (en) 2005-12-16 2011-03-18 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2005363191 2005-12-16
JP2005-363191 2005-12-16
JP2006183349A JP2007189657A (en) 2005-12-16 2006-07-03 Image evaluation apparatus, image evaluation method and program
JP2006-183349 2006-07-03

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/064,321 Division US8135232B2 (en) 2005-12-16 2011-03-18 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal

Publications (1)

Publication Number Publication Date
US20070140590A1 true US20070140590A1 (en) 2007-06-21

Family

ID=38173569

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/634,157 Abandoned US20070140590A1 (en) 2005-12-16 2006-12-06 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal
US13/064,321 Expired - Fee Related US8135232B2 (en) 2005-12-16 2011-03-18 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/064,321 Expired - Fee Related US8135232B2 (en) 2005-12-16 2011-03-18 Image evaluation apparatus, image evaluation method, computer readable medium and computer data signal

Country Status (4)

Country Link
US (2) US20070140590A1 (en)
JP (1) JP2007189657A (en)
KR (1) KR100828540B1 (en)
CN (1) CN100579170C (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090245675A1 (en) * 2008-03-25 2009-10-01 Koji Aoyama Image processing device and method and program
US20100226573A1 (en) * 2009-03-03 2010-09-09 Samsung Electronics Co., Ltd. System and method for block edge location with varying block sizes and offsets in compressed digital video
US20100246990A1 (en) * 2009-03-24 2010-09-30 Samsung Electronics Co., Ltd. System and method for measuring blockiness level in compressed digital video
US20190246137A1 (en) * 2011-11-10 2019-08-08 Sony Corporation Image processing apparatus and method

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7239755B1 (en) 1997-07-30 2007-07-03 Lg Electronics Inc. Method of reducing a blocking artifact when coding moving picture
US9071818B2 (en) * 2011-08-30 2015-06-30 Organizational Strategies International Pte. Ltd. Video compression system and method using differencing and clustering
SG11201401494PA (en) * 2011-10-21 2014-05-29 Organizational Strategies Internat Pte Ltd An interface for use with a video compression system and method using differencing and clustering
CN103533147A (en) * 2012-07-03 2014-01-22 邢东 Method of automatic scoring for mobile phone photographing
CN102999903B (en) * 2012-11-14 2014-12-24 南京理工大学 Method for quantitatively evaluating illumination consistency of remote sensing images
JP6862683B2 (en) * 2016-05-31 2021-04-21 大日本印刷株式会社 Judgment device and program
JP7052894B2 (en) * 2021-01-28 2022-04-12 大日本印刷株式会社 Judgment device and program
US11770584B1 (en) * 2021-05-23 2023-09-26 Damaka, Inc. System and method for optimizing video communications based on device capabilities

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5442459A (en) * 1992-12-09 1995-08-15 Samsung Electronics Co., Ltd. Process for encoding a half tone image considering similarity between blocks
US5553164A (en) * 1992-02-07 1996-09-03 Canon Kabushiki Kaisha Method for compressing and extending an image by transforming orthogonally and encoding the image
US5748788A (en) * 1992-09-29 1998-05-05 Cannon Kabushiki Kaisha Image processing method and apparatus
US6243416B1 (en) * 1997-03-12 2001-06-05 Oki Data Corporation Image coding and decoding methods, image coder, and image decoder
US20010004739A1 (en) * 1999-09-27 2001-06-21 Shunichi Sekiguchi Image retrieval system and image retrieval method
US20020097911A1 (en) * 1998-11-13 2002-07-25 Xerox Corporation Blocking signature detection for identification of JPEG images
US6434275B1 (en) * 1997-05-28 2002-08-13 Sony Corporation Block distortion reduction method and device and encoding method and device
US20020113901A1 (en) * 2001-02-16 2002-08-22 Osberger Wilfried M. Robust camera motion estimation for video sequences
US20040085460A1 (en) * 1997-12-17 2004-05-06 Yasuhiko Shiomi Imaging apparatus, control method, and a computer program product having computer program code therefor
US6801664B1 (en) * 1999-02-22 2004-10-05 Matsushita Electric Industrial Co., Ltd. Method of image coding, image coding apparatus, and recording medium including image coding program
US20040223656A1 (en) * 1999-07-30 2004-11-11 Indinell Sociedad Anonima Method and apparatus for processing digital images
US20050111542A1 (en) * 2003-11-20 2005-05-26 Canon Kabushiki Kaisha Image processing apparatus and method, and computer program and computer-readable storage medium
US7031393B2 (en) * 2000-10-20 2006-04-18 Matsushita Electric Industrial Co., Ltd. Block distortion detection method, block distortion detection apparatus, block distortion removal method, and block distortion removal apparatus

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5058181A (en) * 1989-01-25 1991-10-15 Omron Tateisi Electronics Co. Hardware and software image processing system
US5209220A (en) 1989-10-05 1993-05-11 Olympus Optical Co., Ltd. Endoscope image data compressing apparatus
JPH07106646B2 (en) * 1989-12-25 1995-11-15 富士ゼロックス株式会社 Image processing device
JP3432904B2 (en) * 1994-08-31 2003-08-04 三洋電機株式会社 Block distortion detection device
JP3149729B2 (en) * 1995-05-15 2001-03-26 松下電器産業株式会社 Block distortion removal device
JPH10191020A (en) * 1996-12-20 1998-07-21 Canon Inc Object image segmenting method and device
JPH10327315A (en) * 1997-05-26 1998-12-08 Fuji Xerox Co Ltd Image processing unit
GB9712645D0 (en) * 1997-06-18 1997-08-20 Nds Ltd Improvements in or relating to image processing
JP4239231B2 (en) * 1998-01-27 2009-03-18 ソニー株式会社 Block distortion reducing apparatus and method
US6978045B1 (en) * 1998-10-02 2005-12-20 Minolta Co., Ltd. Image-processing apparatus
JP3566145B2 (en) * 1999-09-03 2004-09-15 シャープ株式会社 Image forming device
US7251056B2 (en) * 2001-06-11 2007-07-31 Ricoh Company, Ltd. Image processing apparatus, image processing method and information recording medium
US6980997B1 (en) * 2001-06-28 2005-12-27 Microsoft Corporation System and method providing inlined stub
JP3772845B2 (en) * 2003-03-24 2006-05-10 コニカミノルタホールディングス株式会社 Image processing program, image processing apparatus, and photographing apparatus


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090245675A1 (en) * 2008-03-25 2009-10-01 Koji Aoyama Image processing device and method and program
US8340454B2 (en) * 2008-03-25 2012-12-25 Sony Corporation Image processing device and method and program
US20100226573A1 (en) * 2009-03-03 2010-09-09 Samsung Electronics Co., Ltd. System and method for block edge location with varying block sizes and offsets in compressed digital video
US8363978B2 (en) * 2009-03-03 2013-01-29 Samsung Electronics Co., Ltd. System and method for block edge location with varying block sizes and offsets in compressed digital video
US20100246990A1 (en) * 2009-03-24 2010-09-30 Samsung Electronics Co., Ltd. System and method for measuring blockiness level in compressed digital video
US8891609B2 (en) * 2009-03-24 2014-11-18 Samsung Electronics Co., Ltd. System and method for measuring blockiness level in compressed digital video
US20190246137A1 (en) * 2011-11-10 2019-08-08 Sony Corporation Image processing apparatus and method
US20230247217A1 (en) * 2011-11-10 2023-08-03 Sony Corporation Image processing apparatus and method

Also Published As

Publication number Publication date
KR20070064393A (en) 2007-06-20
US8135232B2 (en) 2012-03-13
KR100828540B1 (en) 2008-05-13
CN100579170C (en) 2010-01-06
JP2007189657A (en) 2007-07-26
US20110164824A1 (en) 2011-07-07
CN101014079A (en) 2007-08-08


Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJI XEROX CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KIMURA, SHUNICHI;REEL/FRAME:018643/0686

Effective date: 20061204

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION