WO2002093935A1 - Image processing apparatus - Google Patents
Image processing apparatus
- Publication number
- WO2002093935A1 (PCT/JP2002/004596)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- unit
- block
- value
- block distortion
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/117—Filters, e.g. for pre-processing or post-processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
Definitions
- the present invention relates to an image processing apparatus for processing a restored image obtained by decoding compressed data produced by compressing the data that forms an original image.
- compression coding technology for image data has advanced remarkably in recent years.
- compression coding is effective not only for efficient use of storage media but also for shortening the time required to transmit and receive image data over a network.
- in many applications, an irreversible (lossy) image compression method, in which the original image and the restored image do not completely match, is used.
- many irreversible image compression methods divide the image data into multiple blocks of M×N pixels, perform an orthogonal transformation on each block, quantize the resulting orthogonal transformation coefficients, and then encode them.
- a typical example of an irreversible image compression method is JPEG, which is widely used for compressing color still images.
- the color conversion unit 10 converts the data of each pixel, composed of multi-value data (density data) of red (R), green (G), and blue (B), into a luminance component (Y) and color difference components (Cr, Cb).
- there are several definitions of RGB space; one of them is sRGB.
- conversion from RGB data to YCrCb data is performed based on the following formula (Equation 1).
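Since (Equation 1) itself is not reproduced in this text, the following sketch assumes it is the standard JPEG (BT.601-derived) RGB-to-YCrCb matrix; the function name `rgb_to_ycrcb` is illustrative, not from the patent.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert 8-bit RGB samples to Y, Cr, Cb.

    `rgb` is an (..., 3) float array in [0, 255]. The matrix below is the
    standard JPEG conversion; Cr and Cb are offset by 128 so that neutral
    gray maps to (Y, 128, 128).
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cr, cb], axis=-1)
```

For a neutral gray pixel the chrominance terms cancel, which is a quick sanity check on the matrix.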
- the DCT transform unit 11 performs a discrete cosine transform (DCT) of the YCrCb data in units of 8×8-pixel blocks.
- the DCT transform is performed based on the following equation (Equation 2).
- x indicates the horizontal position, within each block, of the original image before the DCT transform,
- y indicates the vertical position, within each block, of the original image before the DCT transform,
- u indicates the horizontal position of the DCT coefficient, within each block, after the DCT transform,
- v indicates the vertical position of the DCT coefficient, within each block, after the DCT transform.
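(Equation 2) is not reproduced in the text; the sketch below assumes the standard JPEG forward DCT, with the result indexed as F[v][u] to match the QDCT[v][u] convention used later.

```python
import numpy as np

def dct_8x8(block):
    """2-D DCT of one 8x8 block using the standard JPEG definition.

    `block` holds (possibly level-shifted) samples; the result F[v][u]
    indexes vertical frequency v and horizontal frequency u.
    """
    n = np.arange(8)
    # basis[k, m] = cos((2m + 1) * k * pi / 16)
    basis = np.cos((2 * n[None, :] + 1) * n[:, None] * np.pi / 16)
    c = np.ones(8)
    c[0] = 1.0 / np.sqrt(2.0)  # C(0) = 1/sqrt(2), C(k) = 1 otherwise
    # F[v][u] = 0.25 * C(v) * C(u) * sum_y sum_x f[y][x] * cos(...) * cos(...)
    return 0.25 * np.outer(c, c) * (basis @ block @ basis.T)
```

A constant block concentrates all energy in the DC coefficient F[0][0], which is the usual check that the normalization is right.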
- sub-sampling is performed on the chrominance component to increase the compression efficiency.
- sub-sampling is performed so that one pixel of the chrominance component corresponds to the luminance component of 2 ⁇ 2 pixels. Therefore, for the color difference component, data of 8 ⁇ 8 pixels is thinned out from a block of 16 ⁇ 16 pixels, and DCT conversion is performed.
- the quantization unit 12 quantizes the DCT coefficients. Letting the quantized DCT coefficient be QDCT[v][u] and the quantization step value for each component be Qtable[v][u], the quantization is performed based on the following equation (Equation 3).
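(Equation 3) is not shown in the text; the sketch below assumes the usual JPEG rule of dividing by the quantization step and rounding to the nearest integer.

```python
import numpy as np

def quantize(dct, qtable):
    """QDCT[v][u] = round(DCT[v][u] / Qtable[v][u]).

    Round-to-nearest is the standard JPEG behavior; the patent's exact
    rounding rule is an assumption here.
    """
    return np.round(dct / qtable).astype(int)
```

For example, a coefficient of 100 quantized with a step of 16 becomes 6, discarding the fractional remainder that can never be recovered at decode time.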
- each quantization step value of the quantization table 13 is used; these values can be set arbitrarily by the user.
- the human eye is less sensitive to high-frequency components than to low-frequency components, and less sensitive to color-difference components than to luminance components.
- accordingly, a relatively large value is used for the quantization step of a high-frequency component compared with that of a low-frequency component.
- likewise, a relatively large value is used for the quantization step of a chrominance component compared with that of a luminance component.
- Figures 2 and 3 show the quantization tables recommended by the JPEG standard.
- FIG. 2 is the quantization table for the luminance component (Y),
- and FIG. 3 is the quantization table for the chrominance components (Cr, Cb). Since the quantization step values used for quantization are required at the time of decoding, they are stored in the encoded JPEG compressed data.
- the quantized DCT coefficient is encoded by the entropy encoding unit 14.
- Huffman coding is used as entropy coding.
- the above processing is the outline of the encoding processing from image data to JPEG compressed data.
- decoding basically performs the above encoding steps in reverse order.
- the procedure of the decoding process will be described.
- the entropy decoding unit 15 performs entropy decoding on the JPEG compressed data.
- the inverse quantization unit 16 performs inverse quantization.
- the inverse quantization unit 16 reads, from the JPEG compressed data, the quantization table 13 used at the time of encoding, and uses each of its quantization step values to inversely quantize the corresponding encoded component. That is, the inverse quantization unit 16 uses an inverse quantization table 17 whose inverse quantization step values equal the quantization step values of the quantization table 13 used in encoding, and inversely quantizes each encoded component with those step values.
- letting the DCT coefficient after inverse quantization be RDCT[v][u],
- the inverse quantization operation is performed based on the following equation (Equation 4):
- RDCT[v][u] = QDCT[v][u] × Qtable[v][u]
- the DCT coefficients inversely quantized by (Equation 4) above are computed from coefficients that were rounded during quantization at encoding time, so the DCT coefficients obtained from the original image are not reproduced exactly. However, each exact DCT coefficient obtained from the original image is guaranteed to be equal to or greater than the lower limit dDCT[v][u] given by (Equation 5) below and less than the upper limit pDCT[v][u] given by (Equation 6).
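(Equations 5) and (6) are referenced but not reproduced. Assuming round-to-nearest quantization, the true coefficient must lie within half a quantization step of the dequantized value, which gives the half-step bounds sketched here; the exact bound formulas in the patent may differ.

```python
import numpy as np

def dequantize_with_bounds(qdct, qtable):
    """Inverse quantization (Equation 4) plus the admissible coefficient
    range used later as the projection constraint.

    With round-to-nearest quantization the true coefficient lies in
    [d, p), where d = (QDCT - 0.5) * Qtable and p = (QDCT + 0.5) * Qtable.
    """
    rdct = qdct * qtable        # RDCT[v][u] = QDCT[v][u] * Qtable[v][u]
    d = (qdct - 0.5) * qtable   # lower limit dDCT[v][u]
    p = (qdct + 0.5) * qtable   # upper limit pDCT[v][u]
    return rdct, d, p
```

For QDCT = 6 and a step of 16, the dequantized value is 96 and the original coefficient is known only to lie in [88, 104).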
- after the inverse quantization is performed in this way, the inverse DCT transform unit 18 next performs the inverse DCT transform.
- this inverse DCT transform converts the DCT coefficients back into YCrCb data.
- letting the YCrCb data after the inverse DCT transform be G[y][x],
- the inverse DCT transform is performed based on the following equation (Equation 7).
- finally, the color conversion unit 19 performs a color conversion from the YCrCb data to RGB data to obtain the restored image.
- the following equation (Equation 8) is the conversion used when converting YCrCb data to sRGB data.
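(Equation 8) is not reproduced; the sketch below assumes the standard JPEG inverse matrix (the inverse of the forward conversion assumed earlier), with the result clipped to the 8-bit range as decoders commonly do.

```python
import numpy as np

def ycrcb_to_rgb(ycrcb):
    """Convert (..., 3) YCrCb data back to RGB in [0, 255].

    Uses the standard JPEG inverse matrix; the patent's exact (Equation 8)
    is an assumption here.
    """
    y, cr, cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.34414 * (cb - 128.0) - 0.71414 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 255.0)
```

Neutral gray (Y = Cr = Cb = 128) maps back to equal R, G, B, mirroring the forward-conversion check.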
- FIG. 4 shows an example of the original image
- FIG. 5 shows an example of a restored image obtained by JPEG-compressing the original image and decoding the compressed data.
- mosquito noise is visible in Fig. 5.
- mosquito noise is a gradation fluctuation that looks as if mosquitoes were flying around the edges in the restored image. It arises because many high-frequency components are lost through quantization of the DCT coefficients during encoding, so the strong edges that existed in the original image are not restored accurately.
- Block distortion refers to a phenomenon in which the gradation becomes discontinuous at the block boundaries of the restored image because the encoding is performed in units of 8×8-pixel blocks. This noise appears prominently in areas where the gradation changed gradually in the original image.
- Japanese Patent No. 2962828 discloses a method in which, when encoding the original image, information identifying the blocks where an original edge of the original image lies on a block boundary is added to the compressed data.
- at the time of decoding, based on this information, the blocks of the restored image are divided into blocks in which an original edge lies on the block boundary and all other blocks.
- block distortion removal processing is then applied to the latter blocks, while edge-preserving processing is applied to the blocks in which an original edge of the original image lies on the block boundary.
- with this method, however, information identifying the blocks where an original edge lies on a block boundary must be added to the compressed data. The encoding device therefore has to be given both a function to identify such blocks and a function to add the identifying information to the compressed data, which is a problem.
- the convex projection method refers to a method of alternately and repeatedly performing a smoothing process and a projection process based on constraints. The processing procedure of the convex projection method will be described below with reference to FIG.
- the constraint condition calculation unit 21 calculates the constraint condition for the projection processing.
- the above constraint condition limits each DCT coefficient of each block of the finally output image to the range of DCT coefficients that the original image could have had.
- the DCT coefficients are quantized in the course of JPEG encoding and decoding. As described for the JPEG decoding process, each DCT coefficient before quantization is guaranteed to be equal to or greater than the lower limit dDCT[v][u] and less than the upper limit pDCT[v][u]. The constraint condition calculation unit 21 therefore calculates the lower limit dDCT[v][u] and the upper limit pDCT[v][u], which define the admissible range of each DCT coefficient, as the constraint condition for the projection processing (see (Equation 5) and (Equation 6) above).
- the smoothing processing section 22 performs filter processing on the restored image to smooth it uniformly.
- the smoothed image data is color-converted into YCrCb data by the color conversion unit 23, and then DCT-converted by the DCT conversion unit 24.
- the projection processing unit 25 performs the projection processing using the lower limit dDCT[v][u] and the upper limit pDCT[v][u] of the DCT coefficients calculated by the constraint condition calculation unit 21. That is, if a DCT coefficient calculated by the DCT transform unit 24 is smaller than the lower limit dDCT[v][u] or equal to or larger than the upper limit pDCT[v][u], the DCT coefficient is rounded to the limit of its admissible range.
- specifically, if the DCT coefficient is smaller than the lower limit dDCT[v][u], the projection processing unit 25 replaces it with the lower limit dDCT[v][u]; if it is equal to or larger than the upper limit pDCT[v][u], the projection processing unit 25 replaces it with the upper limit pDCT[v][u].
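The projection step described above is a per-coefficient clamp into the admissible range; a minimal sketch (treating the replacement value at the upper bound as pDCT itself, a small simplification of "rounded to the limit value"):

```python
import numpy as np

def project(dct, d, p):
    """Projection step of the convex projection method.

    Each coefficient below its lower limit d (= dDCT[v][u]) is replaced
    by d; each coefficient above its upper limit p (= pDCT[v][u]) is
    replaced by p; coefficients already inside the range pass through.
    """
    return np.clip(dct, d, p)
```

With the earlier example bounds [88, 104), a smoothed coefficient of 80 is pulled up to 88, 96 is untouched, and 120 is pulled down to 104.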
- the color conversion unit 27 color-converts the YC r C b data into RGB data.
- the end determination unit 28 determines whether to end or continue the noise removal processing. When it is determined that the processing is to be continued, the components from the smoothing processing unit 22 to the color conversion unit 27 repeat the same processing again.
- FIG. 7 shows an image obtained by processing the restored image of FIG. 5 with the conventional convex projection method. As FIG. 7 shows, repeating the smoothing process and the constraint-based projection process reduces the noise present in the restored image without producing large blur.
- the end determination unit 28 need not repeat the smoothing process and the projection process a preset number of times; it may instead decide whether to repeat them based on an evaluation index obtained from the image. As one example, the end determination unit 28 may terminate the smoothing and projection when the amount of change in the image after each iteration becomes small. Specifically, if the image after the k-th iteration is expressed as f_k(x, y) and the image after the (k+1)-th iteration as f_{k+1}(x, y), the amount of change E of the image at the (k+1)-th iteration is calculated by the following equation (Equation 9).
- when this amount of change E becomes sufficiently small, the end determination unit 28 determines to end the smoothing processing and the projection processing.
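(Equation 9) is not reproduced; a common choice for such a change index is the sum of squared pixel differences between successive iterates, which is what the sketch below assumes.

```python
import numpy as np

def change_amount(f_k, f_k1):
    """Evaluation index E for the (k+1)-th iteration.

    Assumed form: E = sum over all pixels of (f_{k+1}(x, y) - f_k(x, y))^2.
    The end determination unit would stop iterating once E falls below a
    threshold (or stops decreasing).
    """
    diff = f_k1.astype(float) - f_k.astype(float)
    return float(np.sum(diff * diff))
```

For a 2×2 image whose every pixel changes by 1, E is 4; as the iteration converges, successive images change less and E shrinks toward 0.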
- Japanese Patent Laid-Open No. 7-175018 discloses a technique in which an image is divided into a plurality of small regions by sequentially connecting adjacent pixels whose pixel-value difference is smaller than a predetermined value (the small regions are not necessarily of equal size), and smoothing is performed within each region. With this method the image is segmented at edges, so the smoothing is not performed across an edge, and blur at edge portions can be relatively suppressed.
- however, the region may also be divided along the gradation discontinuities that block distortion produces at block boundaries.
- in that case the block distortion cannot be reduced and will remain.
- in view of the above-described conventional problems, an object of the present invention is to provide an image processing apparatus capable of suppressing blur at edge portions while removing block distortion.
- in the present invention, the blocks in the restored image to be subjected to the block distortion removal processing are specified without adding to the compressed data information identifying the blocks where an original edge of the original image lies on a block boundary.
- the value of the pixel a1 at corner a of the block of interest X and the values of the pixels a2, a3, and a4 at corner a in the three blocks L, LU, and U adjacent to the block of interest X
- are averaged, and the averaged value is taken as the estimate of the value of pixel a1 at corner a of the block of interest X.
- the value of pixel a1 in the restored image is subtracted from this estimate, and the difference is regarded as the correction amount for pixel a1.
- correction amounts are likewise calculated for the pixels b1, c1, and d1 located at corners b, c, and d of the block of interest X.
- the correction amounts for pixels a1, b1, c1, and d1 are then weighted according to the distance from a1, b1, c1, and d1 and averaged with those weights,
- whereby the correction amount of each pixel in the block X is calculated.
- the calculated correction amount of each pixel is added to the value of the corresponding pixel of the restored image to obtain an image with reduced block distortion.
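The prior-art steps above can be sketched as follows. The text says the four corner correction amounts are "weighted according to the distance" and averaged; the bilinear weighting used here is an assumption about that rule, and the corner-to-index mapping (a = top-left, etc.) is likewise illustrative.

```python
import numpy as np

def corner_correct(block, est_a, est_b, est_c, est_d):
    """Prior-art block distortion reduction (cf. FIG. 8).

    est_a..est_d are the corner values estimated by averaging with the
    three neighboring blocks. The correction amount at each corner is
    (estimate - restored value); corrections are spread over the 8x8
    block by bilinear (distance-based) weighting.
    """
    ca = est_a - block[0, 0]  # corner a -> pixel a1 (top-left, assumed)
    cb = est_b - block[0, 7]  # corner b -> pixel b1 (top-right, assumed)
    cc = est_c - block[7, 0]  # corner c -> pixel c1 (bottom-left, assumed)
    cd = est_d - block[7, 7]  # corner d -> pixel d1 (bottom-right, assumed)
    w = np.arange(8) / 7.0    # weight 0 at one edge, 1 at the opposite edge
    wy, wx = np.meshgrid(w, w, indexing="ij")
    corr = (ca * (1 - wy) * (1 - wx) + cb * (1 - wy) * wx
            + cc * wy * (1 - wx) + cd * wy * wx)
    return block + corr
```

When all four corner estimates equal the same value, the bilinear weights sum to one everywhere and the whole block shifts uniformly, which illustrates why, as the next bullets note, the four blocks meeting at a corner end up sharing the same corner value.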
- with this method, at each corner of the block of interest X, the pixel value at that corner of the block of interest X and the pixel values at the corresponding corners of the three adjacent blocks all end up with the same value.
- since every corner of every block thus takes the same pixel value as its neighbors, a problem arises in that a smooth gradation change is not reproduced at the corners of the blocks.
- an object of the present invention is to provide an image processing apparatus that reproduces a smooth gradation change at each corner of each block constituting a restored image and removes block distortion.
- Japanese Patent Application Laid-Open No. 8-214309 discloses a technique of switching a smoothing filter in accordance with a compression ratio of compressed data to an original image and performing a filtering process on a restored image.
- however, how severely noise such as mosquito noise and block distortion degrades the visual impression depends not only on how strongly the original image was compressed but also, to a large extent, on the output size of the restored image.
- when the restored image is enlarged and output, the above noise is very conspicuous and its adverse visual effect is large; on the other hand, when the restored image is reduced and output, the noise becomes less conspicuous and the adverse visual effect is small.
- in view of the above-described conventional problems, an object of the present invention is to provide an image processing apparatus that performs efficient noise removal processing suited to the output of the restored image, taking into account the magnification at which the restored image is output.
Disclosure of the invention
- an image processing apparatus is an apparatus for processing a restored image obtained by decoding compressed data obtained by compressing data forming an original image.
- the image processing apparatus further includes an area specifying unit that specifies a block distortion area to which the block distortion removal processing is applied in the restored image.
- the image processing apparatus of the present invention further includes a block distortion region noise removing unit that performs a noise removal process on the block distortion region specified by the region specifying unit.
- an image processing method of the present invention is a method of processing a restored image obtained by decoding compressed data produced by compressing the data forming an original image, comprising a region specifying step of specifying, in the restored image, a block distortion region to which block distortion removal processing is applied.
- the image processing method of the present invention further includes a block distortion region noise removing step of performing a noise removal process on the block distortion region specified in the region specifying step.
- an image processing apparatus of the present invention is an apparatus for processing a restored image obtained by decoding compressed data produced by compressing the data forming an original image, and is characterized by having an enlargement ratio detection unit that detects the enlargement ratio of the output image with respect to the restored image, and a noise removal unit that removes noise present in the restored image based on the enlargement ratio detected by the enlargement ratio detection unit.
- the image processing apparatus of the present invention also includes an image enlargement unit that enlarges the restored image from which noise has been removed by the noise removal unit, based on the enlargement ratio detected by the enlargement ratio detection unit.
- an image processing method of the present invention is a method of processing a restored image obtained by decoding compressed data produced by compressing the data forming an original image, comprising an enlargement ratio detection step of detecting the enlargement ratio of the output image with respect to the restored image, and a noise removal step of removing noise present in the restored image based on the enlargement ratio detected in the enlargement ratio detection step.
- the image processing method of the present invention also includes an image enlargement step of enlarging the restored image from which noise has been removed in the noise removal step, based on the enlargement ratio detected in the enlargement ratio detection step.
- FIG. 1 is a diagram showing a JPEG encoding and decoding processing procedure.
- FIG. 2 is a diagram showing a quantization table recommended by the JPEG standard method for luminance components.
- FIG. 3 is a diagram showing a quantization table recommended by the JPEG standard method for color difference components.
- FIG. 4 is a diagram showing an example of the original image.
- FIG. 5 is a diagram showing a restored image obtained by decoding the JPEG compressed data of the original image of FIG.
- FIG. 6 is a diagram showing a convex projection method processing procedure.
- FIG. 7 is a diagram showing an image obtained by subjecting the restored image shown in FIG. 5 to conventional convex projection processing.
- FIG. 8 is a diagram for explaining conventional block distortion removal processing.
- FIG. 9 is a diagram illustrating a configuration and a processing procedure of the image processing apparatus according to the first embodiment.
- FIG. 10 is a diagram for explaining a method of specifying a block distortion region.
- FIG. 11 is a diagram for explaining a method of specifying a block distortion region.
- FIG. 12 is a diagram for explaining a method of specifying an edge region.
- FIG. 13 is a diagram for explaining a block intersection.
- FIG. 14 is a diagram showing the internal configuration of the block distortion region noise elimination unit 104.
- FIG. 15 is a diagram for explaining the density of block intersection pixels.
- FIG. 16 is a diagram for explaining pixel values in the block distortion region X to be processed.
- FIG. 17 is a diagram illustrating an image from which block distortion has been removed by the image processing apparatus according to the first embodiment.
- FIG. 18 is a diagram illustrating a configuration and a processing procedure of another image processing apparatus according to the first embodiment.
- FIG. 19 is a diagram illustrating a configuration and a processing procedure of the image processing apparatus according to the second embodiment.
- FIG. 20 is a diagram illustrating an example of the filter determination table.
- FIG. 21 is a diagram illustrating an example of the filter determination table.
- FIG. 22 is a diagram illustrating an example of the filter determination table.
- FIG. 23 is a diagram showing the internal configuration of the noise removing unit 305.
- FIG. 24 is a diagram illustrating a configuration and a processing procedure of the block distortion region noise elimination unit 401.
- FIG. 25 is a diagram for explaining the block distortion removal processing.
- FIG. 26 is an explanatory diagram of a process for removing block distortion at the left and right block boundaries of the block distortion region.
- FIG. 27 is a diagram for explaining the block distortion removal processing.
- FIG. 28 is an explanatory diagram of the processing for removing the block distortion at the upper and lower block boundaries of the block distortion area.
- FIG. 9 shows a configuration and a processing procedure of the image processing apparatus according to the first embodiment.
- the DCT transform unit 101 performs a DCT on the restored image obtained from the JPEG compressed data, and the constraint condition calculation unit 102 calculates the constraint conditions used in the projection processing. Note that the constraints calculated here are the same as those used in the ordinary convex projection method.
- the DCT transform coefficient is an example of the orthogonal transform coefficient.
- the area specifying unit 103 specifies a “block distortion area”, an “edge area”, and a “homogeneous area” in the restored image decoded from the JPEG compressed data.
- the area specifying unit 103 divides the restored image into three areas: a “block distortion area”, an “edge area”, and a “homogeneous area”.
- the region specifying unit 103 first specifies a “block distortion region” in the restored image, and specifies an “edge region” from the region excluding the “block distortion region” in the restored image. Then, an area that does not belong to the block distortion area or the edge area is specified as a “homogeneous area”.
- since the values of the quantization table are set larger for the chrominance components than for the luminance component, the chrominance components are degraded significantly more than the luminance component. Therefore, in the first embodiment, the RGB data of the restored image is color-converted into YCrCb data, and only the luminance component (Y) is used to divide the image into the "block distortion area", "edge area", and "homogeneous area".
- the area where block distortion is obtrusive and needs correction is an area where the gradation changed gradually in the original image, that is, an area that does not contain many high-frequency components. Therefore, the area specifying unit 103 performs a DCT on every block of the restored image and identifies as a block distortion region each block in which all DCT coefficient values at or above a predetermined frequency (order) are at or below a predetermined value.
- for example, as shown in FIG. 10, the area specifying unit 103 identifies a block in which the values of the DCT coefficients of the third order or higher are all zero as a block distortion region.
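The test above can be sketched as a simple mask over the 8×8 coefficient array. The document does not define "order" precisely; taking the order of coefficient (v, u) as u + v is an assumption (so third order or higher means u + v ≥ 3), and the tolerance parameter is illustrative.

```python
import numpy as np

def is_block_distortion_candidate(dct, max_order=2, tol=0.0):
    """Return True when every DCT coefficient whose order exceeds
    `max_order` is (within `tol` of) zero.

    Order is assumed to be u + v, so max_order=2 leaves the DC and the
    first- and second-order AC components free, matching the FIG. 10
    example of "third order or higher all zero".
    """
    v, u = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    high = (u + v) > max_order
    return bool(np.all(np.abs(dct[high]) <= tol))
```

A stricter variant, as the following bullets suggest, would accept a block only if all eight surrounding blocks also pass this test, to keep mosquito noise in neighbors from corrupting the correction amounts.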
- the block specified as a block distortion region is subjected to noise removal processing by a method described later.
- however, mosquito noise occurring in a block adjacent to a block specified as a block distortion region may cause the correction amount of each pixel value in the specified block to take an inappropriate value.
- for this reason, among the blocks in which all DCT coefficient values at or above the predetermined frequency (order) are at or below the predetermined value, the region specifying unit 103 preferably identifies as a block distortion region only a block that is surrounded exclusively by blocks that also satisfy this condition.
- that is, among the blocks whose DCT coefficients satisfy the condition shown in FIG. 10, the region specifying unit 103 preferably specifies as block distortion regions only those blocks surrounded solely by blocks whose DCT coefficients also satisfy the condition shown in FIG. 10.
- suppose a certain block of interest X is a block in which all DCT coefficient values at or above the predetermined frequency (order) are at or below the predetermined value.
- that is, as shown in FIG. 11, only the DC component and the first- and second-order AC components of the DCT coefficients of the block of interest X have arbitrary values,
- and the remaining coefficients of the block are zero.
- further suppose that, as shown in FIG. 11, all of the eight blocks LU, U, RU, L, R, LD, D, and RD surrounding the block of interest X likewise have all DCT coefficient values at or above the predetermined frequency (order) at or below the predetermined value.
- FIG. 12 is a diagram showing the configuration for specifying an edge region in the region specifying unit 103.
- the smoothing processing unit 201 performs a smoothing process using a Gaussian filter or the like on the restored image decoded from the JPEG compressed data.
- Smoothing is performed first because the restored image contains much noise, which needs to be reduced before edge detection.
- the secondary differential filter processing unit 202 performs a secondary differential filter process (for example, a Laplacian filter process) on the image smoothed by the smoothing processing unit 201, and thereafter,
- the edge candidate pixel detection unit 203 detects an edge candidate pixel using the zero-crossing method.
- a major feature of the detection method using the zero-crossing method is that the center of the edge can be detected with high accuracy.
- edge candidate pixels include pixels having an edge component caused by block distortion or mosquito noise.
- This is because, with the zero-crossing method, the edge candidate pixel detection unit 203 detects every pixel at which the second-derivative filter result (for example, the Laplacian filter result) changes from positive to negative or from negative to positive, even where the gradation change is extremely small.
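The second-derivative filtering and zero-crossing detection described above can be sketched as follows. This is an illustrative sketch in plain Python; the 4-neighbour Laplacian kernel and the convention of marking the pixel on the left/upper side of each sign change are my own choices, not specified in the patent:

```python
def laplacian(img):
    """4-neighbour Laplacian of a 2D image (borders left at zero)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x]
                         + img[y][x - 1] + img[y][x + 1]
                         - 4 * img[y][x])
    return out

def zero_crossing_pixels(lap):
    """Edge candidate pixels: positions where the second-derivative
    response changes sign against the right or lower neighbour."""
    h, w = len(lap), len(lap[0])
    edges = set()
    for y in range(h - 1):
        for x in range(w - 1):
            if lap[y][x] * lap[y][x + 1] < 0 or lap[y][x] * lap[y + 1][x] < 0:
                edges.add((x, y))
    return edges
```

Because any sign change is detected regardless of its magnitude, even extremely small gradation changes produce edge candidate pixels, which is why the subsequent screening steps are needed.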
- From the edge candidate pixels detected by the edge candidate pixel detection unit 203, the block distortion edge specifying unit 204 identifies the edge candidate pixels caused by block distortion. For example, the block distortion edge specifying unit 204 identifies an edge candidate pixel that is located on a block boundary and is not adjacent to any edge candidate pixel located inside the block as an edge candidate pixel caused by block distortion.
- From the edge candidate pixels detected by the edge candidate pixel detection unit 203, the mosquito noise edge specifying unit 205 identifies the edge candidate pixels caused by mosquito noise. If a strong edge exists in a block of the original image, weak gradation fluctuations appear around the original edge in the restored image, that is, mosquito noise occurs. Therefore, if edge candidate pixels with relatively high edge intensity and edge candidate pixels with relatively low edge intensity exist in the same block, the pixels with relatively low edge intensity are highly likely to be edge candidate pixels caused by mosquito noise.
- Specifically, the mosquito noise edge specifying unit 205 examines the 4-neighbour connectivity of the edge candidate pixels detected in each block: for each detected edge candidate pixel, it examines the pixels located above, below, to the left, and to the right, and when an edge candidate pixel exists among them, determines that those edge candidate pixels are connected. The set of all edge candidate pixels finally determined to be connected in a block, as a result of examining the 4-neighbour connectivity of every edge candidate pixel in that block, is called a connected edge candidate. If a plurality of connected edge candidates exist in the same block, the mosquito noise edge specifying unit 205 calculates the edge strength of each edge candidate pixel using a Sobel filter and computes the average edge strength for each connected edge candidate.
- The mosquito noise edge specifying unit 205 then identifies all pixels constituting any connected edge candidate whose average edge strength is relatively weak, by more than a predetermined ratio within the same block, as edge candidate pixels caused by mosquito noise. For example, when the average edge strength of a certain connected edge candidate is less than 80% of that of the connected edge candidate with the highest average edge strength in the same block, the mosquito noise edge specifying unit 205 identifies all pixels constituting that connected edge candidate as edge candidate pixels caused by mosquito noise.
- the minute gradation change edge identifying unit 206 detects edge candidate pixels whose absolute edge intensity is smaller than a predetermined value among the edge candidate pixels detected by the edge candidate pixel detecting unit 203.
- For example, the minute gradation change edge specifying unit 206 calculates the edge strength of each edge candidate pixel using a Sobel filter and identifies pixels whose strength is equal to or less than a predetermined value as minute gradation change pixels.
- The edge pixel specifying unit 207 identifies as edge pixels those edge candidate pixels detected by the edge candidate pixel detection unit 203 that remain after excluding the edge candidate pixels caused by block distortion specified by the block distortion edge specifying unit 204, the edge candidate pixels caused by mosquito noise specified by the mosquito noise edge specifying unit 205, and the minute gradation change pixels specified by the minute gradation change edge specifying unit 206.
- the edge region specifying unit 208 specifies an edge region in the restored image based on the edge pixel specified by the edge pixel specifying unit 207.
- An edge pixel specified by the edge pixel specifying unit 207 is a pixel located at the center of an edge, and the surrounding pixels adjacent to it are also highly likely to undergo a relatively sharp gradation change in the original image.
- the edge region specifying unit 208 specifies a region including a plurality of pixels within a predetermined distance range from the edge pixel specified by the edge pixel specifying unit 207 as an edge region.
- the block distortion edge identification unit 204 identifies the edge candidate pixel caused by the block distortion, and then the mosquito noise edge identification unit 205 identifies the edge candidate pixel caused by the mosquito noise. Thereafter, the minute gradation change edge specifying unit 206 specifies the minute gradation change pixel.
- the order of specifying the edge candidate pixels caused by block distortion, the edge candidate pixels caused by mosquito noise, and the minute gradation change pixels is not limited.
- The region specifying unit 103 identifies the regions of the restored image that belong to neither a block distortion region nor an edge region as homogeneous regions.
- This homogeneous area is an area composed of an area where mosquito noise occurs in the restored image and an area where the gradation value changes relatively smoothly.
- The block distortion region noise removing unit 104, the edge region noise removing unit 105, and the homogeneous region noise removing unit 106 perform image processing corresponding to the block distortion region, the edge region, and the homogeneous region, respectively.
- A block distortion region, in which block distortion is obtrusive, is mainly a region where the gradation changes gradually in the original image. The distortion arises because encoding is performed independently on a block-by-block basis, so that quantization destroys the continuity of gradation at the boundaries between adjacent blocks.
- It is difficult to eliminate this discontinuity in gradation merely by smoothing with a simple filter. For this reason, special processing is required to remove block distortion effectively.
- a block distortion removal method for effectively removing block distortion by applying pixel interpolation will be described below.
- the processing procedure of the block distortion removal method according to the first embodiment will be described with reference to FIGS. 13 to 16.
- A block intersection is a point at which four blocks meet when the restored image is divided into blocks of 8 × 8 pixels; these points are the block intersections 180 (180A to 180D) described below.
- the virtual pixel density (pixel value) at each block intersection 180 is referred to as the density of the block intersection pixel.
- FIG. 14 shows the internal configuration of the block distortion region noise elimination unit 104.
- the block intersection pixel density assigning section 160 calculates the density of the block intersection pixels.
- the density of the block intersection pixels is individually given from four blocks adjacent to each block intersection 180 of each block distortion area specified by the area specifying unit 103. That is, each block intersection 180 is given the density of four block intersection pixels.
- FIG. 15 shows the block distortion region (8 × 8 pixel block) X to be processed, surrounded by eight blocks (8 × 8 pixel blocks) LU, U, RU, L, R, LD, D, and RD. At each corner of the block distortion region X to be processed there is a block intersection 180 (180A to 180D), as shown in FIG. 15.
- For example, the block intersection 180A is given the density A[4] of the block intersection pixel calculated from the pixels belonging to the block distortion region X to be processed, the density A[1] calculated from the pixels belonging to the block distortion region LU at the upper left of the block distortion region X, the density A[2] calculated from the pixels belonging to the block distortion region U above the block distortion region X, and the density A[3] calculated from the pixels belonging to the block distortion region L to the left of the block distortion region X.
- In this way, each block intersection 180 (180A to 180D) at each corner of the block distortion region X to be processed is given four block intersection pixel densities, each calculated from the pixels belonging to one of the four blocks surrounding that block intersection 180.
- For each block intersection 180 at each corner of the block distortion region X to be processed, the block intersection pixel density assigning unit 160 takes as targets the pixels whose distance from the block intersection 180 is within a certain value, and calculates the density of the block intersection pixel as the average of their pixel values weighted by the reciprocal of the distance from the block intersection 180.
- For example, the block intersection pixel density assigning unit 160 calculates the average value of the pixels whose Euclidean distance from the block intersection 180 is within 2, using the reciprocal of the Euclidean distance as the weight, and assigns that average to the block intersection 180 as the density of the block intersection pixel. As an example, the method of calculating the density A[4] of the block intersection pixel at the block intersection 180A using the pixel values in the block distortion region X to be processed will be described.
- The density A[4] is calculated by the following equation (Equation 10):
- A[4] = {√2·f(0, 0) + (√2/√5)·(f(1, 0) + f(0, 1))} / (√2 + 2·√2/√5)
- Here f(0, 0), f(1, 0), and f(0, 1) are the values of the pixels in the block distortion region X to be processed, shown in FIG. 16, whose centers lie within a Euclidean distance of 2 from the block intersection 180A; the weights √2 and √2/√5 are the reciprocals of the distances from the block intersection 180A to the respective pixel centers.
- In this way, the block intersection pixel density assigning unit 160 assigns four block intersection pixel densities to each of the four block intersections 180A to 180D at the corners of the block distortion region X to be processed.
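The weighted average of Equation 10 can be sketched as follows for the upper-left corner of one block. This is an illustrative sketch in plain Python; placing pixel centres half a pixel in from the intersection is an assumption consistent with the distances used in Equation 10:

```python
from math import hypot

def corner_density(block):
    """Density of the block intersection pixel at the upper-left corner
    of an 8x8 block: the average of the pixel values whose centres lie
    within Euclidean distance 2 of the corner, weighted by the
    reciprocal of that distance. With the corner at (0, 0) and pixel
    centres at (x + 0.5, y + 0.5), only f(0,0), f(1,0) and f(0,1)
    qualify, matching Equation 10."""
    num = den = 0.0
    for y in range(2):
        for x in range(2):
            d = hypot(x + 0.5, y + 0.5)
            if d <= 2.0:                  # f(1,1) is at ~2.12 and excluded
                w = 1.0 / d
                num += w * block[y][x]
                den += w
    return num / den
```

For a flat block the weighted average simply reproduces the common pixel value, as expected of a density estimate.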
- Next, the corner correction amount calculation unit 161 calculates the correction amount of each block intersection 180 using the densities of the block intersection pixels assigned to that block intersection 180.
- the correction amount of each block intersection 180 is referred to as the density correction amount of the block intersection pixel.
- Specifically, for each block intersection 180, the corner correction amount calculation unit 161 subtracts the density of the block intersection pixel calculated from the pixels in the block distortion region X to be processed (for example, A[4] at the block intersection 180A) from the average of the four block intersection pixel densities given to that intersection (for example, A[1] to A[4] at the block intersection 180A), and thereby calculates the density correction amount of the block intersection pixel.
- a method of calculating the density correction amount of the block intersection pixel will be described with an example of calculating the density correction amount of the block intersection pixel at the block intersection 180A.
- The density correction amount dA of the block intersection pixel at the block intersection 180A is calculated by the following equation (Equation 11):
- dA = (A[1] + A[2] + A[3] + A[4]) / 4 − A[4]
- Next, the correction amount calculation unit 162 calculates the correction amount (pixel value correction amount) of each pixel in the block distortion region X to be processed, based on the density correction amounts of the block intersection pixels at the four block intersections 180 surrounding the block distortion region X. In the following, the density correction amounts of the block intersection pixels at the block intersections 180A to 180D are denoted dA, dB, dC, and dD, respectively.
- For each pixel in the block distortion region X, the correction amount calculation unit 162 computes the weighted average of dA, dB, dC, and dD, using as weights the reciprocals of the Euclidean distances between the center of the pixel and the block intersections 180A to 180D, and uses this weighted average as the correction amount of the pixel.
- Finally, the pixel value correction unit 163 adds the pixel value correction amount g(x, y) calculated by the above equation (Equation 12) to the corresponding pixel value in the block distortion region X to be processed to obtain a new pixel value. In this way, the noise removal processing in the block distortion region is performed.
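The interpolation of the four corner correction amounts over the block (Equation 12) and the final addition can be sketched as follows. This is an illustrative sketch in plain Python; the assignment of dA, dB, dC, dD to the upper-left, upper-right, lower-left and lower-right intersections is an assumption, as is the half-pixel centre convention:

```python
from math import hypot

def correct_block(block, dA, dB, dC, dD):
    """Add to each pixel of an 8x8 block the weighted average of the
    four corner density correction amounts, weighted by the reciprocal
    of the Euclidean distance from the pixel centre to each block
    intersection (corners assumed at (0,0), (8,0), (0,8), (8,8))."""
    corners = [((0, 0), dA), ((8, 0), dB), ((0, 8), dC), ((8, 8), dD)]
    out = []
    for y in range(8):
        row = []
        for x in range(8):
            cx, cy = x + 0.5, y + 0.5
            num = den = 0.0
            for (ix, iy), d in corners:
                w = 1.0 / hypot(cx - ix, cy - iy)
                num += w * d
                den += w
            row.append(block[y][x] + num / den)   # g(x, y) = num / den
        out.append(row)
    return out
```

When the four correction amounts are equal, every pixel shifts by that same amount; otherwise the correction varies smoothly across the block, which is what removes the step at the block boundary.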
- the above-described block distortion removal processing is very effective as a method for removing block distortion that has occurred in a region where the gradation changes gradually.
- In a restored image containing block distortion, the values of the four pixels surrounding each corner of the block distortion region X to be processed are not the same.
- As described for the method by which the region specifying unit 103 specifies block distortion regions, it is preferable to identify as block distortion regions, among the blocks in which all DCT coefficient values at or above a predetermined frequency (order) are at or below a predetermined value, only those blocks surrounded solely by blocks that also satisfy this condition.
- Blocks adjacent to the block distortion region specified in this manner are unlikely to contain edges. As described above, mosquito noise is often present around the edge, but if no edge is included, it is unlikely that mosquito noise is present. Therefore, the pixel value near the block intersection of the block adjacent to the block distortion region specified as described above is rarely locally inappropriate due to the influence of mosquito noise. Therefore, it is possible to avoid that the density correction amount of the block intersection pixel becomes an inappropriate value. In addition, by performing the above-described block distortion removal processing individually for each of the RGB components in the restored image, all the color components can be corrected.
- The edge region noise removing unit 105 performs edge-preserving smoothing, such as median filter processing, on each RGB color component of each pixel in the edge region specified by the region specifying unit 103, that is, the region formed by the original edge pixels existing in the original image and their surrounding pixels. In this way, the edge region noise removing unit 105 removes noise while preserving the edges.
- The homogeneous region specified by the region specifying unit 103 consists of regions where mosquito noise occurs in the restored image and regions where the gradation value changes relatively smoothly. In a region where the gradation change is smooth, adjacent pixels have similar pixel values, so even strong smoothing changes the pixel values only slightly and has little influence on image quality.
- Therefore, the homogeneous region noise removing unit 106 applies to each RGB color component of each pixel in the homogeneous region a strong FIR (Finite Impulse Response) smoothing process that replaces each pixel value with the simple average of its 3 × 3 neighborhood, thereby reducing mosquito noise.
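The strong 3 × 3 averaging described above can be sketched as follows. This is an illustrative sketch in plain Python; handling of the image border (left unchanged here) is my own simplification:

```python
def smooth3x3(img):
    """Strong FIR smoothing for the homogeneous region: each interior
    pixel is replaced by the simple average of its 3x3 neighbourhood;
    border pixels are left unchanged for brevity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s / 9.0
    return out
```

In a region of smooth gradation this barely changes the pixel values, while isolated mosquito-noise fluctuations are strongly attenuated.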
- the restored image is analyzed, and the restored image is divided into three regions: a block distortion region, an edge region, and a homogeneous region, and noise removal processing suitable for each region is performed.
- Strongly occurring block distortion can be effectively removed while retaining the original edges that existed in the image.
- projection processing based on the constraint conditions calculated by the constraint condition calculation unit 102 is performed on the image on which noise removal processing has been performed for each area.
- The color conversion unit 107 converts the image data composed of RGB into YCrCb data.
- The DCT transform unit 108 performs a DCT transform on the YCrCb data, and the projection processing unit 109 performs projection processing based on the constraint conditions calculated by the constraint condition calculation unit 102.
- The inverse DCT transform unit 110 restores the projected DCT coefficients to image data composed of YCrCb by an inverse DCT transform, and the color conversion unit 111 converts the YCrCb data into RGB data to obtain the restored image.
- The end determination unit 112 determines whether to end or continue the noise removal processing. If the end determination unit 112 determines that the processing should end, the processing ends and an image from which noise has been removed is output. Otherwise, the processing returns to the step in which the region specifying unit 103 specifies the block distortion regions, edge regions, and homogeneous regions, and the subsequent processing is repeated.
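The projection step can be illustrated per coefficient as follows. The exact constraint conditions computed by the constraint condition calculation unit 102 are not given in this excerpt; the sketch assumes the common choice that a coefficient quantized to level k with step q must lie in the half-step interval [q·(k − 0.5), q·(k + 0.5)]:

```python
def project_coefficient(c, q, k):
    """Project a modified DCT coefficient c back into the interval
    allowed by quantization step q and quantized level k, assumed here
    to be [q*(k - 0.5), q*(k + 0.5)]. Coefficients already inside the
    interval are returned unchanged."""
    lo, hi = q * (k - 0.5), q * (k + 0.5)
    return min(max(c, lo), hi)
```

Clamping each coefficient this way guarantees that the denoised image remains consistent with the original JPEG compressed data, so repeated denoise-and-project iterations cannot drift away from it.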
- In JPEG encoding, the information of the chrominance components (Cr, Cb) is often sub-sampled relative to the luminance component (Y).
- one chrominance component pixel is assigned to a luminance component of 2 ⁇ 2 pixels.
- In this case, the DCT transform unit 108 performs a DCT transform on the sub-sampled data to calculate the DCT coefficients, and the projection processing unit 109 performs projection processing separately on the DCT coefficients of the luminance component and of the chrominance components. For the chrominance components, the projection processing unit 109 performs the projection using the values of the constraint conditions corresponding to the sub-sampled data.
- The inverse DCT transform unit 110 performs an inverse DCT transform on the projected DCT coefficients and then inverts the sub-sampling, interpolating the thinned-out chrominance component data up to the same number of pixels as the luminance component. With the above processing, the same noise removal can be realized even when sub-sampling was performed during JPEG encoding.
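The inverse of the 2 × 2 sub-sampling can be sketched as follows. This is an illustrative sketch in plain Python using nearest-neighbour replication; real decoders may use smoother interpolation, and the function name is my own:

```python
def upsample_chroma_2x(ch):
    """Invert 2x2 chroma sub-sampling by replicating each chrominance
    sample over the 2x2 luminance pixels it was assigned to."""
    out = []
    for row in ch:
        wide = [v for v in row for _ in (0, 1)]   # double horizontally
        out.append(wide)
        out.append(wide[:])                       # double vertically
    return out
```

After this step the chrominance planes again have the same number of pixels as the luminance plane, so the color conversion back to RGB can proceed as usual.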
- FIG. 17 shows an image obtained by processing the restored image shown in FIG. 5 by the image processing apparatus according to the first embodiment.
- As can be seen, the image processing by the image processing apparatus of the first embodiment sufficiently removes the unsightly block distortion present in the restored image shown in FIG. 5, without dulling the original edges existing in the original image.
- the image processing apparatus according to the first embodiment can also remove mosquito noise.
- the same processing can be performed for a monochrome halftone image.
- In that case, the processing of the color conversion unit 107 and the color conversion unit 111 is omitted, and the region specifying unit 103 uses the grayscale values of the monochrome image to specify the block distortion regions, edge regions, and homogeneous regions. The block distortion region noise removing unit 104, the edge region noise removing unit 105, and the homogeneous region noise removing unit 106 then perform the same processing as described above on the block distortion regions, the edge regions, and the homogeneous regions, respectively.
- Alternatively, the region specifying unit 103 may specify only the block distortion regions in the restored image, the block distortion region noise removing unit 104 may perform on them the noise removal processing described above, and the homogeneous region noise removing unit 106 may perform on the remaining regions the noise removal processing it performs in the first embodiment described above. Even in this case, unsightly block distortion present in the restored image can be effectively removed.
- In this case, however, since the homogeneous region noise removing unit 106 performs strong smoothing on all regions other than the block distortion regions, the original edges existing in the original image become dull.
- In the above description, the region specifying unit 103 specified as block distortion regions the blocks in which all DCT coefficient values at or above a predetermined frequency (order) are at or below a predetermined value, and preferably only those such blocks surrounded solely by blocks that also satisfy this condition.
- the area specifying unit 103 may specify the block distortion area as described later.
- That is, the region specifying unit 103 may first specify the edge pixels in the plurality of blocks constituting the restored image as described above, and identify each block in which an edge may exist in the original image as an edge block. Then, among the blocks in which all DCT coefficient values at or above a predetermined frequency (order) are at or below a predetermined value, the region specifying unit 103 may specify those blocks that are not adjacent to an edge block as block distortion regions.
- In this case, mosquito noise caused by edges does not exist in the blocks adjacent to the block distortion region.
- the pixel value near the block intersection of the block adjacent to the block distortion region is rarely locally inappropriate due to the influence of mosquito noise, and the density correction amount of the block intersection pixel is inappropriate. Value can be avoided.
- As a result, the block distortion can be removed and the pixel value of each pixel in the block distortion region can be appropriately corrected.
- Alternatively, the region specifying unit 103 may first specify the edge pixels in the plurality of blocks constituting the restored image as described above and identify each block in which an edge may exist in the original image as an edge block, and then specify the blocks in the restored image, other than the edge blocks, that are not adjacent to any edge block as block distortion regions. In this case as well, mosquito noise caused by edges does not exist in the blocks adjacent to the block distortion region, and the pixel values near the block intersections of those adjacent blocks are rarely locally inappropriate due to mosquito noise. Therefore, the density correction amount of the block intersection pixel can be prevented from taking an inappropriate value.
- the block distortion can be removed and the pixel value of each pixel in the block distortion region can be appropriately corrected.
- The blocks in the restored image other than the edge blocks that are adjacent to an edge block are regarded as a homogeneous region, and the homogeneous region noise removing unit 106 performs strong smoothing on that homogeneous region.
- In the above description, the region specifying unit 103 performed a DCT transform on all the blocks constituting the restored image and specified the blocks in which all DCT coefficient values at or above a predetermined frequency (order) are at or below a predetermined value as block distortion regions. However, instead of performing a DCT transform on each block of the restored image, the region specifying unit 103 may decode and inversely quantize the JPEG compressed data block by block and specify the blocks in which all DCT coefficient values at or above a predetermined frequency are at or below a predetermined value as block distortion regions.
- the region specifying unit 103 may specify a block that is not adjacent to an edge block as a block distortion region.
- a block in which the true edge of the original image is located at the block boundary may exist in the block distortion area specified by the area specifying unit 103.
- If the value of each pixel in the block distortion region X to be processed is corrected using a block intersection pixel density calculated from the pixel values of a block on the far side of such an edge, the edge that lay on the block boundary is completely lost.
- To prevent this, when the difference between the block intersection pixel density calculated from the pixels in the block distortion region X to be processed and a block intersection pixel density calculated from the pixels in one of the three adjacent blocks exceeds a predetermined value, the density correction amount of the block intersection pixel is calculated without using that density. That is, a block intersection pixel density that differs significantly from the density calculated from the pixels in the block distortion region X to be processed is judged to lie across an edge on the block boundary in the original image, and the density correction amount of the block intersection pixel is calculated by subtracting the density calculated from the pixels in the block distortion region X to be processed from the average of the remaining block intersection pixel densities (three or fewer).
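The edge-aware variant of the corner correction can be sketched as follows. This is an illustrative sketch in plain Python; whether the block's own density also enters the average of the kept densities, and the fallback of leaving the corner uncorrected when every neighbour is rejected, are my own interpretations of the text:

```python
def corner_correction(own, neighbours, max_diff):
    """Density correction amount for one block intersection.

    own: density computed from the block distortion region X itself.
    neighbours: densities computed from the (up to three) adjacent blocks.
    Densities differing from `own` by more than max_diff are assumed to
    lie across an edge and are discarded; the correction is the average
    of the remaining densities minus `own`."""
    kept = [d for d in neighbours if abs(d - own) <= max_diff]
    if not kept:
        return 0.0   # every neighbour rejected: leave the corner uncorrected
    return sum(kept) / len(kept) - own
```

With max_diff set large, this reduces to an average-minus-own correction over all neighbours; with a realistic threshold, a block across a true edge no longer drags the correction toward the wrong side.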
- FIG. 19 shows the configuration and processing procedure of the image processing apparatus according to the second embodiment. The operation of the image processing apparatus according to the second embodiment will be briefly described.
- the decoding unit 301 decodes the JPEG compressed data to generate a restored image. obtain.
- the compression ratio detector 302 detects the compression ratio when the restored image has been JPEG-compressed.
- the enlargement ratio detection unit 303 detects the enlargement ratio when outputting the restored image.
- The processing content determination unit 304 determines the content of the noise removal processing based on the compression ratio detected by the compression ratio detection unit 302 and the enlargement ratio detected by the enlargement ratio detection unit 303.
- The noise removing unit 305 performs noise removal processing on the restored image based on the content determined by the processing content determination unit 304.
- The image enlargement unit 306 performs image enlargement processing based on the enlargement ratio detected by the enlargement ratio detection unit 303.
- the decoding unit 301 performs a decoding process from JPEG compressed data to image data. This decoding process is realized by the processes from the entropy decoding unit 15 to the color conversion unit 19 in FIG. 1 described above.
- The decoding unit 301 passes the data amount of the JPEG compressed data before decoding and the data amount of the image data after decoding to the compression ratio detection unit 302.
- the compression ratio detection unit 302 detects the compression ratio when the restored image has been JPEG-compressed based on the information transferred from the decoding unit 301. For example, the compression ratio detection unit 302 detects the compression ratio when the restored image is JPEG-compressed from the ratio of the data amount after decoding to the data amount before decoding.
- The enlargement ratio detection unit 303 detects the enlargement ratio to be used when outputting the restored image. For example, the enlargement ratio detection unit 303 detects the enlargement ratio of the output image with respect to the restored image from the number of pixels of the restored image, the resolution of the output device, and the output image size. Specifically, when a restored image of VGA (Video Graphics Array, 640 × 480 pixels) size is output at A4 size with a resolution of 600 dpi, the enlargement ratio is about 8 in each of the vertical and horizontal directions.
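The VGA-to-A4 arithmetic can be checked as follows. This is an illustrative sketch in plain Python; fitting the image uniformly to the page (taking the smaller per-axis ratio) is my own assumption about how the ratio is derived:

```python
def enlargement_ratio(src_w, src_h, page_w_in, page_h_in, dpi):
    """Uniform enlargement ratio that fits a src_w x src_h pixel image
    onto a page_w_in x page_h_in inch page at `dpi` dots per inch,
    preserving the aspect ratio."""
    return min(page_w_in * dpi / src_w, page_h_in * dpi / src_h)
```

For a 640 × 480 image on an A4 page (about 8.27 × 11.69 inches) at 600 dpi, this gives roughly 7.75, i.e. the "about 8 times" stated above.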
- The processing content determination unit 304 determines the content of the noise removal processing to be performed by the noise removing unit 305, based on the compression ratio detected by the compression ratio detection unit 302 and the enlargement ratio detected by the enlargement ratio detection unit 303.
- For example, the processing content determination unit 304 has a filter determination table, as shown in FIG. 20, indicating a predetermined relationship between the compression ratio, the enlargement ratio, and the filter size.
- the filter determination table indicates a predetermined relationship between the compression ratio and the enlargement ratio and the filter size when there are three levels of the compression ratio and the enlargement ratio, respectively.
- According to the filter determination table, the processing content determination unit 304 selects the filter uniquely determined by the compression ratio detected by the compression ratio detection unit 302 and the enlargement ratio detected by the enlargement ratio detection unit 303. For example, when the compression ratio is level 2 and the enlargement ratio is level 1, the processing content determination unit 304 selects the filter "B-1" as the filter to be used by the noise removing unit 305.
- Here, compression ratio level 1 denotes a higher compression ratio than compression ratio level 2, and compression ratio level 2 a higher compression ratio than compression ratio level 3; likewise, enlargement ratio level 1 denotes a higher enlargement ratio than enlargement ratio level 2, and enlargement ratio level 2 a higher enlargement ratio than enlargement ratio level 3. For example, enlargement ratio level 1 is the level at which the restored image is output enlarged 8 times both vertically and horizontally, enlargement ratio level 2 the level at which it is output enlarged 4 times both vertically and horizontally, and enlargement ratio level 3 the level at which it is output at the same size.
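The table lookup can be sketched as follows. This is an illustrative sketch in plain Python; the full set of entries is hypothetical (only "B-1" and "C-3" appear in the text), with the letter assumed to index the compression ratio level and the number the enlargement ratio level:

```python
# Hypothetical filter determination table in the spirit of FIG. 20:
# key = (compression ratio level, enlargement ratio level), where
# level 1 is the highest in each case; values name the filter to use.
FILTER_TABLE = {
    (1, 1): "A-1", (1, 2): "A-2", (1, 3): "A-3",
    (2, 1): "B-1", (2, 2): "B-2", (2, 3): "B-3",
    (3, 1): "C-1", (3, 2): "C-2", (3, 3): "C-3",
}

def choose_filter(compression_level, enlargement_level):
    """Filter uniquely determined by the two detected levels."""
    return FILTER_TABLE[(compression_level, enlargement_level)]
```

Because each (compression level, enlargement level) pair maps to exactly one entry, the selection is deterministic, matching the "uniquely determined" wording above.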
- The processing content determination unit 304 may have the filter determination table shown in FIG. 21 or FIG. 22 instead of the filter determination table shown in FIG. 20. In this case, according to the filter determination table of FIG. 21 or FIG. 22, the processing content determination unit 304 selects the filter uniquely determined by the compression ratio detected by the compression ratio detection unit 302 and the enlargement ratio detected by the enlargement ratio detection unit 303.
- the filter determination table of FIG. 20 shows the relationship between the compression ratio, the enlargement ratio, and the filter size when the filter coefficients are all the same and only the sizes differ according to the compression ratio and the enlargement ratio.
- the filter determination table of FIG. 21 shows the relationship between the compression ratio, the enlargement ratio, and the filter coefficients when the filter sizes are all equal and only the coefficients differ according to the compression ratio and the enlargement ratio.
- the filter determination table of FIG. 22 shows the relationship between the compression ratio, the enlargement ratio, and the filter when both the filter size and the coefficients differ according to the compression ratio and the enlargement ratio.
- in the filter determination table of FIG. 20, when the compression ratio is level 3 and the enlargement ratio is level 3, the filter "C-3" is selected, which is a filter that causes no change between before and after processing. When such a filter is selected by the processing content determination unit 304, the noise removing unit 305 may skip the noise removal process.
- the processing content determination unit 304 has the three types of filter determination tables shown in FIGS. 20, 21, and 22, and before the processing content determination unit 304 determines the processing content, the user may instruct the processing content determination unit 304 which of the filter determination tables to use.
- for example, in the filter determination table of FIG. 22, the coefficients near the center of the filter have larger values than in the filter determination table of FIG. 21. For this reason, the filter determination table of FIG. 22 performs weaker smoothing than the filter determination table of FIG. 21. Therefore, a user who gives higher priority to suppressing edge blur than to reducing mosquito noise instructs the processing content determination unit 304 to use the filter determination table of FIG. 22.
- the processing content determination unit 304 then selects the filter to be used by the noise removing unit 305 according to the filter determination table instructed by the user.
- the noise elimination unit 305 performs a noise elimination process on the restored image based on the content determined by the processing content determination unit 304.
- the following describes a noise removal process capable of effectively removing block distortion among the noises generated in the restored image.
- FIG. 23 shows a configuration of the noise removing unit 305.
- the noise elimination section 305 is composed of an area identification section 103, a block distortion area noise elimination section 401, and a remaining area noise elimination section 402.
- the area specifying unit 103 specifies a “block distortion area” in which the block distortion is determined to be strongly generated and other areas in the restored image.
- as the method of specifying the "block distortion region", the region specifying unit 103 uses the method described in Embodiment 1 above.
- FIG. 24 shows a configuration and a processing procedure of the block distortion region noise elimination unit 401 performing the processing of the present method.
- Fig. 25 to Fig. 28 are explanatory diagrams of the algorithm of this method. Note that the symbols Aij, Bij, Cij, Dij, and Pij in Fig. 25 each represent a pixel, and the bold lines represent block boundaries.
- a block composed of pixels represented by Pij is a block distortion region to be processed, and is referred to as a target block in the following description.
- the left-side correction amount calculation unit 501, the right-side correction amount calculation unit 502, the horizontal direction correction amount calculation unit 503, and the horizontal direction pixel value correction unit 504 remove the discontinuity of gradation at the left and right block boundaries of the target block.
- the following describes the method of removing the discontinuity of gradation at the left block boundary of the target block.
- the left-side correction amount calculation unit 501 calculates a left-side correction amount (HLj) for each row in the block of interest as preprocessing for removing block distortion on the left side of the block of interest.
- specifically, based on the following (Equation 13), the left-side correction amount calculation unit 501 assigns, for each row in the target block shown in Fig. 25, a left-side correction amount (HLj) to the intermediate position between the leftmost pixel (P0j) and its left neighboring pixel (B7j), that is, the position on the block boundary (see Fig. 26).
- that is, the left-side correction amount calculation unit 501 subtracts the value of the leftmost pixel (P0j) in the target block from the average of the value of that pixel (P0j) and the value of its left neighboring pixel (B7j), and thereby calculates the left-side correction amount (HLj) for each row. (Equation 13): HLj = (P0j + B7j) / 2 - P0j
- the right-side correction amount calculation unit 502 calculates a right-side correction amount (HRj) for each row in the target block as preprocessing for removing block distortion on the right side of the target block. Specifically, based on the following (Equation 14), the right-side correction amount calculation unit 502 assigns, for each row in the target block shown in Fig. 25, a right-side correction amount (HRj) to the intermediate position between the rightmost pixel (P7j) and its right neighboring pixel (C0j), that is, the position on the block boundary (see Fig. 26).
- that is, the right-side correction amount calculation unit 502 subtracts the value of the rightmost pixel (P7j) in the target block from the average of the value of that pixel (P7j) and the value of its right neighboring pixel (C0j), and thereby calculates the right-side correction amount (HRj) for each row. (Equation 14): HRj = (P7j + C0j) / 2 - P7j
- based on the following (Equation 15), the horizontal direction correction amount calculation unit 503 computes, for each pixel in each row in the target block, a weighted average of the left-side correction amount (HLj) and the right-side correction amount (HRj) according to the distance between that pixel and the left and right block boundaries of the target block, and thereby calculates the correction amount Yij of each pixel for each row in the target block.
- based on the following (Equation 16), the horizontal direction pixel value correction unit 504 adds the correction amount Yij of each pixel, calculated for each row by the horizontal direction correction amount calculation unit 503, to the value of the corresponding pixel (Pij), and thereby calculates the value of each corrected pixel (Qij) for each row. (Equation 16): Qij = Pij + Yij
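The horizontal pass of (Equations 13-16) can be sketched as follows, assuming 8x8 blocks. The linear distance weighting used for (Equation 15) is an assumption: the text only states that the weights depend on each pixel's distance to the two boundaries, so the exact weighting function is illustrative.

```python
import numpy as np

def remove_horizontal_block_distortion(P, B7, C0):
    """Remove the gradation discontinuity at the left/right boundaries of one
    8x8 target block, following (Equations 13-16).

    P  : 8x8 array, P[j, i] = pixel in row j, column i of the target block
    B7 : length-8 array, rightmost column of the left-neighbour block (B7j)
    C0 : length-8 array, leftmost column of the right-neighbour block (C0j)
    """
    P = P.astype(float)
    # (Eq. 13) left-side correction per row: average across the boundary
    # minus the edge pixel itself.
    HL = (P[:, 0] + B7) / 2.0 - P[:, 0]
    # (Eq. 14) right-side correction per row.
    HR = (P[:, 7] + C0) / 2.0 - P[:, 7]
    # (Eq. 15) blend the two per-row corrections across the 8 columns.
    # Assumed linear weighting: pixel centres sit at i + 0.5 from the left
    # boundary, so the left weight falls off linearly toward the right.
    i = np.arange(8)
    w_left = (7.5 - i) / 8.0
    Y = np.outer(HL, w_left) + np.outer(HR, 1.0 - w_left)
    # (Eq. 16) corrected pixels Qij = Pij + Yij.
    return P + Y
```

With a flat block of value 110 next to a left neighbour of value 100, the left edge is pulled halfway toward the boundary average while the right edge barely moves, which is exactly the even distribution of the boundary step described above.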
- the discontinuity of the gradation at the left and right block boundaries of the target block is evenly distributed to the pixels within the block, so that the block distortion at the left and right block boundaries of the target block is effectively removed.
- next, the same processing is performed in the vertical direction. That is, for the image from which the block distortion at the left and right block boundaries of the target block has been removed, the upper correction amount calculation unit 505, the lower correction amount calculation unit 506, the vertical direction correction amount calculation unit 507, and the vertical direction pixel value correction unit 508 in Fig. 24 remove the discontinuity of gradation at the upper and lower block boundaries of the target block.
- the following describes the method of removing the discontinuity of gradation at the upper and lower block boundaries of the target block.
- the upper correction amount calculator 505 calculates an upper correction amount (VTi) for each column in the target block as preprocessing for removing block distortion above the target block.
- specifically, based on the following (Equation 17), the upper correction amount calculation unit 505 assigns, for each column in the target block shown in Fig. 27, an upper correction amount (VTi) to the intermediate position between the uppermost pixel (Qi0) and its upper neighboring pixel (Ai7), that is, the position on the block boundary (see Fig. 28).
- that is, the upper correction amount calculation unit 505 subtracts the value of the uppermost pixel (Qi0) in the target block from the average of the value of that pixel (Qi0) and the value of its upper neighboring pixel (Ai7), and thereby calculates the upper correction amount (VTi) for each column. (Equation 17): VTi = (Qi0 + Ai7) / 2 - Qi0
- Fig. 27 is a diagram showing each pixel Qij constituting the image from which block distortion has been removed at the left and right block boundaries of the target block.
- the lower correction amount calculation unit 506 calculates a lower correction amount (VBi) for each column in the target block as preprocessing for removing block distortion below the target block. Specifically, based on the following (Equation 18), the lower correction amount calculation unit 506 assigns, for each column in the target block shown in Fig. 27, a lower correction amount (VBi) to the intermediate position between the lowermost pixel (Qi7) and its lower neighboring pixel (Di0), that is, the position on the block boundary (see Fig. 28).
- that is, the lower correction amount calculation unit 506 subtracts the value of the lowermost pixel (Qi7) in the target block from the average of the value of that pixel (Qi7) and the value of its lower neighboring pixel (Di0), and thereby calculates the lower correction amount (VBi) for each column. (Equation 18): VBi = (Qi7 + Di0) / 2 - Qi7
- based on the following (Equation 19), the vertical direction correction amount calculation unit 507 computes, for each pixel in each column in the target block, a weighted average of the upper correction amount (VTi) and the lower correction amount (VBi) according to the distance between that pixel and the upper and lower block boundaries of the target block, and thereby calculates the correction amount Zij of each pixel for each column in the target block.
- based on the following (Equation 20), the vertical direction pixel value correction unit 508 adds the correction amount Zij of each pixel, calculated for each column by the vertical direction correction amount calculation unit 507, to the value of the corresponding pixel (Qij), and thereby calculates the value of each corrected pixel (Rij).
- (Equation 20): Rij = Qij + Zij
- through the above processing, the discontinuity of the gradation at the upper and lower block boundaries of the target block is evenly distributed to the pixels in the block, so that the block distortion at the upper and lower block boundaries of the target block is effectively removed.
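Under the same assumptions (8x8 blocks, an assumed linear distance weighting for (Equation 19)), the vertical pass of (Equations 17-20) mirrors the horizontal one, operating on the horizontally corrected pixels Qij:

```python
import numpy as np

def remove_vertical_block_distortion(Q, A7, D0):
    """Vertical counterpart (Equations 17-20), applied to the block Q that
    has already been corrected horizontally.

    Q  : 8x8 array, Q[j, i] = pixel in row j, column i of the target block
    A7 : length-8 array, bottom row of the block above (Ai7)
    D0 : length-8 array, top row of the block below (Di0)
    """
    Q = Q.astype(float)
    VT = (Q[0, :] + A7) / 2.0 - Q[0, :]   # (Eq. 17) upper correction per column
    VB = (Q[7, :] + D0) / 2.0 - Q[7, :]   # (Eq. 18) lower correction per column
    # (Eq. 19) blend per-column corrections down the 8 rows;
    # the linear weighting is an assumption, as in the horizontal pass.
    j = np.arange(8)
    w_top = (7.5 - j) / 8.0
    Z = np.outer(w_top, VT) + np.outer(1.0 - w_top, VB)
    return Q + Z                          # (Eq. 20) Rij = Qij + Zij
```

Running the horizontal pass and then this vertical pass (or vice versa, as noted later in the text) spreads each boundary step smoothly across the interior of the block.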
- the remaining region noise elimination unit 402 performs filtering to remove noise such as mosquito noise.
- as the filter size increases, the smoothing effect increases and noise is removed more strongly, but image blur also increases.
- conversely, as the filter size decreases, the smoothing effect and the image blur decrease, but the noise removal effect also decreases.
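The size/blur tradeoff can be illustrated with uniform (box) filters, used here as a stand-in for the smoothing filters of the tables (the actual coefficients are those of FIGS. 20-22):

```python
import numpy as np

def box_filter_1d(signal, size):
    """Smooth a 1-D signal with a uniform (box) kernel of odd size, using
    edge padding. A larger size removes more noise but blurs edges more."""
    pad = size // 2
    padded = np.pad(np.asarray(signal, dtype=float), pad, mode="edge")
    kernel = np.ones(size) / size
    return np.convolve(padded, kernel, mode="valid")

# A step edge: larger filters flatten noise but also widen the transition.
edge = np.array([0.0, 0.0, 0.0, 0.0, 100.0, 100.0, 100.0, 100.0])
print(box_filter_1d(edge, 3))   # mildly softened edge
print(box_filter_1d(edge, 5))   # stronger smoothing, wider transition
```

Counting the intermediate values in each output shows the 5-tap filter spreads the step over more pixels than the 3-tap one, which is the blur the table-driven filter selection is trading off against noise removal.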
- therefore, before the remaining area noise elimination unit 402 performs the filtering, the processing content determination unit 304 determines, as described above, the content of the noise removal processing to be performed by the remaining area noise elimination unit 402.
- the remaining area noise removing section 402 performs a filtering process on the area other than the block distortion area based on the processing content determined by the processing content determining section 304.
- the image enlargement unit 306 enlarges the image data processed by the noise removal unit 305 based on the enlargement ratio detected by the enlargement ratio detection unit 303, so that a noise-removed output image is obtained.
- output includes display and printing.
- compression ratio detecting section 302 detects the compression ratio from the ratio of the data amount after decoding to the data amount before decoding.
- the compression ratio detection unit 302 may detect the compression ratio based on the information of the quantization table used when encoding the compressed data.
- for example, the compression ratio detector 302 focuses on the value of the DC component of the quantization table. Assuming that a predetermined value S1 is larger than a predetermined value S2, the detector can detect compression level 1 if the value of the DC component is equal to or greater than S1, compression level 2 if the value is equal to or greater than S2 but less than S1, and compression level 3 if the value is less than S2.
- alternatively, the compression ratio detection unit 302 may focus on a plurality of values of the quantization table and compare each focused value with a predetermined value S1[i] and a predetermined value S2[i] set for it (where S1[i] is larger than S2[i]).
- the compression ratio detector 302 may then use the method of setting compression level 1 if the number of coefficients equal to or greater than S1[i] is equal to or greater than a predetermined ratio, compression level 2 if the number of coefficients equal to or greater than S2[i] is equal to or greater than a predetermined ratio, and compression level 3 otherwise.
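A sketch of both detection variants described above. Larger quantization steps imply a higher compression ratio; the threshold values, the ratio, and the function names are illustrative assumptions, not values from the patent:

```python
def compression_level_from_dc(dc_value, s1, s2):
    """DC-component variant: map the DC quantizer step to a level (s1 > s2)."""
    if dc_value >= s1:
        return 1   # coarsest quantization -> highest compression ratio
    if dc_value >= s2:
        return 2
    return 3

def compression_level_from_table(q_values, s1, s2, ratio):
    """Multi-coefficient variant: compare each focused quantization value
    with its own thresholds s1[i] > s2[i], and decide by the fraction of
    coefficients exceeding them."""
    n = len(q_values)
    if sum(q >= t for q, t in zip(q_values, s1)) >= ratio * n:
        return 1
    if sum(q >= t for q, t in zip(q_values, s2)) >= ratio * n:
        return 2
    return 3
```

Either function yields one of the three compression levels consumed by the filter determination table.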
- in Embodiment 2 described above, the enlargement ratio detection unit 303 detects the enlargement ratio of the output image with respect to the restored image from the relationship between the number of pixels of the restored image, the resolution of the output device, and the output image size.
- however, the enlargement ratio detection unit 303 may instead detect the enlargement ratio of the output image with respect to the restored image by reading information on the enlargement ratio input in advance by the user using an input unit (not shown).
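The pixel-count-based detection can be sketched as follows: the output pixel count follows from the output size and the device resolution, and the ratio to the restored image's pixel count is mapped onto the three levels used by the filter determination table (8x for level 1, 4x for level 2, same size for level 3). The >= comparisons and the function name are assumptions:

```python
def detect_enlargement_level(restored_width_px, output_width_inches, dpi):
    """Map the width enlargement ratio of the output image over the
    restored image onto the three levels of the filter determination table."""
    output_width_px = output_width_inches * dpi   # pixels the device will print/display
    ratio = output_width_px / restored_width_px
    if ratio >= 8:
        return 1   # e.g. 8x enlargement both vertically and horizontally
    if ratio >= 4:
        return 2   # e.g. 4x enlargement
    return 3       # output at (roughly) the same size
```

A user-supplied enlargement ratio would simply bypass the `output_width_px` computation and feed `ratio` directly.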
- the processing content determination unit 304 may determine the processing content using only one of the compression ratio at which the restored image was compressed and the enlargement ratio at which the restored image is output.
- in Embodiment 2 described above, when noise is removed from the block distortion region, the block distortion at the left and right block boundaries of the block distortion region is removed first, and then the block distortion at the upper and lower block boundaries is removed. However, this order may be reversed: the noise removal in the block distortion region may remove the block distortion at the upper and lower block boundaries first and then the block distortion at the left and right block boundaries. Even in this case, the block distortion can be sufficiently removed.
- further, in Embodiment 2 described above, the remaining area noise removing section 402 performs filtering on the area other than the block distortion area based on the processing content determined by the processing content determining section 304.
- however, the block distortion area noise elimination section 401 may refrain from performing the noise elimination processing on the block distortion area, and the remaining area noise elimination section 402 may instead perform filtering on the entire restored image based on the processing content determined by the processing content determination section 304.
- also, the method of removing noise in the block distortion region described in Embodiment 2 may be used as the method of removing noise in the block distortion region in Embodiment 1, and conversely, the method of removing noise in the block distortion region in Embodiment 1 may be used as the method of removing noise in the block distortion region in Embodiment 2.
- in Embodiments 1 and 2 described above, the DCT is used as an example of the orthogonal transform.
- however, the block distortion removal processing method described in Embodiments 1 and 2 is also effective when removing block distortion in a restored image obtained from compressed data that has been subjected to another orthogonal transform, such as the discrete sine transform (DST: Discrete Sine Transform) or the discrete Fourier transform (DFT: Discrete Fourier Transform).
- likewise, in Embodiments 1 and 2 described above, the DCT coefficient is used as an example of the orthogonal transform coefficient; the same applies when DST coefficients or DFT coefficients are used as the orthogonal transform coefficients.
- in Embodiments 1 and 2 described above, JPEG is used as an example of encoding. However, the block distortion removal processing method described in Embodiments 1 and 2 is also effective when removing block distortion in a restored image obtained from compressed data encoded by a coding scheme such as MPEG or H.261.
- each component of the image processing apparatus according to the first and second embodiments described above may be configured by hardware or may be configured by software.
- it is also possible to apply, to a predetermined computer, a program for causing a computer to function as all or part of the components of the image processing apparatus according to Embodiments 1 and 2, and to have that computer realize the functions of all or part of the components of the image processing apparatus in Embodiments 1 and 2.
- specific examples of embodiments of the program include recording the program on a recording medium such as a CD-ROM, transferring the recording medium on which the program is recorded, and communicating the program by communication means such as the Internet. They also include installing the program on a computer.
- the present invention can provide an image processing apparatus that identifies a block distortion region in a restored image by analyzing the restored image and removes block distortion.
- the present invention can provide an image processing apparatus which removes block distortion by reproducing a smooth gradation change even at each corner of each block constituting a restored image.
- the present invention can provide an image processing apparatus that performs efficient noise removal processing suitable for outputting a restored image.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Of Band Width Or Redundancy In Fax (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Image Processing (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/477,317 US7561752B2 (en) | 2001-05-10 | 2002-05-10 | Image processing apparatus |
EP02724766A EP1401210A4 (en) | 2001-05-10 | 2002-05-10 | IMAGING DEVICE |
JP2002590677A JP4145665B2 (ja) | 2001-05-10 | 2002-05-10 | Image processing apparatus and image processing method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001139764 | 2001-05-10 | ||
JP2001-139764 | 2001-05-10 | ||
JP2001-357139 | 2001-11-22 | ||
JP2001357139 | 2001-11-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002093935A1 true WO2002093935A1 (en) | 2002-11-21 |
Family
ID=26614873
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2002/004596 WO2002093935A1 (en) | 2001-05-10 | 2002-05-10 | Image processing apparatus |
Country Status (5)
Country | Link |
---|---|
US (1) | US7561752B2 (ja) |
EP (1) | EP1401210A4 (ja) |
JP (1) | JP4145665B2 (ja) |
CN (1) | CN1260978C (ja) |
WO (1) | WO2002093935A1 (ja) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040091173A1 (en) * | 2002-07-17 | 2004-05-13 | Hiroshi Akimoto | Method, apparatus and system for the spatial interpolation of color images and video sequences in real time |
KR100472464B1 (ko) * | 2002-07-20 | 2005-03-10 | Samsung Electronics Co., Ltd. | Apparatus and method for serial scaling |
US7242819B2 (en) * | 2002-12-13 | 2007-07-10 | Trident Microsystems, Inc. | Method and system for advanced edge-adaptive interpolation for interlace-to-progressive conversion |
US7379587B2 (en) * | 2004-02-12 | 2008-05-27 | Xerox Corporation | Systems and methods for identifying regions within an image having similar continuity values |
GB2415565B (en) * | 2004-06-24 | 2007-10-31 | Hewlett Packard Development Co | Image processing |
JP4483501B2 (ja) * | 2004-09-22 | 2010-06-16 | Nikon Corporation | Image processing apparatus, program, and method for performing preprocessing for reproducing a still image as a moving image |
US20060092440A1 (en) * | 2004-11-01 | 2006-05-04 | Bagai Farhan A | Human visual system based design of color signal transformation in a color imaging system |
US8194757B2 (en) * | 2005-01-28 | 2012-06-05 | Broadcom Corporation | Method and system for combining results of mosquito noise reduction and block noise reduction |
JP4520880B2 (ja) * | 2005-02-17 | 2010-08-11 | Fujifilm Corporation | Stain inspection method and stain inspection apparatus |
WO2006137309A1 (ja) * | 2005-06-21 | 2006-12-28 | Nittoh Kogaku K.K | Image processing apparatus |
JP3895357B2 (ja) * | 2005-06-21 | 2007-03-22 | Nittoh Kogaku K.K. | Signal processing apparatus |
TWI332351B (en) * | 2006-10-05 | 2010-10-21 | Realtek Semiconductor Corp | Image processing method and device thereof for reduction mosquito noise |
US8269886B2 (en) | 2007-01-05 | 2012-09-18 | Marvell World Trade Ltd. | Methods and systems for improving low-resolution video |
CN101669361B (zh) | 2007-02-16 | 2013-09-25 | Marvell World Trade Ltd. | Method and system for improving low-resolution and low-frame-rate video |
US7813588B2 (en) * | 2007-04-27 | 2010-10-12 | Hewlett-Packard Development Company, L.P. | Adjusting source image data prior to compressing the source image data |
JP5125294B2 (ja) * | 2007-07-31 | 2013-01-23 | Nikon Corporation | Program, image processing apparatus, imaging apparatus, and image processing method |
TWI343207B (en) * | 2007-09-07 | 2011-06-01 | Lite On Technology Corp | Device and method for obtain a clear image |
CN102187664B (zh) * | 2008-09-04 | 2014-08-20 | Japan Science and Technology Agency | Video signal conversion system |
JP5552290B2 (ja) * | 2008-11-04 | 2014-07-16 | Canon Inc. | Image processing apparatus, image processing method, filter apparatus, control method therefor, and program |
KR20100050655A (ko) * | 2008-11-06 | 2010-05-14 | Samsung Electronics Co., Ltd. | Block noise reduction system and method |
JP4788792B2 (ja) * | 2009-03-11 | 2011-10-05 | Casio Computer Co., Ltd. | Imaging apparatus, imaging method, and imaging program |
CN101909145B (zh) * | 2009-06-05 | 2012-03-28 | Hon Hai Precision Industry (Shenzhen) Co., Ltd. | Image noise filtering system and method |
JP5747378B2 (ja) * | 2011-03-18 | 2015-07-15 | Hitachi Kokusai Electric Inc. | Image transfer system, image transfer method, image receiving apparatus, image transmitting apparatus, and image capturing apparatus |
JP5564553B2 (ja) | 2012-10-22 | 2014-07-30 | Eizo Corporation | Image processing apparatus, image processing method, and computer program |
JP2015122618A (ja) * | 2013-12-24 | 2015-07-02 | Canon Inc. | Information processing apparatus, information processing method, and program |
US9552628B2 (en) * | 2014-03-12 | 2017-01-24 | Megachips Corporation | Image processor |
US10003758B2 (en) | 2016-05-02 | 2018-06-19 | Microsoft Technology Licensing, Llc | Defective pixel value correction for digital raw image frames |
US10230935B2 (en) * | 2016-10-11 | 2019-03-12 | Marvel Digital Limited | Method and a system for generating depth information associated with an image |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5625714A (en) * | 1991-01-10 | 1997-04-29 | Olympus Optical Co., Ltd. | Image signal decoding device capable of removing block distortion with simple structure |
US5479211A (en) * | 1992-04-30 | 1995-12-26 | Olympus Optical Co., Ltd. | Image-signal decoding apparatus |
JP3466705B2 (ja) | 1993-05-28 | 2003-11-17 | Xerox Corporation | Method for decompressing compressed images |
JPH08186714A (ja) * | 1994-12-27 | 1996-07-16 | Texas Instr Inc <Ti> | Method and apparatus for removing noise from image data |
EP0721286A3 (en) * | 1995-01-09 | 2000-07-26 | Matsushita Electric Industrial Co., Ltd. | Video signal decoding apparatus with artifact reduction |
KR0165497B1 (ko) * | 1995-01-20 | 1999-03-20 | 김광호 | 블럭화현상 제거를 위한 후처리장치 및 그 방법 |
JPH08214309A (ja) | 1995-02-07 | 1996-08-20 | Canon Inc | Image signal decoding apparatus |
US6463182B1 (en) * | 1995-12-28 | 2002-10-08 | Canon Kabushiki Kaisha | Image processing apparatus and method for removing noise near an edge of an image |
JP3800704B2 (ja) * | 1997-02-13 | 2006-07-26 | Sony Corporation | Video signal processing apparatus and method |
EP2190205A3 (en) * | 1997-05-28 | 2010-10-06 | Sony Corporation | Method and apparatus for reducing block distortion and method and apparatus for encoding data |
AU717480B2 (en) * | 1998-08-01 | 2000-03-30 | Korea Advanced Institute Of Science And Technology | Loop-filtering method for image data and apparatus therefor |
US6748113B1 (en) * | 1999-08-25 | 2004-06-08 | Matsushita Electric Insdustrial Co., Ltd. | Noise detecting method, noise detector and image decoding apparatus |
US6823089B1 (en) * | 2000-09-28 | 2004-11-23 | Eastman Kodak Company | Method of determining the extent of blocking and contouring artifacts in a digital image |
-
2002
- 2002-05-10 US US10/477,317 patent/US7561752B2/en not_active Expired - Fee Related
- 2002-05-10 WO PCT/JP2002/004596 patent/WO2002093935A1/ja active Application Filing
- 2002-05-10 CN CNB028096169A patent/CN1260978C/zh not_active Expired - Fee Related
- 2002-05-10 EP EP02724766A patent/EP1401210A4/en not_active Withdrawn
- 2002-05-10 JP JP2002590677A patent/JP4145665B2/ja not_active Expired - Fee Related
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS63209274A (ja) * | 1987-02-25 | 1988-08-30 | Canon Inc | Image signal processing apparatus |
JPH03166825A (ja) * | 1989-11-27 | 1991-07-18 | Ricoh Co Ltd | Image processing method and apparatus |
JPH04209073A (ja) * | 1990-09-07 | 1992-07-30 | Ricoh Co Ltd | Image processing method |
JPH05308623A (ja) * | 1992-04-30 | 1993-11-19 | Olympus Optical Co Ltd | Image signal decoding apparatus |
JPH0723227A (ja) * | 1993-06-18 | 1995-01-24 | Sharp Corp | Noise removal apparatus |
JPH07170512A (ja) * | 1993-11-24 | 1995-07-04 | Matsushita Electric Ind Co Ltd | Post-filter processing method and post-filter processing apparatus in an image signal decoding apparatus |
JPH08307870A (ja) * | 1995-04-29 | 1996-11-22 | Samsung Electron Co Ltd | Blocking effect removal circuit |
JP2001078187A (ja) * | 1999-09-01 | 2001-03-23 | Matsushita Electric Ind Co Ltd | Image decoding apparatus |
JP2001086367A (ja) * | 1999-09-10 | 2001-03-30 | Matsushita Electric Ind Co Ltd | Video display apparatus and control method thereof |
Non-Patent Citations (1)
Title |
---|
See also references of EP1401210A4 * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7526133B2 (en) | 2003-01-09 | 2009-04-28 | Ricoh Company, Ltd. | Image processing apparatus, image processing program, and storage medium |
JP2007511941A (ja) * | 2003-10-29 | 2007-05-10 | Hewlett-Packard Development Company, L.P. | Transformation for removing image noise |
US7536063B2 (en) | 2004-01-27 | 2009-05-19 | Canon Kabushiki Kaisha | Resolution conversion method and device |
EP1560438A1 (en) * | 2004-01-27 | 2005-08-03 | Canon Kabushiki Kaisha | Resolution conversion method and device |
US8000394B2 (en) | 2004-03-08 | 2011-08-16 | Mitsubishi Denki Kabushiki Kaisha | Program, method, and apparatus for decoding coded data |
WO2005086357A1 (ja) * | 2004-03-08 | 2005-09-15 | Mitsubishi Denki Kabushiki Kaisha | Program, method, and apparatus for decoding coded data |
JP2006055393A (ja) * | 2004-04-22 | 2006-03-02 | Shimadzu Corp | Radiation imaging apparatus and radiation detection signal processing method |
JP2006127292A (ja) * | 2004-10-29 | 2006-05-18 | Kyocera Corp | Image processing apparatus and method |
JP2008130095A (ja) * | 2006-11-17 | 2008-06-05 | Shindorico Co Ltd | Face area detection apparatus and correction method for photo printing |
JP2012185827A (ja) * | 2006-11-17 | 2012-09-27 | Shindorico Co Ltd | Face area detection apparatus and correction method for photo printing |
JP2008311951A (ja) * | 2007-06-14 | 2008-12-25 | Sony Corp | Image processing apparatus and image processing method |
JP4609457B2 (ja) * | 2007-06-14 | 2011-01-12 | Sony Corporation | Image processing apparatus and image processing method |
JP2010021668A (ja) * | 2008-07-08 | 2010-01-28 | Sharp Corp | Image processing method, image processing apparatus, image forming apparatus including the same, image processing program, and recording medium |
JP2015526046A (ja) * | 2012-08-09 | 2015-09-07 | Thomson Licensing | Processing method and processing apparatus |
US9715736B2 (en) | 2012-08-09 | 2017-07-25 | Thomson Licensing | Method and apparatus to detect artificial edges in images |
CN106846262A (zh) * | 2016-12-23 | 2017-06-13 | 中国科学院自动化研究所 | 去除蚊式噪声的方法及系统 |
CN106846262B (zh) * | 2016-12-23 | 2020-02-28 | 中国科学院自动化研究所 | 去除蚊式噪声的方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
EP1401210A4 (en) | 2011-12-21 |
US20040165785A1 (en) | 2004-08-26 |
CN1260978C (zh) | 2006-06-21 |
CN1507749A (zh) | 2004-06-23 |
JPWO2002093935A1 (ja) | 2004-09-02 |
EP1401210A1 (en) | 2004-03-24 |
US7561752B2 (en) | 2009-07-14 |
JP4145665B2 (ja) | 2008-09-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4145665B2 (ja) | Image processing apparatus and image processing method | |
JP4870743B2 (ja) | Selective chrominance decimation for digital images | |
JP3001094B2 (ja) | Signal adaptive filtering method and signal adaptive filter for reducing ringing noise | |
JP2960386B2 (ja) | Signal adaptive filtering method and signal adaptive filter | |
US7570834B2 (en) | Image de-ringing filter | |
US6845180B2 (en) | Predicting ringing artifacts in digital images | |
US20050100235A1 (en) | System and method for classifying and filtering pixels | |
KR101112139B1 (ko) | Apparatus and method for estimating the enlargement ratio and noise intensity of a coded image | |
US6823089B1 (en) | Method of determining the extent of blocking and contouring artifacts in a digital image | |
JPH08186714A (ja) | Method and apparatus for removing noise from image data | |
JP2004166266A (ja) | Method and system for removing artifacts in compressed images | |
JPH07131757A (ja) | Image processing apparatus | |
JP2000232651A (ja) | Method for removing distortion in an electronic image decoded from a block-transform-coded image representation | |
US8340404B2 (en) | Image processing device and method, learning device and method, program, and recording medium | |
JP4097587B2 (ja) | Image processing apparatus and image processing method | |
EP1168823A2 (en) | A method of determining the extent of blocking artifacts in a digital image | |
JP2000059782A (ja) | Method for compressing a spatial-domain digital image | |
JP2005318614A (ja) | Method for reducing artifacts in an input image | |
JP4053460B2 (ja) | Image processing apparatus, image forming apparatus, image processing method, image processing program, and recording medium | |
JP4040528B2 (ja) | Image processing apparatus, image processing method, image processing program, recording medium on which the image processing program is recorded, and image forming apparatus including the image processing apparatus | |
JPH07307942A (ja) | Image noise removal apparatus | |
JP2001346208A (ja) | Image signal decoding apparatus and method | |
EP1182885A2 (en) | Method and apparatus for image quality enhancement | |
JP5267140B2 (ja) | Image compression apparatus and image compression method | |
JPH08163375A (ja) | Image compression method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN JP US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE GB |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2002590677 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 10477317 Country of ref document: US Ref document number: 028096169 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2002724766 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2002724766 Country of ref document: EP |