WO2006117844A1 - Image processing device, image processing method, and information terminal device - Google Patents
Image processing device, image processing method, and information terminal device
- Publication number
- WO2006117844A1 (application PCT/JP2005/007999)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- feature
- selection information
- corrected
- output
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 110
- 238000003672 processing method Methods 0.000 title claims description 6
- 238000012937 correction Methods 0.000 claims abstract description 48
- 238000001514 detection method Methods 0.000 claims abstract description 22
- 238000003702 image correction Methods 0.000 claims abstract description 22
- 230000008859 change Effects 0.000 claims description 20
- 238000001914 filtration Methods 0.000 claims description 11
- 238000003708 edge detection Methods 0.000 description 65
- 238000000034 method Methods 0.000 description 43
- 230000008569 process Effects 0.000 description 27
- 238000005316 response function Methods 0.000 description 13
- 238000010586 diagram Methods 0.000 description 12
- 239000011159 matrix material Substances 0.000 description 11
- 238000003384 imaging method Methods 0.000 description 7
- 230000000694 effects Effects 0.000 description 6
- 230000006870 function Effects 0.000 description 5
- 239000013598 vector Substances 0.000 description 4
- 238000004364 calculation method Methods 0.000 description 2
- 239000000470 constituent Substances 0.000 description 2
- 230000006866 deterioration Effects 0.000 description 2
- 238000000605 extraction Methods 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 239000000654 additive Substances 0.000 description 1
- 230000000996 additive effect Effects 0.000 description 1
- 230000015556 catabolic process Effects 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 230000004069 differentiation Effects 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000009467 reduction Effects 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/682—Vibration or motion blur correction
- H04N23/683—Vibration or motion blur correction performed by a processor, e.g. controlling the readout of an image memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20201—Motion blur correction
Definitions
- Image processing apparatus, image processing method, and information terminal apparatus
- The present invention relates to an image processing apparatus and an image processing method that reduce the effects of camera shake during imaging and of shake caused by movement of the imaging system itself.
- The present invention is applicable to, for example, an imaging device, a digital still camera, a digital movie camera, and the like mounted on an information terminal device such as a mobile phone or a PDA (Personal Digital Assistant).
Background art
- Patent Document 1 JP-A-6-27512 (Page 7, Fig. 1)
- the present invention has been made in view of such problems, and an object of the present invention is to obtain an image with reduced noise in an image processing apparatus that performs blur correction by image processing.
- An image processing apparatus according to the present invention includes:
- image correction means for performing blur correction processing on an input image;
- corrected image feature detection means for detecting a feature of the corrected image after the blur correction processing; and
- image output means for outputting either the input image or the corrected image based on the detection result.
- an image with reduced noise such as ringing can be obtained.
- FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a diagram showing edge components in the horizontal direction of an image.
- FIG. 3 is a diagram showing edge components in the vertical direction of an image.
- FIG. 4 is a diagram showing a distribution of edge detection correction images.
- FIG. 5 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 2 of the present invention.
- FIG. 6 is a diagram showing a distribution of an edge detection input image.
- FIG. 7 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 3 of the present invention.
- FIG. 8 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 4 of the present invention. Explanation of symbols
- The image processing apparatus 10 selects and outputs either the input image or the corrected image, based on the information of the corrected image obtained by performing image processing on the input image.
- the corrected image that has undergone image processing has reduced blurring.
- Image processing may cause noise such as ringing.
- the image processing apparatus according to the present embodiment replaces pixels determined to have noise in the corrected image with pixels in the input image, and reduces noise in the output image.
- FIG. 1 is a block diagram showing a configuration of an image processing apparatus 10 according to the first embodiment.
- An input image obtained from a camera, an imaging device, or the like is input to and stored in the input image frame memory 101.
- the input image output from the input image frame memory 101 is input to the image correction unit 102, and the image correction unit 102 performs image processing for blur correction.
- the image correction algorithm of the image correction unit 102 will be described later.
- The image corrected by the image correction unit 102 (hereinafter referred to as the "corrected image") is input to and stored in the corrected image frame memory 103. The corrected image output from the corrected image frame memory 103 is input to the corrected image edge detection unit 104, which detects the edge components of the corrected image.
- the edge detection process in the corrected image edge detection unit 104 will be described later.
- a corrected image in which an edge component is detected (hereinafter referred to as “edge detection corrected image”) is input to the first edge determination means 105.
- The first edge determination means 105 determines, for each pixel, whether the pixel belongs to an original edge portion of the image or to noise such as a pseudo edge that is not an original edge, based on the information in the edge detection corrected image, and outputs the determination result to the image output means 106.
- A pixel determined not to be an original edge portion of the image may, for example, belong to a patterned part or a flat part of the picture.
- the edge determination process in the first edge determination means 105 will also be described later.
- The image output unit 106 selects either the input image or the corrected image for each pixel, based on the determination result input from the first edge determination unit 105, and outputs the result as the output image.
- The image correction algorithm of the image correction means 102 in the present embodiment uses an inverse filter derived by the constrained least squares method.
- This algorithm is characterized by using an impulse response function indicating the amount of shake, and an inverse filter calculated from that impulse response function.
- The inverse filter is obtained by solving the problem of minimizing the square norm of a feature-extracted image.
- The square norm is calculated as the sum of squares of the elements of a vector.
- In the present embodiment, the edge information extracted by the Laplacian operator is employed as the feature quantity. That is, the image correction means 102 calculates the shake-corrected image as the solution of the square-norm minimization problem for the edge information extracted by the Laplacian operator.
- The impulse response function may be estimated mathematically from two captured images taken with different shutter speeds, or obtained by processing a physical signal from an acceleration sensor or the like.
- A detailed description of how the impulse response function is calculated is omitted here.
- The input image and the corrected image are related by the mathematical image model expressed by Equation (1), converted into a vector/matrix representation.
- In Equation (1), the image with reduced blur, that is, the pixel signal of the corrected image, is convolved with the impulse response function to give the image in which blur has occurred, that is, the pixel signal of the input image; v is additive noise introduced at the time of imaging,
- and h is the impulse response function indicating the amount of shake.
- (2K + 1) denotes the order of the impulse response function in the vertical direction of the screen, and
- (2L + 1) denotes its order in the horizontal direction.
- The average values of the input image and the corrected image are assumed to be zero.
- The elements of the corrected image to be obtained and of the input image are rearranged in lexicographic order and defined as column vectors vs and vt, respectively, and the matrix corresponding to the impulse response function is denoted H.
- The Laplacian operator is likewise represented by a matrix C, in the same way as the impulse response function.
- The matrices H and C are block Toeplitz matrices and can be approximately diagonalized using the two-dimensional discrete Fourier transform (hereinafter referred to as "DFT") matrix, which allows high-speed arithmetic processing.
- ‖·‖ represents the 2-norm.
- In a natural image the correlation between adjacent pixels is strong, so the amount of extracted edge is small. It is therefore sufficient to minimize the quantity J expressed by Equation (2) using the constrained least squares method.
- Equation (1) holds for the input image, so the relationship expressed by Equation (3) is considered to hold with respect to the power of the additive noise. The image can therefore be corrected by solving the minimization problem in which the constraint condition of Equation (3) is added to Equation (2).
- The estimated value vss of the corrected image to be obtained can be expressed as in Equation (5) by partially differentiating Equation (4) with respect to the corrected image vs.
- Because Equation (5) includes large block Toeplitz matrices, a large amount of computation is required to obtain an exact solution.
- However, approximate diagonalization using the DFT matrix provides a solution with a practical amount of computation.
- The superscript T on a variable in Equation (5) represents the transpose of a matrix.
- Let VSS(μ, ν) and VT(μ, ν) denote the vector elements obtained by applying the DFT to the corrected image vs and the input image vt, and let H(μ, ν) and C(μ, ν) denote the diagonal elements of the matrices obtained by approximately diagonalizing H and C with the DFT matrix.
- The corrected image is then calculated by applying the two-dimensional inverse discrete Fourier transform to VSS(μ, ν).
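The frequency-domain restoration described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patent's exact formula: the regularization weight `gamma`, the 3×3 Laplacian kernel, and the circular (DFT) boundary assumption are all assumptions made for the sketch.

```python
import numpy as np

def cls_deblur(blurred, psf, gamma=1e-4):
    """Constrained least squares (CLS) restoration in the DFT domain.

    Because H and C are block Toeplitz, they are approximately diagonalized
    by the 2-D DFT, so the estimate is computed element-wise in frequency:
        S = conj(H) * T / (|H|^2 + gamma * |C|^2)
    gamma is a hypothetical regularization weight (the patent leaves the
    constraint multiplier unspecified).
    """
    rows, cols = blurred.shape
    # Pad the shake PSF h and a 3x3 Laplacian c to image size, then DFT.
    H = np.fft.fft2(psf, (rows, cols))
    lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    C = np.fft.fft2(lap, (rows, cols))
    T = np.fft.fft2(blurred)
    S = np.conj(H) * T / (np.abs(H) ** 2 + gamma * np.abs(C) ** 2)
    return np.real(np.fft.ifft2(S))

# Toy check: circularly blur a random image with a horizontal-shake PSF,
# then restore it with a small regularization weight.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
psf = np.zeros((1, 5))
psf[0, :] = 1.0 / 5.0  # 5-pixel horizontal shake
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, (32, 32))))
restored = cls_deblur(blurred, psf, gamma=1e-8)
```

With noiseless, circular blur and a very small `gamma`, the restored image is close to the original; with real noise, a larger `gamma` trades sharpness for noise suppression.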
- the above-described method is used as the image correction algorithm in the image correction unit 102.
- Alternatively, a method such as that described in the following document may be used.
- the corrected image edge detection unit 104 performs an edge detection process for detecting an edge component of the corrected image.
- When the corrected image is a color image, the corrected image edge detection unit 104 of the present embodiment performs the edge detection processing on the luminance signal of the corrected image.
- the edge detection process is performed for each of the horizontal direction and the vertical direction of the image.
- An example of the horizontal edge components of an image is shown in FIG. 2, and an example of the vertical edge components in FIG. 3.
- the horizontal edge component indicates that the edge extends in a substantially horizontal direction
- the vertical edge component indicates that the edge extends in a substantially vertical direction.
- The Sobel filter H expressed by Equation (7) is used to detect edge components in the horizontal direction.
- Filtering with the filter of Equation (7) is performed on the luminance signal of each pixel of the corrected image, yielding the horizontal edge component ImgH1 for each pixel.
- The Sobel filter V expressed by Equation (8), that is, the transpose H^T of the matrix H of Equation (7), is used to detect edge components in the vertical direction. Filtering with the filter of Equation (8) is performed on the luminance signal of each pixel of the corrected image, yielding the vertical edge component ImgV1 for each pixel.
- Equation (8): V = H^T = [[1, 2, 1], [0, 0, 0], [−1, −2, −1]], the transpose of the Sobel matrix H = [[1, 0, −1], [2, 0, −2], [1, 0, −1]] of Equation (7).
- After obtaining the horizontal edge component ImgH1 and the vertical edge component ImgV1 for each pixel, the corrected image edge detection means 104 outputs the edge detection corrected image ImgS1 expressed by Equation (9). As shown in Equation (9), ImgS1 is the per-pixel sum of the absolute values of ImgH1 and ImgV1.
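The edge detection of Equations (7)–(9) can be sketched in NumPy as follows. This is an illustrative reconstruction: the standard Sobel coefficients and edge-replicated border handling are assumptions, and the sign convention does not matter because Equation (9) takes absolute values.

```python
import numpy as np

def edge_detect(lum):
    """Edge detection corrected image ImgS1 per Equations (7)-(9)."""
    H = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)  # Eq. (7), assumed Sobel
    V = H.T                                                          # Eq. (8)
    rows, cols = lum.shape
    padded = np.pad(lum, 1, mode="edge")  # border handling is an assumption
    img_h = np.zeros((rows, cols))
    img_v = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + 3, j:j + 3]
            img_h[i, j] = np.sum(win * H)  # horizontal edge component ImgH1
            img_v[i, j] = np.sum(win * V)  # vertical edge component ImgV1
    return np.abs(img_h) + np.abs(img_v)   # Eq. (9): ImgS1

# A vertical luminance step produces a strong response along the boundary
# and zero response in the flat regions.
lum = np.zeros((5, 5))
lum[:, 3:] = 1.0
s = edge_detect(lum)
```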
- The first edge determination means 105 performs, for each pixel of the edge detection corrected image, an edge determination process that determines whether the pixel is noise or part of an original edge of the image.
- The edge determination in the first edge determination means 105 proceeds as follows. First, as shown in Equation (10), the edge detection corrected image ImgS1 is compared with the first threshold thre_res. If ImgS1 is smaller than thre_res, the pixel is determined to be a pseudo edge or noise such as ringing; if ImgS1 is larger than thre_res, the pixel is determined to be part of an original edge of the image.
- When ImgS1 is smaller than thre_res, that is, when the pixel is determined to be a pseudo edge or noise such as ringing, the first edge determination means 105 outputs the pixel value "1" for that pixel.
- When ImgS1 is larger than thre_res, that is, when the pixel is determined to be part of an original edge, the first edge determination means 105 outputs the pixel value "0" for that pixel. In other words, a binary image is output from the first edge determination means 105.
- the image output means 106 described later uses this binary image as image selection information.
- The first threshold thre_res used in the edge determination process may be a predetermined constant, or may be determined from image information such as the image storage format, the image file size, and the image content.
- In FIG. 4, the vertical axis represents the number of pixels and the horizontal axis represents the edge detection corrected image ImgS1.
- Pixels in region 1, to the left of the first threshold thre_res, are determined to be pseudo edges or noise such as ringing; pixels in region 2, to the right of thre_res, are determined to be original edges of the image.
- Alternatively, the proportion of pixels falling into region 1 may be determined adaptively from statistics such as the shape of the distribution of the edge detection corrected image ImgS1 and its average value.
- The first threshold thre_res may also be given as a function of the magnitude of the shake amount.
- In Equation (11), thre_res is determined from the shake amount x and a coefficient k.
- A function of the shake amount other than Equation (11) may also be used.
- The image output means 106 uses the binary image output from the first edge determination means 105 as selection information and, for each pixel, selects and outputs either the input image or the corrected image.
- As described above, the first edge determination means 105 outputs the pixel value "1" for a pixel determined to be a pseudo edge or noise such as ringing, and the pixel value "0" for a pixel determined to be part of an original edge. Based on this pixel value, the image output means 106 selects the input image in the input image frame memory 101 and outputs it as the output image for pixels with the pixel value "1".
- For pixels with the pixel value "0", the corrected image in the corrected image frame memory 103 is selected and output as the output image.
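The threshold-then-select logic of this embodiment (Equation (10) plus the per-pixel switch in the image output means 106) can be sketched as follows; the function and variable names are illustrative, not taken from the patent.

```python
import numpy as np

def select_output(input_img, corrected_img, edge_det_corrected, thre_res):
    """Per-pixel switch of the image output means 106.

    Pixels whose edge detection corrected image value falls below the first
    threshold are flagged "1" (pseudo edge / ringing noise) and taken from
    the input image; all other pixels are taken from the corrected image.
    """
    selection = (edge_det_corrected < thre_res).astype(np.uint8)  # binary map
    return np.where(selection == 1, input_img, corrected_img)

# Tiny example: the pixel with a weak edge response reverts to the input
# image; the rest of the output comes from the corrected image.
inp = np.array([[10.0, 10.0], [10.0, 10.0]])
cor = np.array([[50.0, 50.0], [50.0, 50.0]])
edges = np.array([[0.5, 9.0], [9.0, 9.0]])
out = select_output(inp, cor, edges, thre_res=1.0)
```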
- As described above, the image processing apparatus 10 selects its output based on the information of the corrected image obtained by performing image processing on the input image. Accordingly, noise such as ringing generated by the image processing can be reduced, and an image with good image quality can be obtained.
- Although the corrected image edge detection means of the present embodiment has been described for the case where a Sobel filter is used for edge detection, the present invention is not limited to this; any processing that detects edges may be used.
- Likewise, although filtering in both the horizontal and vertical directions has been described, filtering may be performed in only one of the two directions.
- Although edge detection on a luminance signal has been described, color difference signals, R, G, B color signals, or a combination of such color information may also be used.
- Although the threshold has been described as applied to the absolute value of the edge detection corrected image, the invention is not limited to this as long as an equivalent effect is obtained. Further, although the pixel value of a pixel determined to be noise is set to "1" and that of an original edge pixel to "0", the assignment may be reversed, or a signal may be output in only one of the two cases.
- Each of the processes described above may also be performed in units of a predetermined area such as a macroblock or a block.
- In that case, a value such as the average pixel value of all pixels in the predetermined area or the pixel value of a representative pixel is used.
- Moreover, although processing based on the edges of the image has been described, any feature that represents the characteristics of the image may be used.
- The image processing apparatus 10 according to Embodiment 1 is applicable not only to shake caused by hand movement but also to other kinds of shake, for example shake that occurs when the imaging system itself moves.
- the image processing apparatus 20 according to the second embodiment is configured by adding an input image edge detection unit 204 to the image processing apparatus 10 according to the first embodiment. Further, a second edge determination unit 205 is provided as a component corresponding to the first edge determination unit 105. The second edge determination unit 205 performs an edge determination process based on the output from the input image edge detection unit 204 and the output from the corrected image edge detection unit 104.
- FIG. 5 is a block diagram showing a configuration of the image processing device 20 according to the second embodiment. In FIG. 5, the components denoted by the same reference numerals as those in FIG. 1 are the same or corresponding components as those of the image processing apparatus 10 of the first embodiment.
- the input image edge detection means 204 performs an edge detection process for detecting an edge component of the input image, and outputs an image in which the edge component is detected (hereinafter referred to as “edge detection input image”). Note that the edge detection processing performed by the input image edge detection unit 204 is the same as the edge detection processing performed by the correction image edge detection unit 104 described in the first embodiment.
- the input image edge detection unit 204 obtains the edge component ImgH2 in the horizontal direction and the edge component ImgV2 in the vertical direction for each pixel of the input image, and then detects the edge detection input image ImgS2 represented by Expression 12. Is output.
- ImgS2 = abs(ImgH2) + abs(ImgV2)   (12)
- Based on the edge detection input image output from the input image edge detection unit 204 and the edge detection corrected image output from the corrected image edge detection unit 104, the second edge determination unit 205 performs, for each pixel, an edge determination process that determines whether the pixel is noise or part of an original edge of the image.
- The edge determination in the second edge determination unit 205 proceeds as follows. First, the edge detection corrected image ImgS1 is compared with the first threshold thre_res. Next, the edge detection input image ImgS2 is compared with the second threshold thre_deg. When Expression (13) is satisfied, that is, when both the edge detection corrected image and the edge detection input image are smaller than their respective thresholds, the pixel is determined to be noise generated by the image processing of the image correction unit 102.
- In that case, the second edge determination unit 205 outputs the pixel value "1" for the pixel; otherwise, the pixel value "0" is output.
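The dual-threshold determination of Expression (13) reduces to a per-pixel AND of two comparisons; a minimal sketch, with illustrative names, follows.

```python
import numpy as np

def noise_map(edge_corrected, edge_input, thre_res, thre_deg):
    """Second-embodiment determination (the condition of Expression (13)):
    a pixel is flagged "1" (noise introduced by the blur correction) only
    when BOTH edge images fall below their respective thresholds."""
    both_low = (edge_corrected < thre_res) & (edge_input < thre_deg)
    return both_low.astype(np.uint8)

# Three pixels: only the first is below both thresholds, so only it is
# flagged as correction-induced noise.
ec = np.array([0.2, 0.2, 5.0])  # edge detection corrected image ImgS1
ei = np.array([0.1, 4.0, 0.1])  # edge detection input image ImgS2
m = noise_map(ec, ei, thre_res=1.0, thre_deg=1.0)
```

The second pixel illustrates the point made in the text: ImgS1 is below thre_res, but because ImgS2 shows a real edge in the input, the corrected image is kept there.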
- In FIG. 6(a), the vertical axis represents the number of pixels and the horizontal axis the edge detection corrected image ImgS1; in FIG. 6(b), the vertical axis represents the number of pixels and the horizontal axis the edge detection input image ImgS2.
- Only pixels that fall both in region 1 to the left of the first threshold thre_res and in region 1 to the left of the second threshold thre_deg are determined to be noise generated by the correction processing of the image correction unit 102, and the second edge determination unit 205 outputs the pixel value "1" for them.
- When the edge detection input image ImgS2 is larger than the second threshold thre_deg, the pixel can be considered to be part of an original edge of the image.
- Even so, the edge detection corrected image ImgS1 may be smaller than the first threshold thre_res.
- If the output image were determined only from the relationship between ImgS1 and thre_res, the blurred input image would be output for such pixels; the outline of the image would then be blurred by the shake even though noise such as ringing is reduced.
- For a pixel determined to be part of an original edge of the image, that is, a pixel whose edge detection input image ImgS2 is larger than the second threshold thre_deg, the corrected image is therefore output.
- As in Embodiment 1, the image output means 106 selects the input image in the input image frame memory 101 for pixels with the pixel value "1".
- For pixels with the pixel value "0", the corrected image in the corrected image frame memory 103 is selected and output as the output image. That is, the binary image output from the second edge determination means 205 is used as the selection information.
- As described above, the image processing apparatus 20 performs the edge determination based on both the input image and the corrected image. Because the corrected image is output for the original edge portions of the image, an output image with improved image quality is obtained. The second threshold thre_deg can be determined by the same methods as the first threshold thre_res described in Embodiment 1.
- The present invention is not limited to the above as long as the same effect is obtained.
- The assignment of the pixel value "1" to noise pixels and "0" to original edge pixels may be reversed.
- A signal may also be output in only one of the two cases.
- the image processing device 30 according to the third embodiment is configured by adding first selection information changing means 307 to the image processing device 20 according to the second embodiment.
- the first selection information changing unit 307 appropriately changes the selection information output from the second edge determination unit 205 and outputs the changed change information to the image output unit 106.
- The image output means 106 selects and outputs either the input image or the corrected image based on this change information.
- FIG. 7 is a block diagram showing a configuration of the image processing device 30 according to the third embodiment.
- the components denoted by the same reference numerals as those in FIG. 5 are the same or corresponding components as those in the image processing device 20 of the second embodiment.
- the second edge determination unit 205 performs edge determination from the edge detection input image and the edge detection correction image.
- a pixel determined to be noise (hereinafter referred to as an "isolated pixel") may be mixed into a group of pixels determined to be part of an original edge of the image.
- In that case, since the input image is mixed into the corrected image in the output, the isolated pixel may be perceived as visual noise.
- The first selection information changing unit 307 performs isolated pixel removal processing that removes such isolated pixels.
- the second edge determination unit 205 sets the pixel value of the pixel determined as noise to “1”, and sets the pixel value of the pixel determined to be the original edge portion of the image to “0”.
- the first selection information changing means 307 performs a filtering process using a median filter.
- The median filter examines the pixel values of the pixels in a set area and selects the value held by the majority. For example, if the median filter area is 3 × 3 pixels and the pixel values in that area are {0, 0, 0, 0, 0, 1, 1, 1, 1}, then since there are five pixels with value "0" and four with value "1", all the pixel values after median filtering become 0, that is, {0, 0, 0, 0, 0, 0, 0, 0, 0}. In this way, the first selection information changing unit 307 removes isolated pixels using the median filter.
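The isolated pixel removal above can be sketched with a straightforward 3 × 3 median filter over the binary selection map. This is a hypothetical implementation: the patent does not specify the border handling, so edge replication is assumed here.

```python
import numpy as np

def median_filter_3x3(selection):
    """Majority vote in each 3x3 window of the binary selection map:
    a lone "1" surrounded by "0"s (an isolated pixel) is replaced by the
    majority value "0"."""
    padded = np.pad(selection, 1, mode="edge")  # assumed border handling
    out = np.empty_like(selection)
    rows, cols = selection.shape
    for i in range(rows):
        for j in range(cols):
            # Median of 9 binary values == majority value.
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# An isolated "1" in the middle of the map is removed, while a uniform
# region is left unchanged.
sel = np.zeros((5, 5), dtype=np.uint8)
sel[2, 2] = 1
cleaned = median_filter_3x3(sel)
```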
- That is, the first selection information changing unit 307 changes the binary image output from the second edge determination unit 205, i.e., the selection information, with reference to the selection information of neighboring pixels, and outputs the changed information.
- the image processing apparatus 30 according to Embodiment 3 performs the removal of isolated pixels by the first selection information changing unit 307. Therefore, the quality of the output image is improved.
- The filter used in the first selection information changing means 307 is not limited to a median filter of 3 vertical × 3 horizontal pixels.
- A median filter of another size may be used.
- The median filter area may also have a shape other than a rectangle.
- In general, any processing capable of removing isolated pixels may be used.
- Although the image processing apparatus 30 has been described as the image processing apparatus 20 according to Embodiment 2 with the first selection information changing unit 307 added, the first selection information changing unit 307 may instead be added to the image processing apparatus 10 according to Embodiment 1.
- The image processing apparatus 40 according to the fourth embodiment is configured by adding second selection information changing means 407 to the image processing apparatus 20 according to the second embodiment. Further, the portion corresponding to the image output means 106 is replaced by a first gain controller 408, a second gain controller 409, and an adder 411.
- FIG. 8 is a block diagram showing a configuration of the image processing apparatus 40 according to the fourth embodiment.
- the constituent elements denoted by the same reference numerals as those in FIG. 5 are constituent elements that are the same as or correspond to those of the image processing apparatus 20 of the second embodiment.
- As described in Embodiment 3, isolated pixels may be present in the binary image output from the second edge determination unit 205.
- The second selection information changing means 407 removes such isolated pixels by low-pass filtering. For example, if the pixel values are {0, 1, 0, 1, 1, 0} and the 3-tap low-pass filter {1/4, 1/2, 1/4} is applied, the pixel values after filtering become {0.25, 0.5, 0.5, 0.75, 0.75, 0.25}.
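The numbers in this example can be reproduced directly, assuming zero values beyond both ends of the row.

```python
import numpy as np

# Binary selection row from the text and the 3-tap low-pass filter {1/4, 1/2, 1/4}.
selection = np.array([0, 1, 0, 1, 1, 0], dtype=float)
lpf = np.array([0.25, 0.5, 0.25])

# "same"-mode convolution zero-pads at the ends, matching the example.
k = np.convolve(selection, lpf, mode="same")
print(k)  # [0.25 0.5  0.5  0.75 0.75 0.25]
```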
- The binary image output from the second edge determination means 205 is thus converted by the second selection information changing means 407 into a grayscale image with real values K_{i,j} between 0 and 1, where i, j indicates the position of the pixel.
- That is, the second selection information changing means 407 changes the binary image output from the second edge determination means 205, i.e., the selection information, with reference to the selection information of neighboring pixels, and outputs the changed information.
- Next, the first gain controller (first image output means) 408, the second gain controller (second image output means) 409, and the adder (third image output means) 411 will be described.
- The value K_ij output from the second selection information changing means 407 is input to the first gain controller 408, and the first gain controller 408 performs the processing expressed by Equation 14.
- the second gain controller 409 performs the processing expressed by Equation 15.
- ImgOut_ij = clip(ImgI'_ij + ImgC'_ij)   (16)
- In other words, the first gain controller 408 corrects the corrected image with reference to the change information output from the second selection information changing means 407, and outputs the corrected image after this correction; it thus functions as a first image output means.
- Similarly, the second gain controller 409 corrects the input image with reference to the change information output from the second selection information changing means 407, and outputs the input image after this correction; it thus functions as a second image output means.
- The adder 411 adds the corrected image output from the first gain controller 408 and the corrected input image output from the second gain controller 409, and outputs the sum as the output image; it thus functions as a third image output means.
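The per-pixel flow through the two gain controllers and the adder can be sketched as below. The bodies of Equations 14 and 15 are not reproduced in this text, so this sketch assumes, from the surrounding description (K_ij going to the first gain controller and 1−K_ij to the second), that Equation 14 scales the corrected image by K_ij and Equation 15 scales the input image by 1−K_ij; the pixel range 0–255 is likewise an assumption.

```python
def clip(v, lo=0, hi=255):
    """Clamp a value to the valid pixel range (the clip of Equation 16)."""
    return max(lo, min(hi, v))

def blend_pixel(k, img_corrected, img_input):
    """Assumed per-pixel processing: K * corrected + (1 - K) * input, clipped.

    Eq. 14 (assumed): ImgC'_ij = K_ij * ImgC_ij     (first gain controller)
    Eq. 15 (assumed): ImgI'_ij = (1 - K_ij) * ImgI_ij (second gain controller)
    Eq. 16:           ImgOut_ij = clip(ImgI'_ij + ImgC'_ij) (adder)
    """
    img_c = k * img_corrected        # first gain controller
    img_i = (1 - k) * img_input      # second gain controller
    return clip(img_c + img_i)       # adder with clipping

# Near an edge (K close to 1) the sharpened pixel dominates;
# in flat regions (K close to 0) the original pixel is kept.
print(blend_pixel(0.75, 200, 100))   # 175.0
```

Because K_ij is a real value rather than a binary flag, the transition between the corrected image and the input image is gradual, which is what distributes the isolated-pixel removal to neighboring pixels.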
- Note that the selection information input to the second selection information changing means 407 is binary information of "0" and "1"; however, the second selection information changing means 407 may treat this binary information as integers with multiple gradations, for example 256 gradations, and perform the low-pass filtering internally on those integers. That is, "0" is mapped to "0", "1" is mapped to "255", and the low-pass filter processing is then performed.
- The filter processing may be performed by floating-point arithmetic or by approximate integer arithmetic. In particular, performing the filtering with integer arithmetic can improve the filtering speed.
- When the second selection information changing means 407 performs the projection to 256 gradations, K_ij and 1−K_ij have been described as being converted into real numbers before being output. Alternatively, this conversion may be omitted: the integer values from "0" to "255" may be output to the first gain controller 408 and the second gain controller 409, and each gain controller may divide by 255 internally to obtain the real value K_ij.
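The integer-arithmetic variant can be sketched as follows. This is an illustrative assumption rather than the patent's exact arithmetic: the 3-tap filter {1/4, 1/2, 1/4} is realized with shifts on values pre-mapped from {0, 1} to {0, 255}, with zero padding, and the gain controller recovers the real K_ij by dividing by 255.

```python
def lowpass_3tap_int(pixels):
    """Integer 3-tap low-pass: (p[i-1] + 2*p[i] + p[i+1]) >> 2,
    applied after mapping binary selection info {0, 1} to {0, 255}."""
    mapped = [0] + [255 * p for p in pixels] + [0]   # zero padding assumed
    return [(mapped[i - 1] + 2 * mapped[i] + mapped[i + 1]) >> 2
            for i in range(1, len(mapped) - 1)]

def k_from_int(v):
    """Division by 255 inside a gain controller to recover the real K_ij."""
    return v / 255

ints = lowpass_3tap_int([0, 1, 0, 1, 1, 0])
print(ints)                           # [63, 127, 127, 191, 191, 63]
print([k_from_int(v) for v in ints])  # approximately 0.25, 0.5, 0.5, ...
```

The shift-based filter stays within integer registers until the final division, which is the source of the speed advantage mentioned above; the results differ from the floating-point values only by rounding (e.g. 127/255 ≈ 0.498 instead of 0.5).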
- As described above, since the image processing apparatus 40 performs the isolated-pixel removal processing with a low-pass filter, the effect of that processing can be distributed to neighboring pixels, which improves the quality of the output image.
- ⁇ is output from the second selection information changing means 407 to the first gain controller 408, and 1-K is output to the second gain controller 409. ,
- the output from the changing unit 407 may be ⁇ , and 1 ⁇ K may be calculated in the second gain controller 409.
- Although the image processing apparatus 40 has been described as being configured by adding the second selection information changing means 407 and related elements to, or replacing elements of, the image processing apparatus 20 according to the second embodiment, the same changes may be applied to the image processing apparatus 10 according to the first embodiment.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
- Studio Devices (AREA)
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05736817.7A EP1876564B1 (en) | 2005-04-27 | 2005-04-27 | Image processing device, image processing method, and information terminal |
JP2006549200A JP3959547B2 (ja) | 2005-04-27 | 2005-04-27 | 画像処理装置、画像処理方法、及び情報端末装置 |
PCT/JP2005/007999 WO2006117844A1 (ja) | 2005-04-27 | 2005-04-27 | 画像処理装置、画像処理方法、及び情報端末装置 |
US11/919,119 US8319840B2 (en) | 2005-04-27 | 2005-04-27 | Image processing device, image processing method, and information terminal apparatus |
US13/618,543 US8542283B2 (en) | 2005-04-27 | 2012-09-14 | Image processing device, image processing method, and information terminal apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2005/007999 WO2006117844A1 (ja) | 2005-04-27 | 2005-04-27 | 画像処理装置、画像処理方法、及び情報端末装置 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/919,119 A-371-Of-International US8319840B2 (en) | 2005-04-27 | 2005-04-27 | Image processing device, image processing method, and information terminal apparatus |
US13/618,543 Division US8542283B2 (en) | 2005-04-27 | 2012-09-14 | Image processing device, image processing method, and information terminal apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006117844A1 true WO2006117844A1 (ja) | 2006-11-09 |
Family
ID=37307651
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/007999 WO2006117844A1 (ja) | 2005-04-27 | 2005-04-27 | 画像処理装置、画像処理方法、及び情報端末装置 |
Country Status (4)
Country | Link |
---|---|
US (2) | US8319840B2 (ja) |
EP (1) | EP1876564B1 (ja) |
JP (1) | JP3959547B2 (ja) |
WO (1) | WO2006117844A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010131296A1 (ja) * | 2009-05-14 | 2010-11-18 | 株式会社 東芝 | 画像処理装置 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8559746B2 (en) * | 2008-09-04 | 2013-10-15 | Silicon Image, Inc. | System, method, and apparatus for smoothing of edges in images to remove irregularities |
JP6316010B2 (ja) * | 2014-01-30 | 2018-04-25 | キヤノン株式会社 | 画像処理装置、画像処理方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0411469A (ja) * | 1990-04-29 | 1992-01-16 | Canon Inc | 撮像装置 |
JPH0627512A (ja) | 1992-07-13 | 1994-02-04 | Olympus Optical Co Ltd | ぶれ画像復元装置 |
JPH0765163A (ja) * | 1993-08-05 | 1995-03-10 | Sony United Kingdom Ltd | 映像データ処理方式 |
JPH10320535A (ja) * | 1997-05-22 | 1998-12-04 | Mitsubishi Heavy Ind Ltd | 背景画像自動生成装置 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0188269B1 (en) * | 1985-01-16 | 1994-04-20 | Mitsubishi Denki Kabushiki Kaisha | Video encoding apparatus |
EP0455445B1 (en) * | 1990-04-29 | 1998-09-09 | Canon Kabushiki Kaisha | Image pick-up apparatus |
EP0455444B1 (en) * | 1990-04-29 | 1997-10-08 | Canon Kabushiki Kaisha | Movement detection device and focus detection apparatus using such device |
JP3912869B2 (ja) | 1997-10-17 | 2007-05-09 | 三菱重工業株式会社 | 交通流計測装置 |
US6411741B1 (en) * | 1998-01-14 | 2002-06-25 | Kabushiki Kaisha Toshiba | Image processing apparatus |
JP4473363B2 (ja) | 1999-05-26 | 2010-06-02 | 富士フイルム株式会社 | 手振れ補正装置およびその補正方法 |
JP2002300459A (ja) * | 2001-03-30 | 2002-10-11 | Minolta Co Ltd | 反復法による画像復元装置、画像復元方法、プログラム及び記録媒体 |
JP2004186901A (ja) | 2002-12-02 | 2004-07-02 | Sony Corp | 撮像装置及び方法、プログラム及び記録媒体 |
EP1434425A3 (en) * | 2002-12-25 | 2006-04-26 | Hitachi Ltd. | Video signal processing apparatus |
JP4817000B2 (ja) * | 2003-07-04 | 2011-11-16 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
JP4250583B2 (ja) | 2004-09-30 | 2009-04-08 | 三菱電機株式会社 | 画像撮像装置及び画像復元方法 |
-
2005
- 2005-04-27 US US11/919,119 patent/US8319840B2/en not_active Expired - Fee Related
- 2005-04-27 JP JP2006549200A patent/JP3959547B2/ja not_active Expired - Fee Related
- 2005-04-27 WO PCT/JP2005/007999 patent/WO2006117844A1/ja active Application Filing
- 2005-04-27 EP EP05736817.7A patent/EP1876564B1/en not_active Not-in-force
-
2012
- 2012-09-14 US US13/618,543 patent/US8542283B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0411469A (ja) * | 1990-04-29 | 1992-01-16 | Canon Inc | 撮像装置 |
JPH0627512A (ja) | 1992-07-13 | 1994-02-04 | Olympus Optical Co Ltd | ぶれ画像復元装置 |
JPH0765163A (ja) * | 1993-08-05 | 1995-03-10 | Sony United Kingdom Ltd | 映像データ処理方式 |
JPH10320535A (ja) * | 1997-05-22 | 1998-12-04 | Mitsubishi Heavy Ind Ltd | 背景画像自動生成装置 |
Non-Patent Citations (3)
Title |
---|
K.-C. TAN ET AL.: "Restoration of Real-World Motion-Blurred Images", GRAPHICAL MODELS AND IMAGE PROCESSING, vol. 53, no. 3, May 1991 (1991-05-01), pages 291 - 299, XP000200580, DOI: doi:10.1016/1049-9652(91)90051-K |
O. HADAR ET AL.: "Restoration of images degraded by extreme mechanical vibrations", OPTICS AND LASER TECHNOLOGY, vol. 29, no. 4, pages 171 - 177, XP004084885, DOI: doi:10.1016/S0030-3992(97)00002-9 |
See also references of EP1876564A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010131296A1 (ja) * | 2009-05-14 | 2010-11-18 | 株式会社 東芝 | 画像処理装置 |
Also Published As
Publication number | Publication date |
---|---|
US20130010142A1 (en) | 2013-01-10 |
EP1876564B1 (en) | 2013-10-23 |
US20100060745A1 (en) | 2010-03-11 |
EP1876564A4 (en) | 2010-04-14 |
JPWO2006117844A1 (ja) | 2008-12-18 |
US8542283B2 (en) | 2013-09-24 |
JP3959547B2 (ja) | 2007-08-15 |
EP1876564A1 (en) | 2008-01-09 |
US8319840B2 (en) | 2012-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108694705B (zh) | 一种多帧图像配准与融合去噪的方法 | |
US9262811B2 (en) | System and method for spatio temporal video image enhancement | |
JP3770271B2 (ja) | 画像処理装置 | |
US9454805B2 (en) | Method and apparatus for reducing noise of image | |
EP2075756B1 (en) | Block-based image blending for camera shake compensation | |
US10491815B2 (en) | Image-processing apparatus, image-processing method, and non-transitory computer readable medium storing image-processing program | |
JPH07274044A (ja) | 時間可変フィルタ係数を用いたビデオ信号ノイズ低減システムおよびノイズ低減方法 | |
EP3944603A1 (en) | Video denoising method and apparatus, and computer-readable storage medium | |
WO2006038334A1 (ja) | 画像撮像装置及び画像復元方法 | |
CN105931213B (zh) | 基于边缘检测和帧差法的高动态范围视频去鬼影的方法 | |
WO2017038395A1 (ja) | 画像処理装置、画像処理方法及びプログラム | |
WO2014054273A1 (ja) | 画像ノイズ除去装置、および画像ノイズ除去方法 | |
JP2004088234A (ja) | ノイズ低減装置 | |
CN111383190A (zh) | 图像处理设备和方法 | |
WO2006117844A1 (ja) | 画像処理装置、画像処理方法、及び情報端末装置 | |
JP2016201037A (ja) | 画像処理装置、画像処理方法及びプログラム | |
CN115035013A (zh) | 图像处理方法、图像处理装置、终端及可读存储介质 | |
JP7039215B2 (ja) | 画像処理装置、画像処理方法、およびプログラム | |
EP2835963B1 (en) | Image processing device and method, and image processing program | |
US12125173B2 (en) | Video denoising method and device, and computer readable storage medium | |
Xu et al. | Interlaced scan CCD image motion deblur for space-variant motion blurs | |
JP3639640B2 (ja) | 動きベクトル検出装置 | |
JP3067833B2 (ja) | 画像処理装置及び方法 | |
JP4753522B2 (ja) | 符号化前にビデオ画像を動き補償再帰フィルタリングする装置、方法及びそれに対応する符号化システム | |
JP7118818B2 (ja) | 画像処理方法、画像処理装置、撮像装置、およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2006549200 Country of ref document: JP |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005736817 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
NENP | Non-entry into the national phase |
Ref country code: RU |
WWW | Wipo information: withdrawn in national office |
Country of ref document: RU |
WWP | Wipo information: published in national office |
Ref document number: 2005736817 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 11919119 Country of ref document: US |