WO2010095460A1 - Image Processing System, Image Processing Method, and Image Processing Program
- Publication number: WO2010095460A1
- Authority: WIPO (PCT)
Classifications
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/251: Analysis of motion using feature-based methods involving models
- G06T3/4053: Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T2207/20212: Image combination
- G06T2207/20224: Image subtraction
Definitions
- The present invention relates to an image processing system, an image processing method, and an image processing program, and in particular to determining, with high accuracy, local regions in which the correspondence between pixels of different images, calculated on the basis of an assumed geometric deformation model, is broken by local movement of the subject.
- As a method of generating a higher-quality image from a plurality of frame images, there is, for example, the multiple-frame degradation inverse transformation method (see, for example, Patent Document 1).
- Here, a sub-pixel difference means a positional shift smaller than the pixel interval.
- The multiple-frame degradation inverse transformation method estimates the positional shift of the subject with sub-pixel accuracy (accuracy finer than the pixel interval) and generates a higher-quality image from the pixel values of the multiple images in which the same part of the subject is imaged.
- FIG. 20 shows an example of an image for estimating the positional deviation amount.
- The image 101 illustrated in FIG. 20A is the reference image, that is, the image serving as the reference among a plurality of input images
- an image 102 illustrated in FIG. 20B is an input image other than the reference image.
- the building 103 and the house 104 in the reference image 101 and the building 105 and the house 106 in the other images 102 are the same subject.
- the position and orientation of the camera at the time of capturing the image 102 are different from those at the time of capturing the reference image 101. For this reason, a shift occurs in the position of the pixel representing the same location between the images 101 and 102.
- a geometric deformation model is assumed in advance, and the amount of displacement is calculated for each pixel based on the deformation model.
- the pixel value of a higher quality image is obtained from a plurality of input images based on the amount of displacement.
- the ML method (Maximum Likelihood method)
- the MAP method (Maximum A Posteriori method)
- Non-Patent Document 2 describes a method for detecting pixels whose misregistration amount is inaccurate; in that method, such pixels are detected based on the magnitude of the pixel-value difference between images.
- Patent Document 2 also describes a method for generating a high-resolution image in consideration of a moving part in an image.
- In the method of Patent Document 2, for a determination pixel subjected to motion determination in a target image other than the reference frame, the maximum and minimum luminance values are calculated over the pixels in the reference frame that surround the determination pixel.
- The maximum value is denoted Vmax, the minimum value Vmin, the luminance value of the determination pixel Vtest, and the threshold value Vth.
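The determination rule of Patent Document 2 (restated later with FIG. 23B: a pixel is judged appropriate when Vtest lies within the range (Vmin - Vth) to (Vmax + Vth)) can be sketched as follows. This is an illustrative reconstruction; the function name and example values are assumptions, not part of the patent.

```python
def is_inappropriate(v_test, surrounding_luminances, v_th):
    """Patent Document 2 style test: the determination pixel is judged
    inappropriate (treated as a moving region) when its luminance
    V_test lies outside [Vmin - Vth, Vmax + Vth], where Vmax and Vmin
    are taken over the surrounding reference-frame pixels."""
    v_max = max(surrounding_luminances)
    v_min = min(surrounding_luminances)
    return not (v_min - v_th <= v_test <= v_max + v_th)
```

A pixel whose luminance is close to just one surrounding pixel but outside the overall range is still rejected by this rule, which is the weakness discussed below with FIG. 23.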
- Japanese Unexamined Patent Publication No. 2000-188680 (pages 3-7); Japanese Patent Laying-Open No. 2005-130443 (paragraphs 0091, 0110, 0112)
- FIG. 21 shows images in which the moon is also photographed as a subject in addition to the building and house shown in FIG. 20.
- FIG. 21A shows the reference image 201
- FIG. 21B shows another image 202.
- the same reference numerals as those in FIG. 20 are assigned to the same objects as those in FIG.
- The building 103 and the house 104 in the reference image 201 appear as the building 105 and the house 106, uniformly deformed, in the other image 202.
- The methods of Non-Patent Document 2 and Patent Document 2 identify regions where the positional-deviation estimate is inaccurate in this way. By excluding such regions when generating a high-quality image, the image quality can be improved compared with the case where the movement of moving objects is not taken into account.
- However, in the method of Non-Patent Document 2, if the change in pixel value between adjacent pixels is large, even an area that is actually suitable for the high-quality image generation process is likely to be judged unsuitable for it. Because such areas are also excluded, the image-quality improvement effect is reduced in regions where the pixel value changes sharply.
- FIG. 22 shows an example in which the pixel value between corresponding pixels changes greatly.
- FIG. 22A shows a subject 301 composed of black and white regions. Suppose that the subject is photographed at different camera positions and postures, and images 302 and 303 are obtained.
- the pixel 304 at the upper right of the image 302 is black.
- Because the image 303 is shifted by 0.5 pixels relative to the image 302, the upper-right pixel 305 of the image 303 is gray, an intermediate color between black and white.
- The pixels 304 and 305 both represent the same portion of the subject 301, but since the pixel 304 is black and the pixel 305 is gray, the difference between their pixel values is large. For this reason, areas such as the pixels 304 and 305, which actually represent the same part, are likely to be judged inappropriate for high-quality image processing, and are then not used in it.
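The magnitude of this effect is easy to check numerically. The concrete values below (0 for black, 255 for white, and modeling the 0.5-pixel shift as a simple average of the two neighbouring intensities) are illustrative assumptions:

```python
# A 0.5-pixel sampling shift across a black/white edge averages the
# two neighbouring intensities, producing a mid-grey pixel.
black, white = 0, 255
grey = (black + white) / 2        # value of pixel 305 after the shift
difference = abs(grey - black)    # difference against pixel 304
```

The difference between corresponding pixels is half the full intensity range even though both pixels show the same part of the subject.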
- FIG. 23 is an explanatory diagram showing an example of such a state.
- the horizontal axis shown in FIG. 23 is the coordinate axis, and the vertical axis represents the luminance value of the pixel.
- a circle represents a pixel of the reference image, and a square represents a pixel of another image.
- the pixel 150 is a determination pixel and the pixels 160 to 162 are pixels surrounding the pixel 150.
- FIG. 23B shows the range within which a pixel is judged not to belong to a moving region.
- As shown in FIG. 23B, a determination pixel is judged appropriate when its luminance value lies within the range (Vmin - Vth) to (Vmax + Vth); otherwise, it is judged to be a pixel in an inappropriate area. The determination pixel 150 illustrated in FIG. 23 is therefore judged to be in an inappropriate area. However, the luminance value of the determination pixel 150 is close to that of the pixel 160, and the pixel 150 can be considered suitable for the high-quality image generation process. In the method described in Patent Document 2, even such a pixel 150 is judged unsuitable for high-quality image processing.
- an object of the present invention is to provide an image processing system, an image processing method, and an image processing program that can determine a local region that does not follow an assumed change with high accuracy.
- An image processing system according to the present invention includes misregistration amount calculation means for calculating the amount of positional deviation between a reference image and a target image, the target image being the image for which the presence or absence of a local region that does not follow an assumed change with respect to the reference image is determined;
- the pixel of the target image and the pixel of the reference image are associated with each other by specifying the pixel of the reference image that is closest to the pixel position of the target image when the target image is corrected so as to eliminate the positional deviation;
- and pixel calculation means calculates a corresponding pixel difference vector, that is, the difference between the vectors corresponding to the associated pixels, and determines, based on the corresponding pixel difference vector and an ellipsoid corresponding to the pixel of the reference image in a predetermined space, whether or not the pixel of the target image is a pixel in such a local region.
- The image processing method according to the present invention calculates the amount of positional deviation between a reference image and a target image, the target image being the image for which the presence or absence of a local region that does not follow an assumed change with respect to the reference image is determined;
- the pixel of the target image and the pixel of the reference image are associated with each other, and a corresponding pixel difference vector, that is, the difference between the vectors corresponding to the associated pixels, is calculated;
- based on the corresponding pixel difference vector and an ellipsoid corresponding to the pixel of the reference image in a predetermined space, it is determined whether or not the pixel of the target image is a pixel in such a local region.
- The image processing program according to the present invention causes a computer to execute: a positional deviation amount calculation process for calculating the amount of positional deviation between a reference image and a target image, the target image being the image for which the presence or absence of a local region that does not follow an assumed change with respect to the reference image is determined;
- a process of associating the pixels of the target image with the pixels of the reference image;
- and a pixel calculation process that calculates a corresponding pixel difference vector, that is, the difference between the vectors corresponding to the associated pixels, and determines, based on the corresponding pixel difference vector and an ellipsoid corresponding to the pixel of the reference image in a predetermined space, whether or not the pixel of the target image is a pixel in such a local region.
- FIG. 1 is a block diagram illustrating an example of an image processing system according to the first embodiment of the present invention. FIG. 2 is an explanatory diagram showing an example of the allowable region. FIG. 3 is an explanatory diagram showing the maximum change vector. FIG. 4 is an explanatory diagram showing the pixels of the target image after correction.
- FIG. 21 shows images in which the moon is photographed as a subject in addition to the building and house shown in FIG. 20. FIG. 22 is an explanatory diagram showing an example in which the pixel value between corresponding pixels changes greatly. FIG. 23 is an explanatory diagram showing an example in which a pixel is judged inappropriate although it should be judged suitable for the high-quality image generation process.
- FIG. 1 is a block diagram showing an example of the image processing system according to the first embodiment of the present invention.
- the image processing system of this embodiment includes a computer (central processing unit; processor; data processing unit) 400 that operates by program control, an image input unit 410, and an image output unit 420.
- the computer 400 includes a positional deviation amount estimation unit 401, an inappropriate area determination unit 402, and an image restoration unit 403.
- The inappropriate area determination unit 402 includes an allowable area calculation unit 404, a pixel difference calculation unit 405, an inappropriate region extraction unit 406, and an inappropriate region storage unit 407.
- A pixel is represented by a vector x_ki whose elements are its pixel values; in the YUV format, x_ki = (Y_ki, U_ki, V_ki)^t, where Y_ki is the Y signal (luminance) of the i-th pixel of the k-th image, U_ki and V_ki are its U and V signals (color-difference signals), and t denotes transposition.
- The image may be expressed in another format; for example, in the RGB format, x_ki is a three-dimensional vector having the R, G, and B components as elements.
- The dimension of the vector x_ki is not limited to three: x_ki may be a one-dimensional vector, or a multidimensional vector of more than three dimensions. In general, if the color space is an r-dimensional space, x_ki is an r-dimensional vector.
- In the following, the case where an image is expressed in the YUV format is taken as an example.
- an image serving as a reference in the positional deviation amount estimation calculation is referred to as a reference image.
- an image for calculating how much the pixel is deviated from the reference image is referred to as a target image.
- Of the K images input to the image input means 410, one is the reference image and the remaining images are target images.
- any image may be used as the reference image.
- The reference image, which serves as the reference in the positional deviation amount estimation calculation, is input to the reference image input unit 411, which stores it.
- The target images are input to the target image input unit 412, which stores them.
- Based on the pixel values of the reference image input to the reference image input unit 411 and of the target image input to the target image input unit 412, the positional deviation amount estimation unit 401 calculates the relative positional shift between the two with sub-pixel accuracy (accuracy finer than one pixel) and stores the amount of displacement. This misregistration is the positional deviation caused by a change represented by the assumed geometric deformation model. The positional deviation amount estimation unit 401 may also specify the conversion that maps the pixel positions of the reference image to their appearance in the target image.
- The variation of the target image with respect to the reference image is expressed by a uniform geometric deformation model such as translation or rotation, by a non-uniform geometric deformation model expressed with an interpolation function such as a B-spline function, or by a combination of these geometric deformation models. That is, the modes of change represented by the assumed geometric deformation model include both uniform changes of the entire image and non-uniform changes of the entire image.
- The positional deviation amount estimation means 401 may store candidate image conversion methods and parameters representing the variation amount in advance, and estimate the positional deviation amount using that information.
- For example, the misregistration amount estimation means 401 may convert the reference image by each candidate method, calculate the per-pixel difference between the pixel values of the converted image and those of the target image, and obtain the sum of the differences; the amount of misregistration is then determined by identifying the conversion method and variation amount that minimize the sum.
- In this example, the geometric deformation model (the type of conversion applied to the reference image) is determined in advance, and the positional deviation amount estimation unit 401 estimates the variation amount (that is, the amount of displacement) within that deformation model.
- For example, when the positional deviation between the reference image and the target image due to a difference in camera position can be expressed by a predetermined conversion such as translation, the positional deviation amount estimation unit 401 may estimate the variation amount (for example, the amount of translation).
- Specifically, the positional deviation amount estimation unit 401 may use a gradient method or the like to compute the parameter indicating the variation amount that minimizes the sum of squared differences between the pixel values of the converted reference image and those of the target image, thereby determining the positional deviation amount. A method for specifying the amount of displacement in this way is described in Reference Document 1 below.
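As a simplified illustration of this estimation, the sketch below finds a purely translational displacement by exhaustively minimising the sum of squared differences over integer shifts. The gradient method and sub-pixel refinement used in practice are omitted, and the function name and search window are assumptions.

```python
import numpy as np

def estimate_translation(reference, target, search=3):
    """Try every integer translation in a small window and keep the one
    minimising the sum of squared pixel differences between the shifted
    reference image and the target image."""
    best_shift, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(reference, (dy, dx), axis=(0, 1))
            cost = float(np.sum((shifted - target) ** 2))
            if cost < best_cost:
                best_shift, best_cost = (dy, dx), cost
    return best_shift
```

In practice a gradient method converges to the minimising parameters without enumerating candidates, and the result is refined to sub-pixel accuracy.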
- In the following, the case where the geometric deformation model is predetermined and the positional deviation amount estimation means 401 estimates the variation amount within that model is described as an example; that is, the form of the conversion from the reference image to the target image is known in advance, and only the amount of variation is estimated.
- the positional deviation amount estimation means 401 obtains the positional deviation amount from the reference image for each target image.
- For each target image, the inappropriate area determination unit 402 identifies areas inappropriate for the high-quality image generation process, using the pixel values of the reference image stored in the reference image input unit 411 and the inter-image positional deviation amount obtained by the positional deviation amount estimation unit 401, and stores those areas.
- the allowable area calculation unit 404 calculates and stores an allowable area using the pixel value of the reference image input to the reference image input unit 411.
- the allowable region is a region in the color space for determining whether or not a pixel in the target image is a pixel in a region following a change represented by an assumed geometric deformation model. In this example, as described above, a geometric deformation model is assumed in advance.
- The allowable area is determined for each individual pixel of the reference image. The target image is corrected so as to eliminate the positional deviation from the reference image, and by specifying the pixel in the reference image that is closest to each pixel of the corrected target image, the pixels of the target image and the pixels of the reference image are associated with each other.
- If the position in the color space indicated by the pixel value of a pixel in the target image falls within the allowable region of the associated reference-image pixel, that pixel of the target image can be said to be a pixel in a region following a change represented by the geometric deformation model assumed in advance (in other words, a region in which local variation due to the movement of an object or the like does not occur).
- the allowable area calculation unit 404 calculates such an allowable area for each pixel of the reference image. Note that the correction of the target image as described above (correction for eliminating misalignment) is performed separately from the allowable area calculation, and the target image is not used in the calculation of the allowable area itself.
- FIG. 2 is an explanatory diagram illustrating an example of the allowable space.
- A position 1403 (x_1i) is the position in the color space indicated by the pixel value of one pixel in the reference image.
- An ellipsoid 1401 is an allowable area obtained for the pixels of the reference image.
- A position 1402 (x_kj) is a position in the color space indicated by a pixel of the target image. Since the position 1402 lies inside the allowable region 1401, that pixel of the target image is determined to be in a region following a change represented by the geometric deformation model assumed in advance.
- If the position in the color space indicated by the pixel value of the target-image pixel is the position 1404 (x_kl) shown in FIG. 2, which lies outside the allowable region 1401, that pixel is determined not to be in a region following a change represented by the previously assumed geometric deformation model.
- the allowable area calculation unit 404 sequentially selects the pixels of the reference image input to the reference image input unit 411, and calculates the allowable area for each pixel.
- When a pixel is selected, the allowable area calculation unit 404 obtains the maximum change vector for that pixel. The allowable area calculation unit 404 then uses an ellipsoid whose radius in the central-axis direction (hereinafter referred to as the central-axis radius) equals the magnitude of the maximum change vector and whose central axis points in the same direction as the maximum change vector.
- The n-dimensional ellipsoid in the n-dimensional color space has n principal axes; the radii of the n - 1 axes other than the central axis (the minor-axis radii) are determined in advance.
- The central axis is the principal axis with the longest radius among the n principal axes of the n-dimensional ellipsoid.
- The allowable area calculation unit 404 sets the central-axis radius to the larger of the magnitude of the maximum change vector and the predetermined minor-axis radius; that is, if the magnitude of the maximum change vector is smaller than the minor-axis radius, an ellipsoid whose major-axis radius equals the minor-axis radius is adopted as the allowable region.
- FIG. 3 is an explanatory diagram showing the maximum change vector.
- FIG. 3A shows a pixel i selected from the reference image and the surrounding pixels adjacent to the pixel i. As shown in FIG. 3A, there are eight pixels around the selected pixel i.
- the allowable area calculation unit 404 obtains a difference vector between the selected pixel i and the surrounding pixels.
- Specifically, the allowable area calculation unit 404 obtains, as a difference vector, the vector formed by subtracting the vector whose elements are the pixel values of the pixel i from the vector whose elements are the pixel values of a surrounding pixel. For example, letting the vectors of the pixels i and j shown in FIG. 3A be x_1i and x_1j, the allowable area calculation unit 404 obtains x_1j - x_1i as the difference vector.
- the direction from the position where the pixel value of the pixel i is the coordinate value to the position where the pixel value of the pixels around the pixel i is the coordinate value is called a pixel value gradient. It can be said that the direction of the difference vector is a pixel value gradient.
- the allowable area calculation unit 404 obtains eight difference vectors.
- the allowable area calculation unit 404 identifies a difference vector having the maximum size among the difference vectors. This vector is referred to as a maximum change vector. Of the pixels around the selected pixel, a pixel forming the maximum change vector is referred to as a maximum change pixel.
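The search for the maximum change vector over the eight neighbours can be sketched as follows, assuming the image is given as an H x W x r array of colour vectors (the function name is an assumption):

```python
import numpy as np

def max_change_vector(image, y, x):
    """Return the difference vector (neighbour minus selected pixel, in
    colour space) with the largest Euclidean norm among the eight
    neighbours of the pixel at (y, x)."""
    centre = image[y, x].astype(float)
    best_vec, best_norm = None, -1.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue  # skip the selected pixel itself
            diff = image[y + dy, x + dx].astype(float) - centre
            norm = float(np.linalg.norm(diff))
            if norm > best_norm:
                best_vec, best_norm = diff, norm
    return best_vec
```

The neighbour that yields this vector is the maximum change pixel of the text.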
- the difference vector obtained by the allowable area calculation unit 404 is a difference vector between a pixel in the reference image and its surrounding pixels, and can be called a difference vector between adjacent pixels.
- the pixel j is the maximum change pixel.
- a position in the color space (YUV space in this example) indicated by the pixel value of the selected pixel is defined as a position 1703 (see FIG. 3B).
- a position in the color space indicated by the pixel value of the maximum change pixel j is defined as a position 1706 (see FIG. 3B).
- the maximum change vector is a vector 1705 shown in FIG.
- The maximum change vector obtained for the pixel i in the reference image is denoted Δx_1i.
- The allowable area calculation unit 404 calculates the ellipsoid that is centered on the position in the color space indicated by the pixel value of the selected pixel i, whose central-axis radius is the magnitude of the maximum change vector Δx_1i, and whose central axis points in the same direction as the maximum change vector, and stores the parameters of that ellipsoid.
- the major axis is the central axis.
- the allowable area calculation unit 404 sets the ellipsoid as an allowable area.
- The allowable region is specified so that, with the position in the color space indicated by the pixel value of the selected pixel i as the center, the distances from the center to the two vertices on the central axis are equal; that is, the spread of the ellipsoid from its center point is not biased in either direction of the central axis. For example, in FIG. 3A, even if the pixel values of the neighbors other than the pixel j are close to that of the pixel i, the ellipsoid is not defined so that the half-axis is long on one side and short on the other.
- the direction of the maximum change vector differs for each selected pixel. Further, the greater the change in pixel value between the pixel i and the surrounding pixels, the greater the maximum change vector. Therefore, the allowable area calculation unit 404 determines an allowable range according to the direction and size of the maximum change vector for each pixel in the reference image.
- the position in the color space indicated by the pixel value of the selected pixel i is set as the center of the ellipsoid, and the direction of the center axis and the radius of the center axis of the ellipsoid are determined by the maximum change vector.
- Alternatively, the allowable area calculation unit 404 may obtain the average of the pixel values of the selected pixel i and its surrounding eight pixels. For example, for pixels in the YUV format, it calculates the average value of the Y signal, the average value of the U signal, and the average value of the V signal over the selected pixel and its surrounding pixels.
- The allowable area calculation unit 404 may determine the position in the color space indicated by the average values of the Y, U, and V signals as the center position of the ellipsoid. It then calculates the covariance matrix of these pixel vectors and, by principal component analysis, obtains the direction of the first principal axis (the central axis) and the first principal component scores; the central-axis radius is determined from the largest of the first principal component scores.
- More generally, the allowable area calculation unit 404 obtains the n-th principal axis and the n-th principal component scores by principal component analysis and sets the direction of the n-th axis of the ellipsoid to the direction of the n-th principal axis (n = 1, ..., r for an r-dimensional color space).
- L is a predetermined constant that determines the size of the radii.
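This PCA-based variant can be sketched as below. The patent does not spell out here exactly how the radii are derived from the principal component scores, so scaling each radius as L times the square root of the corresponding eigenvalue of the covariance matrix, with a predetermined minimum radius as a floor, is an assumption.

```python
import numpy as np

def ellipsoid_from_neighbourhood(pixel_vectors, L=1.0, min_radius=1.0):
    """Centre = mean of the selected pixel and its eight neighbours;
    axis directions = principal axes of their covariance matrix;
    radius along the n-th axis = max(L * sqrt(n-th eigenvalue),
    min_radius)."""
    pts = np.asarray(pixel_vectors, dtype=float)      # shape (9, r)
    centre = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts, rowvar=False))
    order = np.argsort(eigvals)[::-1]                 # largest first
    radii = np.maximum(L * np.sqrt(np.clip(eigvals[order], 0.0, None)),
                       min_radius)
    return centre, eigvecs[:, order], radii
```

The first principal axis (the one with the largest eigenvalue) plays the role of the central axis in the text.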
- The pixel difference calculation unit 405 corrects the target image so as to eliminate the positional deviation estimated by the positional deviation amount estimation unit 401. Since the positional deviation amount estimation unit 401 has estimated the misregistration amount, the pixel difference calculation unit 405 need only apply the inverse of the conversion from the reference image to the target image so as to cancel that amount.
- the difference vector calculated by the pixel difference calculation means 405 is a difference vector of pixels corresponding between the reference image and the target image, and can be called a corresponding pixel difference vector.
- the positional deviation amount estimation means 401 estimates the positional deviation amount with subpixel accuracy. Therefore, when the target image is corrected so as to eliminate the positional deviation, the pixel of the corrected target image does not always match the pixel of the reference image.
- FIG. 4 is an explanatory diagram illustrating the pixel of the target image and the pixel of the reference image after correction. FIG. 4 shows a two-dimensional plane indicating where the pixel exists in the image. Pixels indicated by circles are reference image pixels. Pixels indicated by triangles are pixels of the target image after correction, and similarly pixels indicated by squares are pixels of another target image after correction. Since correction is performed so as to eliminate the positional deviation estimated with subpixel accuracy, the pixels of each target image do not necessarily match the pixels of the reference image, as shown in FIG.
- the pixel difference calculation unit 405 specifies a pixel in the reference image that is closest to the pixel position (position in the image) of the target image after correction, and determines that the two pixels correspond to each other. As a result, the correspondence between the pixel of the target image and the pixel of the reference image is determined. Then, a difference vector of vectors (vectors in the color space) having the pixel values of the associated pixels as elements is calculated. For example, the pixel difference calculation unit 405 identifies the pixel 1502 in the reference image closest to the pixel 1501 in the target image shown in FIG. 4 and associates the pixels 1501 and 1502 with each other.
- the pixel difference calculation unit 405 calculates a difference vector obtained by subtracting a vector having the pixel value of the pixel 1502 as an element from a vector having the pixel value of the pixel 1501 as an element.
- the pixel difference calculation unit 405 calculates a difference vector for each pixel of each target image.
- instead of directly pairing each pixel of the target image with its nearest pixel of the reference image, the pixel difference calculation means 405 may obtain a pixel value at the position of the target-image pixel by interpolating from the pixels of the reference image, and calculate the difference vector between the vector whose elements are the interpolated pixel values and the vector indicated by the pixel value of the target-image pixel.
- for example, the pixel value of the reference image at the position of the pixel 1501 shown in FIG. 4 may be calculated by bilinear interpolation or bicubic interpolation using the pixel values of the reference image. Then, the difference vector between the vector whose elements are the interpolated values and the vector whose elements are the pixel values of the pixel 1501 may be calculated.
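- the nearest-pixel association and corresponding-pixel difference vectors described above can be sketched as follows (numpy; the array shapes and the function name are illustrative assumptions, not from the patent):

```python
import numpy as np

def corresponding_pixel_differences(target_yuv, target_positions, reference_yuv):
    """For each corrected target-image pixel, find the nearest reference-image
    pixel on the integer grid and return the color-space difference vectors
    (target-pixel color minus associated reference-pixel color).

    target_yuv:       (N, 3) YUV values of the target-image pixels.
    target_positions: (N, 2) sub-pixel (row, col) positions of those pixels
                      after the positional deviation correction.
    reference_yuv:    (H, W, 3) YUV values of the reference image.
    """
    h, w, _ = reference_yuv.shape
    # nearest reference pixel: round the corrected sub-pixel position to the grid
    rows = np.clip(np.rint(target_positions[:, 0]).astype(int), 0, h - 1)
    cols = np.clip(np.rint(target_positions[:, 1]).astype(int), 0, w - 1)
    return target_yuv - reference_yuv[rows, cols]
```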
- the inappropriate region extraction unit 406 uses the allowable region obtained by the allowable region calculation unit 404 and the difference vector obtained by the pixel difference calculation unit 405 to determine whether each pixel of the target image is a pixel in a region that does not follow the change represented by the assumed geometric deformation model. More specifically, for each pixel of the target image, the inappropriate area extraction unit 406 uses the allowable area obtained for the corresponding pixel of the reference image and the difference vector obtained for the two pixels to determine whether the position in the color space indicated by the pixel value of the target-image pixel is outside the allowable area.
- if the position in the color space indicated by the pixel value of the pixel of the target image is outside the allowable region, the inappropriate region extraction unit 406 determines that the pixel is a pixel in a region not following the change represented by the geometric deformation model assumed in advance. On the other hand, if that position is within or on the boundary of the allowable region, the inappropriate region extraction unit 406 determines that the pixel is a pixel in a region following the change represented by the assumed geometric deformation model.
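- the outside-the-ellipsoid test can be sketched as follows (numpy; the perpendicular semi-axis b is an illustrative assumption, since the text fixes only the central axis by the maximum change vector):

```python
import numpy as np

def is_outside_allowable_region(diff, g, b=1.0):
    """Return True when a corresponding-pixel difference vector `diff` falls
    outside the ellipsoidal allowable region in color space.

    The ellipsoid is centered on the reference pixel's color position; its
    central axis points along the maximum change vector `g` with semi-axis
    length |g|.  The perpendicular semi-axis `b` is an illustrative
    assumption -- the text fixes only the central axis.
    """
    a = np.linalg.norm(g)
    if a == 0.0:  # no local change: fall back to a sphere of radius b
        return np.linalg.norm(diff) > b
    axis = g / a
    u = np.dot(diff, axis)      # component along the central axis
    v = diff - u * axis         # component perpendicular to it
    return (u / a) ** 2 + np.dot(v, v) / b ** 2 > 1.0
```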
- a region that does not follow the change represented by the assumed geometric deformation model is inappropriate for high-quality image generation, and is simply referred to as an inappropriate region. Conversely, a region that follows that change is appropriate for high-quality image generation, and is simply referred to as an appropriate region.
- the inappropriate area storage unit 407 stores the pixel number of the inappropriate area calculated by the inappropriate area extraction unit 406.
- This pixel number is a pixel number for identifying a pixel in the image and may be determined in advance for each pixel.
- that is, the pixel numbers may be stored in the inappropriate region storage unit 407, or information specifying the pixels of the inappropriate area may be stored in a form other than pixel numbers.
- based on the result of determining whether each pixel of the target image corresponds to the inappropriate region, the inappropriate region extraction unit 406 may also supply the image restoration unit 403 with a reliability indicating whether the pixel belongs to the appropriate region.
- the image restoration unit 403 restores a higher-quality image based on the positional deviation amounts estimated by the positional deviation amount estimation unit 401, the pixel numbers of the inappropriate regions stored in the inappropriate region storage unit 407, and the pixel values of the plurality of images stored in the image input unit 410. This high-quality image is referred to as a restored image.
- the image restoration unit 403 will be described more specifically.
- a column vector in which only luminance values (Y signals in YUV format) are arranged in a predetermined order among pixel values of the k-th input image (k is an integer from 1 to K) in the K input images is defined as I k . If the number of pixels in each input image is N, I k is an N-dimensional vector of (Y k1 , Y k2 ,..., Y ki ,..., Y kN ) t .
- a column vector in which only the luminance values (Y signals in YUV format) of the restored image restored by the image restoration unit 403 are arranged in a predetermined order is set as a vector T.
- the restored image is an image having a higher resolution than the input image (that is, an image having a larger number of pixels), and the number of pixels of the restored image is M.
- the vector T is an M-dimensional vector.
- the order in which the luminance values are arranged in I k and T may be, for example, a raster scan order or another order.
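- the raster-scan vectorization of the pixel values into I k can be sketched as follows (numpy, illustrative):

```python
import numpy as np

# a 2x3 image of luminance (Y) values and its raster-scan column vector I_k
y = np.array([[10, 20, 30],
              [40, 50, 60]])
i_k = y.flatten()  # raster-scan order: each row left to right, top to bottom
assert i_k.tolist() == [10, 20, 30, 40, 50, 60]
assert i_k.shape[0] == y.size  # N is simply the pixel count of the image
```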
- the image restoration unit 403 obtains the luminance value (Y signal) of each pixel of the restored image by obtaining the vector T that minimizes the value of the evaluation function E[T] expressed by the following Equation (1).
- D, B, and Q are predetermined matrices.
- D and B are matrices representing low-pass filters representing downsampling and camera blur, respectively.
- Q is a matrix representing a high-pass filter.
- M k is a matrix that represents a geometric deformation of the restored image, and specifically, a matrix that represents a transformation that applies a geometric deformation (for example, translation or rotation) of the target image to the reference image on the restored image. It is.
- the image restoration unit 403 may determine a matrix M k that causes the positional deviation amount estimated by the positional deviation amount estimation unit 401 to be generated in the restored image.
- M k itself does not involve enlargement or reduction of the image, and reducing the image by reducing the number of pixels of the restored image to the number of pixels of the input image is represented by a matrix D. Therefore, the correspondence between the pixel value of the target image (in this example, the luminance value) and the pixel value of the restored image is determined by the combination of D and Mk .
- the matrix S k in Equation (1) is a diagonal matrix; as described later, its diagonal components reflect whether each pixel of the k-th image belongs to an inappropriate region.
- the first term on the right side of Equation (1) is called the error term. It minimizes the difference between the luminance values actually observed in the k-th image (vector I k ) and the values obtained by applying the positional deviation, camera blur, and downsampling to the luminance values of the restored image (vector T).
- the second term on the right side of Equation (1) is called a regularization term and is a term for preventing numerical instability.
- λ is a weight representing the strength of the regularization term.
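- Equation (1) itself appears only as an image in the published patent and is not reproduced in this text. Based on the descriptions of the error term, the regularization term, and the matrices D, B, M k , Q, and S k above, one standard reconstruction-based super-resolution form consistent with those descriptions is (the exact placement of S k inside the norm is an assumption):

```latex
E[T] \;=\; \sum_{k=1}^{K} \left\| S_k \left( I_k - D\,B\,M_k\,T \right) \right\|^2 \;+\; \lambda \left\| Q\,T \right\|^2
```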
- the image restoration unit 403 may use, for example, a conjugate gradient method or a Gauss-Newton method as an optimization method for minimizing the evaluation function E [T] of Expression (1).
- the image restoration means 403 may perform calculation until E [T] sufficiently converges to the minimum value using the conjugate gradient method or Gauss-Newton method, and obtain the vector T (that is, each Y signal of the restored image).
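- since E[T] is quadratic in T, minimizing it amounts to solving its normal equations, which the conjugate gradient method handles directly. The following is a minimal illustrative sketch (the function names and the dense-matrix representation are ours, not the patent's); A_k stands for the combined operator D B M k :

```python
import numpy as np

def conjugate_gradient(A, b, iters=200, tol=1e-10):
    """Minimal conjugate gradient solver for A x = b, with A symmetric
    positive definite (illustrative only)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def restore_luminance(I_list, A_list, S_list, Q, lam):
    """Minimize E[T] = sum_k ||S_k (I_k - A_k T)||^2 + lam * ||Q T||^2,
    where A_k stands for the combined operator D B M_k of the text.
    E[T] is quadratic in T, so its minimizer solves the normal equations:
    (sum_k A_k^t S_k^t S_k A_k + lam Q^t Q) T = sum_k A_k^t S_k^t S_k I_k.
    """
    m = Q.shape[1]
    lhs = lam * Q.T @ Q
    rhs = np.zeros(m)
    for I_k, A_k, S_k in zip(I_list, A_list, S_list):
        W = S_k.T @ S_k
        lhs = lhs + A_k.T @ W @ A_k
        rhs = rhs + A_k.T @ W @ I_k
    return conjugate_gradient(lhs, rhs)
```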
- the image restoration unit 403 may obtain the color difference components of the restored image, that is, the U signal and V signal of each pixel, as follows. That is, the image restoration unit 403 enlarges the reference image by bilinear interpolation or bicubic interpolation, and uses the U signal and V signal of each pixel of the enlarged image as the U signal and V signal of the corresponding pixel of the restored image.
- alternatively, the image restoration unit 403 may enlarge any one of the target images instead of the reference image.
- in this case, the image restoration unit 403 enlarges the target image by bilinear interpolation or bicubic interpolation. Then, only the U signal is extracted from the enlarged image and arranged into a column vector in a predetermined order (for example, raster scan order), and the column vector is multiplied by the transformation matrix M k ; the value of each element of the resulting vector is the U signal of the corresponding pixel of the restored image.
- similarly, the image restoration means 403 creates a column vector in which only the V signal is extracted from the enlarged image and arranged in the predetermined order, and multiplies it by the transformation matrix M k ; each element of the resulting vector is the V signal of the corresponding pixel of the restored image.
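- the enlargement step used for the chroma channels can be sketched as follows (a minimal bilinear resize in numpy, ours for illustration; real code would typically use a library resize, bilinear or bicubic as the text suggests):

```python
import numpy as np

def upscale_chroma_bilinear(channel, factor):
    """Enlarge one chroma channel (U or V) by bilinear interpolation.
    Positions in the enlarged grid are mapped back into the source grid
    and blended from the four surrounding source pixels."""
    h, w = channel.shape
    ys = np.linspace(0.0, h - 1, h * factor)
    xs = np.linspace(0.0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = channel[y0][:, x0] * (1 - wx) + channel[y0][:, x1] * wx
    bot = channel[y1][:, x0] * (1 - wx) + channel[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

the enlarged channel would then be flattened in a predetermined order (for example, raster scan order) before applying M k , as described above.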
- in the above, the U signal and V signal of the restored image are determined from the enlarged image obtained by interpolating the reference image or a target image. Alternatively, the U signal and the V signal may be calculated in the same manner as the luminance value (Y signal).
- that is, letting I k be the column vector obtained by arranging only the U signals among the pixel values of the k-th input image (k is an integer from 1 to K) in a predetermined order, and letting T be the column vector obtained by arranging only the U signals of the restored image in the same order, the U signal of each pixel of the restored image may be obtained by finding the vector T that minimizes the value of the evaluation function E[T].
- the V signal of each pixel of the restored image may be obtained similarly.
- the image output unit 420 outputs (for example, displays) the restored image generated by the image restoration unit 403.
- the image output means 420 is realized by a display device, for example.
- the positional deviation amount estimation unit 401, the allowable region calculation unit 404, the image difference calculation unit 405, the inappropriate region extraction unit 406, and the image restoration unit 403 are realized, for example, by a CPU of a computer operating according to an image processing program. That is, the CPU may read an image processing program stored in program storage means (not shown) and, in accordance with the program, operate as the positional deviation amount estimation means 401, the allowable area calculation means 404, the image difference calculation means 405, the inappropriate area extraction means 406, and the image restoration unit 403. Alternatively, each means may be realized by a dedicated circuit.
- FIG. 5 is an explanatory diagram showing an example of processing progress in the first embodiment.
- the reference image input unit 411 stores the reference image (step S501).
- the allowable area calculation unit 404 calculates a parameter representing the allowable area for each pixel of the reference image, and stores the parameter (step S502). For example, as described above, an ellipsoid is determined whose center is the position in the color space indicated by the pixel value of the pixel selected from the reference image, whose central axis direction is the direction of the maximum change vector for that pixel, and whose central-axis radius is the magnitude of that maximum change vector; the parameters representing this allowable area are then stored.
- the target image input unit 412 stores the target image (step S503).
- the misregistration amount estimation means 401 estimates the misregistration amount of the target image with respect to the reference image with subpixel accuracy and stores it (step S504).
- the pixel difference calculation unit 405 corrects the target image so as to eliminate the positional shift estimated by the positional shift amount estimation unit 401, associates each pixel of the corrected target image with a pixel of the reference image, and calculates the difference vector for each pair of associated pixels (step S505).
- the inappropriate area extraction unit 406 determines, for each pixel of the target image input in step S503, whether the pixel belongs to an inappropriate area, using the difference vector calculated for that pixel and the allowable area of the corresponding pixel of the reference image. Pixels determined to belong to an inappropriate area are then stored, for example by pixel number, in the inappropriate area storage unit 407 (step S506).
- if the target image subjected to the processing in steps S504 to S506 is not the last target image (No in step S507), the processing from step S503 is repeated. For example, when the next target image is input to the target image input unit 412, the target image input unit 412 stores it (step S503), and steps S504 to S506 are performed on that target image.
- when the processing has been performed for the last target image (Yes in step S507), the image restoration unit 403 generates a composite image (high-resolution restored image) using the inappropriate region information stored in the inappropriate area storage unit 407 and the reference image and target images stored in the image input unit 410 (step S508).
- the image restoration unit 403 outputs the generated restored image to the image output unit 420. For example, the restored image is displayed on the display device.
- as described above, in the present embodiment an allowable area corresponding to the pixel values of each pixel and its surrounding pixels is defined as an ellipsoid in the color space.
- since the allowable area is determined individually according to the color change around each pixel of the reference image, a region in which pixel values change sharply is not easily misjudged as a local region (inappropriate region) that does not follow the change represented by the assumed geometric deformation model. As a result, a reduction in the image quality improvement effect when creating the restored image can be prevented.
- further, since an ellipsoid centered on the position in the color space indicated by the pixel value of each pixel of the reference image is set as the allowable region, it is possible to determine more appropriately whether a region is a local region that does not follow the change represented by the assumed geometric deformation model. For example, in the example shown in FIG. 23B, the determination pixel 150 corresponding to the pixel 160 can be determined to be in an appropriate region.
- instead of the pixel value itself, a value representing a feature of the image may be used as an element of the vector x ki .
- for example, the value obtained by differentiating the pixel value of each pixel in the x direction or the y direction represents such a feature, and may be used as an element of the vector x ki .
- a vector having both the feature value and the pixel value itself as an element may be used as the vector x ki .
- the space that defines the ellipsoid that is the allowable region may be determined in advance as a space that can express such a vector.
- in the above description, the allowable area calculation unit 404 calculates the allowable area for the reference image. Alternatively, the allowable area calculation unit 404 may calculate the allowable area for the target image and use it to determine the inappropriate area.
- the allowable area calculation unit 404 calculates an allowable area for each pixel of the target image.
- the inappropriate region extraction unit 406 may determine the inappropriate region using the allowable region calculated for each pixel of the target image and the difference vector calculated by the pixel difference calculation unit 405.
- the calculation of the allowable area for each pixel of the target image is the same as the calculation of the allowable area for each pixel of the reference image.
- the allowable area calculation unit 404 may calculate the allowable area for all the pixels of the reference image and the target image, and determine the inappropriate area based on the allowable area.
- let Ra be the allowable area calculated for each pixel of the reference image, and let Rb be the allowable area calculated for each pixel of the target image.
- the inappropriate region extraction unit 406 may determine that the pixel of interest is a pixel in an inappropriate region only when both the determination using the allowable region Ra with the difference vector calculated by the pixel difference calculation unit 405 and the determination using the allowable region Rb with that difference vector indicate that it is a pixel in an inappropriate region.
- alternatively, the inappropriate region extraction unit 406 may determine that the pixel of interest is a pixel in an inappropriate region when at least one of the two determinations indicates so.
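- the two ways of combining the determinations based on Ra and Rb amount to an AND versus an OR of per-pixel boolean maps (numpy, illustrative):

```python
import numpy as np

# per-pixel boolean maps: True where a pixel was judged to be in an
# inappropriate region using Ra (reference image) and Rb (target image)
inapp_by_ra = np.array([True, True, False, False])
inapp_by_rb = np.array([True, False, True, False])

# stricter rule: inappropriate only when both determinations agree (AND)
both = inapp_by_ra & inapp_by_rb
# looser rule: inappropriate when either determination says so (OR)
either = inapp_by_ra | inapp_by_rb

assert both.tolist() == [True, False, False, False]
assert either.tolist() == [True, True, True, False]
```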
- FIG. 6 is a block diagram showing an example of an image processing system according to the second embodiment of the present invention. Constituent elements similar to those in the first embodiment are denoted by the same reference numerals as in FIG. 1, and detailed description thereof is omitted.
- the image processing system according to the second embodiment includes a computer (central processing unit; processor; data processing unit) 400a that operates under program control, an image input unit 410, and an image output unit 420.
- the computer 400a includes a positional deviation amount estimation unit 401, an inappropriate area determination unit 402a, and an image restoration unit 403.
- the inappropriate region determination unit 402a includes an allowable region calculation unit 404a, an image difference calculation unit 405a, an inappropriate region extraction unit 406, an inappropriate region storage unit 407, and a reference high resolution image generation unit 608, and determines whether each pixel of the target image corresponds to an inappropriate area using an image obtained by increasing the resolution of the reference image.
- the inappropriate area extraction unit 406 and the inappropriate area storage unit 407 are the same as those in the first embodiment.
- the image input unit 410, the positional deviation amount estimation unit 401, the image restoration unit 403, and the image output unit 420 are the same as those in the first embodiment.
- the reference high-resolution image generation unit 608 acquires the reference image input to the reference image input unit 411, generates an image with increased resolution by applying interpolation (for example, bilinear interpolation or bicubic interpolation) to the reference image, and stores it. This image is referred to as the reference high resolution image.
- the allowable area calculation unit 404a of the present embodiment determines an allowable area for each pixel of the reference high resolution image stored in the reference high resolution image generation unit 608, and stores parameters representing the allowable area. Except that the reference high-resolution image is used instead of the reference image, this is the same as the allowable area calculation unit 404 of the first embodiment; that is, the method of determining the allowable area for each pixel is the same.
- the pixel difference calculation unit 405a corrects the target image so as to eliminate the positional deviation estimated by the positional deviation amount estimation unit 401 (that is, to eliminate the estimated positional deviation amount). Then, the pixel difference calculation unit 405a identifies a pixel in the reference high-resolution image corresponding to the pixel in the target image after correction, and represents a vector (a vector having the pixel value as an element) indicated by the pixel value of the corresponding pixel. Calculate the difference vector.
- this operation is the same as in the first embodiment except that the image whose pixels are associated with the pixels of the target image is the reference high resolution image.
- FIG. 7 is an explanatory diagram showing the pixels of the target image after correction and the pixels of the reference high-resolution image, on a two-dimensional plane indicating where each pixel lies in the image. Pixels indicated by circles are pixels of the reference high-resolution image, and pixels indicated by triangles are pixels of the target image after correction. Since the reference high-resolution image is obtained by increasing the resolution of the reference image by interpolation, it has more pixels than the target image. In addition, since the pixel difference calculation unit 405a corrects the target image so as to eliminate the positional deviation estimated with sub-pixel accuracy, the pixels of each target image do not always coincide with the pixels of the reference high resolution image.
- the pixel difference calculation unit 405a identifies the pixel in the reference high-resolution image that is closest to the pixel position (position in the image) of the target image after correction, and determines that the two pixels correspond to each other. Then, the pixel difference calculation unit 405a calculates a difference vector of vectors (vectors in the color space) whose elements are pixel values of the associated pixels. For example, the pixel difference calculation unit 405a identifies the pixel 1602 of the reference high-resolution image closest to the pixel 1601 in the target image shown in FIG. 7, and associates the pixels 1601 and 1602.
- the pixel difference calculating unit 405a calculates a difference vector obtained by subtracting a vector having the pixel value of the pixel 1602 as an element from a vector having the pixel value of the pixel 1601 as an element.
- the pixel difference calculation unit 405a calculates a difference vector for each pixel of each target image.
- a pixel in the reference high-resolution image newly generated by interpolation may be associated with a pixel in the target image.
- the inappropriate region extraction unit 406 uses the allowable region obtained by the allowable region calculation unit 404a and the difference vector obtained by the pixel difference calculation unit 405a to determine whether each pixel of the target image is a pixel in a region that does not follow the change represented by the assumed geometric deformation model. This operation may be performed in the same manner as in the first embodiment.
- the inappropriate region extraction unit 406 uses the allowable region obtained for the pixel in the reference high-resolution image corresponding to the pixel in the target image and the difference vector obtained for the two pixels to calculate the pixel in the target image. It may be determined whether or not the position in the color space indicated by the pixel value is outside the allowable area.
- the reference high-resolution image generation unit 608, the allowable area calculation unit 404a, and the pixel difference calculation unit 405a are realized by a CPU that operates according to a program, for example. Alternatively, it may be realized by a dedicated circuit.
- FIG. 8 is a flowchart illustrating an example of processing progress in the second embodiment.
- the reference image input unit 411 stores the reference image.
- the reference high-resolution image generation unit 608 performs bilinear interpolation or bicubic interpolation on the reference image to generate and store a reference high-resolution image (step S701).
- the allowable area calculation unit 404a calculates a parameter representing the allowable area for each pixel of the reference high-resolution image, and stores the parameter (step S702).
- the target image input unit 412 stores the target image (step S703).
- the misregistration amount estimation means 401 estimates the misregistration amount of the target image with respect to the reference image with subpixel accuracy and stores it (step S704).
- the misregistration amount estimation unit 401 may use a reference image stored in the reference image input unit 411. Steps S703 and S704 are the same as steps S503 and S504.
- the pixel difference calculation unit 405a corrects the target image so as to eliminate the positional shift estimated by the positional shift amount estimation unit 401, and associates the pixel in the target image with the pixel in the reference high resolution image. And the difference vector in the group of the matched pixel is calculated (step S705).
- the inappropriate area extraction unit 406 determines, for each pixel of the target image input in step S703, whether the pixel belongs to an inappropriate area, using the difference vector calculated for the pixel and the allowable area obtained for the corresponding pixel of the reference high-resolution image. Pixels determined to belong to an inappropriate area are stored in the inappropriate area storage unit 407 (step S706).
- if the target image subjected to the processing of steps S704 to S706 is not the last target image (No in step S707), the processing from step S703 is repeated. For example, when the next target image is input, the target image input unit 412 stores it (step S703), and steps S704 to S706 are performed on that target image.
- when the processing has been performed for the last target image (Yes in step S707), the image restoration unit 403 generates a composite image using the inappropriate region information stored in the inappropriate area storage unit 407 and the reference image and target images stored in the image input unit 410, and outputs it to the image output unit 420 (step S708).
- the operations in steps S706 to S708 are the same as those in steps S506 to S508.
- in the second embodiment, as in the first embodiment, it is possible to appropriately determine whether or not a region is a local region that does not follow the change represented by the assumed geometric deformation model, and a reduction in the image quality improvement effect when creating the restored image can be prevented.
- since interpolation is performed when generating the reference high-resolution image from the reference image, pixels intermediate between the original pixels are inserted between the pixels of the reference image. Therefore, the distance in the color space (for example, YUV space) between adjacent pixels is shorter in the reference high-resolution image.
- since the allowable area calculation unit 404a determines the allowable area for each pixel of the reference high resolution image, an allowable area that reflects the color change between pixels in the image more finely than in the first embodiment can be determined. As a result, it is possible to determine more appropriately whether a region is a local region that does not follow the change represented by the assumed geometric deformation model, and the image quality improvement effect in the restored image can be further enhanced.
- a feature value such as a value obtained by differentiating a pixel value in the x direction or the y direction may be used as an element of the vector x ki .
- the allowable area may be calculated for the target image.
- in that case, the image processing system includes target image resolution increasing means (not shown) that increases the resolution of the target image by interpolation, operating in the same manner as the reference high-resolution image generation means 608.
- the pixel difference calculation unit 405a corrects the high-resolution target image so as to eliminate the positional deviation, and the pixel closest to the pixel position of the reference image is selected from the pixels of the corrected image. By specifying, the pixel of the target image and the pixel of the reference image that have been increased in resolution may be associated with each other.
- the pixel difference calculation unit 405a calculates a corresponding pixel difference vector between vectors corresponding to the associated pixels.
- further, the allowable area calculation unit 404a sequentially selects the pixels of the target image, specifies the maximum change vector between each selected pixel and its surrounding pixels, and may determine as the allowable area an ellipsoid defined by the size and direction of the maximum change vector and by the pixel value of the selected pixel. This operation is the same as that for obtaining the allowable area for the reference image.
- also in the second embodiment, the allowable area (Ra) may be calculated for each pixel of the reference image and the allowable area (Rb) for each pixel of the target image, and the inappropriate area extraction unit 406 may determine the inappropriate area using both determinations, as described for the first embodiment.
- FIG. 9 is a block diagram showing an example of an image processing system according to the third embodiment of the present invention. Constituent elements similar to those in the first embodiment are denoted by the same reference numerals as in FIG. 1, and detailed description thereof is omitted.
- the image processing system of the third embodiment includes a computer (central processing unit; processor; data processing unit) 400b that operates by program control, an image input unit 410, and an image output unit 420.
- the computer 400b includes a positional deviation amount estimation unit 401, an inappropriate area determination unit 402b, and an image restoration unit 403b.
- the inappropriate region determination unit 402b includes an allowable region calculation unit 404, an image difference calculation unit 405, an inappropriate region extraction unit 406, and a use image generation unit 807, and generates an image used for creating the restored image by replacing the pixel values of the inappropriate regions in the target image with the pixel values of the corresponding pixels of the reference image.
- the allowable area calculation unit 404, the image difference calculation unit 405, and the inappropriate area extraction unit 406 are the same as those in the first embodiment.
- the image input unit 410, the positional deviation amount estimation unit 401, and the image output unit 420 are the same as those in the first embodiment.
- the inappropriate area extraction unit 406 sends information (for example, pixel number) of pixels corresponding to the inappropriate area of each target image to the use image generation unit 807.
- based on the information on the pixels determined to be in the inappropriate region, the use image generation unit 807 replaces the pixel value of each such pixel of the target image with the pixel value of the corresponding pixel of the reference image.
- the reference image and the target image whose pixel values have been replaced by the use image generation unit 807 are used for generating a restored image.
- FIG. 10 is an explanatory diagram showing an example of replacement of pixel values of the target image.
- FIG. 10A is a reference image obtained by photographing the house and the moon
- FIG. 10B is a target image obtained by photographing the same house and the moon.
- the moon and the house shown in each image are shown by solid lines, and the areas corresponding to the areas showing the moons in other images are shown by broken lines.
- a region 1909 shown in FIG. 10B corresponds to a region 1901 representing the moon in the reference image, and corresponds to the background in the target image.
- suppose that the target image differs from the reference image by a uniform change caused by the camera position at the time of shooting, while the moon has moved independently between the shots. When the images are aligned according to that uniform change, the pixels representing the moon in the two images are therefore not associated with each other.
- as a result, the pixels in the region 1909 shown in FIG. 10B are associated with the pixels in the region 1901 shown in FIG. 10A, and similarly the pixels in the region 1902 shown in FIG. 10B are associated with the pixels in the region 1908 shown in FIG. 10A. The pixels in the regions 1909 and 1902 of the target image are then determined to be pixels in an inappropriate region by the inappropriate area extraction unit 406.
- the use image generation unit 807 replaces the pixel value of each pixel determined, as described above, to be a pixel in the inappropriate region with the pixel value of the corresponding pixel in the reference image.
- the use image generation unit 807 replaces the pixel value of the pixel in the area 1909 illustrated in FIG. 10B with the pixel value of the pixel in the area 1901 of the reference image.
- similarly, the pixel value of each pixel in the area 1902 shown in FIG. 10B is replaced with the pixel value of the corresponding pixel in the area 1908 of the reference image.
- as a result of this replacement, in the target image shown in FIG. 10B, the region 1909 becomes a region representing the moon and the region 1902 becomes a background region.
- the use image generation unit 807 stores the reference image and each target image; for a target image whose pixel values have been replaced, the target image after the replacement is stored.
- the image restoration unit 403b generates a restored image using the reference image and the target images stored in the use image generation unit 807. When determining the pixel values of the restored image (for example, the Y signal in the YUV representation), the image restoration unit 403b may obtain the luminance value of each pixel of the restored image by finding the vector T that minimizes the evaluation function E[T] expressed by the following equation (2).
- the image restoration unit 403b is the same as the image restoration unit 403 in the first embodiment except that the expression (2) is used as the evaluation function E [T].
- since the use image generation unit 807 replaces the pixel values of the inappropriate region of the target image with the pixel values of the corresponding pixels of the reference image, no inappropriate region remains in the target image; therefore, every diagonal component of the diagonal matrix S k in equation (1) may be set to 1.
- Expression (2) can be said to be an expression when all the diagonal components of the diagonal matrix S k of Expression (1) are set to 1.
- the use image generation unit 807 and the image restoration unit 403b are realized by a CPU that operates according to a program, for example. Alternatively, they may be realized by a dedicated circuit.
- FIG. 11 is a flowchart illustrating an example of processing progress in the third embodiment. The processing from the input of the reference image to the reference image input unit 411 until the allowable region calculation unit 404 calculates and stores the parameters representing the allowable region for each pixel of the reference image (steps S901 and S902) is the same as steps S501 and S502 of the first embodiment.
- the target image input unit 412 stores the target image (step S903).
- the misregistration amount estimation unit 401 estimates and stores the misregistration amount of the target image with respect to the reference image with subpixel accuracy (step S904).
- the pixel difference calculation unit 405 corrects the target image so as to eliminate the positional shift, and associates the pixel in the target image with the pixel in the reference image.
- the difference vector for each pair of associated pixels is then calculated (step S905).
- the inappropriate area extraction unit 406 determines, for each pixel in the target image, whether or not it is a pixel in the inappropriate region using the difference vector and the allowable region calculated for the pixel. Information on the pixels determined to belong to the inappropriate region is then sent to the use image generation unit 807 (step S906).
- the processing of steps S903 to S906 is the same as that of steps S503 to S506 of the first embodiment.
- the use image generation unit 807 replaces the pixel value of each pixel of the target image input in step S903 that was determined to be a pixel in the inappropriate region with the pixel value of the corresponding pixel in the reference image (step S907). For example, if the target image illustrated in FIG. 10B is input in step S903, the use image generation unit 807 replaces the pixel values of the pixels in the regions 1902 and 1909 of the target image with the pixel values of the pixels in the areas 1908 and 1901 of the reference image (FIG. 10A). The use image generation unit 807 stores the reference image and the target image with the replaced pixel values. When there is no inappropriate region in the target image and no pixel value is replaced, the target image is stored as it is.
- if the target image subjected to the processing in steps S904 to S907 is not the last target image (No in step S908), the processing from step S903 is repeated. For example, if the next target image is input, the target image input unit 412 stores the target image (step S903), and the processing of steps S904 to S907 is performed for that target image.
- when the last target image has been processed, the image restoration unit 403b generates a restored image using the reference image and the target images stored in the use image generation unit 807, and outputs it to the image output unit 420 (step S909).
- the operation in step S909 is the same as that in step S508 except that Expression (2) is used as the evaluation function E [T].
- in the third embodiment, the use image generation unit 807 replaces the pixel values of the regions determined to be inappropriate in each target image and stores the target images, and the image restoration unit 403b generates the restored image using the target images after this replacement. Therefore, the image restoration unit 403b does not need information on the regions determined to be inappropriate, and no memory (inappropriate area storage unit 407) for storing that information is required. Accordingly, the third embodiment enables high-quality image generation processing with a smaller memory storage amount.
- the inappropriate region determination unit 402b may include the reference high resolution image generation unit 608 described in the second embodiment, and the pixel difference calculation unit 405 may calculate the difference vectors from the reference high resolution image and the target image.
- the used image generation unit 807 may replace the pixel value of each pixel determined to be a pixel in the inappropriate region among the pixels of the target image with the pixel value of the pixel in the reference high-resolution image corresponding to the pixel.
- Embodiment 4. In each of the first to third embodiments, a configuration for generating a high-resolution image having more pixels than the input images has been described. In the fourth embodiment, a configuration for generating a blend image having the same resolution as each input image will be described.
- a blend image is an image obtained by calculating an average of pixel values of corresponding pixels between a plurality of images for each pixel.
- FIG. 12 is a block diagram showing an example of an image processing system according to the fourth embodiment of the present invention. Constituent elements similar to those in the first embodiment are denoted by the same reference numerals as those in FIG. 1, and detailed description thereof is omitted.
- the image processing system of the fourth embodiment includes a computer (central processing unit; processor; data processing unit) 400c that operates under program control, an image input unit 410, and an image output unit 420.
- the computer 400c includes a positional deviation amount estimation unit 401, an inappropriate region determination unit 402, and a blend image generation unit 1003.
- the inappropriate region determination unit 402 includes an allowable region calculation unit 404, a pixel difference calculation unit 405, an inappropriate region extraction unit 406, and an inappropriate region storage unit 407.
- the image input unit 410, the positional deviation amount estimation unit 401, the inappropriate region determination unit 402, and the image output unit 420 are the same as those in the first embodiment.
- Blend image generation means 1003 generates a blend image by calculating, for each pixel, the average value of the pixel values of the corresponding pixels in the reference image and each target image. For example, for each pixel of the reference image, the average of the pixel value of that pixel and the pixel values of the corresponding pixels of each target image is obtained. However, the blend image generation means 1003 does not obtain the pixel values of the region representing the moving object in the blend image by averaging; instead, it sets the pixel values of the region representing the moving object in the reference image as those values. Further, when calculating the average of the pixel values of the corresponding pixels in the reference image and each target image, pixels determined to fall within the inappropriate region are excluded from the averaging.
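The averaging rule above, with inappropriate pixels excluded and the moving-object region copied from the reference image, can be sketched roughly as follows. This is a NumPy illustration under assumed inputs (aligned images and precomputed masks), not the patented implementation; all names are hypothetical.

```python
import numpy as np

def blend_images(reference, targets, inappropriate_masks, moving_mask):
    """Per-pixel average of the reference image and aligned target images.

    targets: list of (H, W, 3) arrays aligned to the reference.
    inappropriate_masks: list of (H, W) boolean arrays; True marks pixels
    excluded from the average (inappropriate region in that target).
    moving_mask: (H, W) boolean array; True where the reference shows the
    moving object -- there the reference pixel is copied unchanged.
    """
    total = reference.astype(np.float64).copy()
    count = np.ones(reference.shape[:2])        # reference always counts
    for img, bad in zip(targets, inappropriate_masks):
        good = ~bad
        total[good] += img[good]                # accumulate usable pixels
        count[good] += 1
    blend = total / count[..., None]
    blend[moving_mask] = reference[moving_mask] # copy moving-object region
    return blend
```

Pixels whose corresponding target pixels are all inappropriate simply keep the reference value, since the count stays at 1 there.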
- FIG. 13 is an explanatory diagram showing an example of blend image generation by the blend image generation means 1003.
- An image P0 shown in FIG. 13 is a reference image
- images P1 and P2 are target images
- image B is a blend image.
- the moon and the house represented in each image are illustrated by solid lines
- the region corresponding to the region representing the moon in other images is illustrated by a broken line.
- the regions 2002a and 2004b in the target image P1 correspond to the regions 2002 and 2004 representing the moon in the reference image P0 and the target image P2, respectively.
- in the target image P1, the pixels in the regions 2002a and 2003 are determined to be pixels in the inappropriate region, and in the target image P2, the pixels in the regions 2002b and 2004 are determined to be pixels in the inappropriate region. In the other regions, only the change represented by the geometric deformation model assumed in advance occurs, so they are determined to be appropriate regions.
- the pixels in the region 2002 of the reference image P0 are associated with the pixels in the regions 2002a and 2002b of the target images P1 and P2.
- pixels in the areas 2003, 2003a, and 2003b are associated with each other, and pixels in the areas 2004, 2004a, and 2004b are associated with each other.
- the pixels in the areas 2005, 2006, and 2007 representing the house are associated with each other.
- for each pixel of the reference image, the blend image generation unit 1003 calculates the average value of the pixel values of that pixel and the corresponding pixels in each target image, and sets it as the pixel value in the blend image.
- the areas 2006 and 2007 of the target images P1 and P2 corresponding to the area 2005 of the reference image P0 are not inappropriate areas. Accordingly, the blend image generation unit 1003 calculates the average value of the pixel values of the corresponding pixels in the areas 2005, 2006, and 2007, and sets the average value as the pixel value in the blend image B. In this way, for example, the pixel value of the region 2008 in the blend image B is obtained.
- the blend image generation unit 1003 uses the pixel value of the pixel in the region 2002 representing the moving object in the reference image as the pixel value of the pixel in the region representing the moving object in the blend image. Therefore, the pixels in the regions 2002, 2002a, and 2002b are associated, but the average value of the pixel values is not calculated.
- in the blend image B, the pixel values in the region 2009, which is at the same position as the moving object region 2002 in the reference image, may be set to the same values as the pixel values in the region 2002.
- the determination of whether a pixel of the reference image is a pixel in the region representing the moving object may be made, for example, by checking whether the pixels of every target image associated with that pixel have been determined to be pixels in the inappropriate region; if so, the blend image generation unit 1003 may determine that the pixel of the reference image is a pixel in the region representing the moving object. However, the blend image generation unit 1003 may determine the pixels in the region representing the moving object in the reference image by another method.
- if, among the pixels of the target images corresponding to a pixel of the reference image, there is a pixel determined to be in the inappropriate region, the blend image generation unit 1003 calculates the average value of the pixel values of the reference pixel and only those corresponding target-image pixels determined to be in an appropriate region, and sets it as the pixel value in the blend image.
- the pixels in the region 2003a of the reference image P0 correspond to the pixels in the regions 2003 and 2003b of the target images P1 and P2.
- since the pixels in the region 2003 are determined to be pixels in the inappropriate region, the blend image generation unit 1003 determines the average value of the pixels in the region 2003a and the corresponding pixels in the region 2003b as the pixel value of the blend image B.
- the pixels in the area 2004a of the reference image P0 correspond to the pixels in the areas 2004b and 2004 of the target images P1 and P2. Since the pixels in the area 2004 are determined as the pixels in the inappropriate area, the blend image generation unit 1003 determines the average value of the pixels in the area 2004a and the pixels in the area 2004b corresponding to the pixels as the pixel value of the blend image B. It is determined.
- Blend image generation means 1003 is realized by a CPU that operates according to a program, for example. Alternatively, it may be realized by a dedicated circuit.
- FIG. 14 is a flowchart showing an example of processing progress in the fourth embodiment.
- the same processes as those in the first embodiment are denoted by the same reference numerals as those in FIG.
- the operations (steps S501 to S507) from the input of the reference image to the storage of the information on the pixels in the inappropriate area in each target image in the inappropriate area storage unit 407 are the same as in the first embodiment.
- referring to the information on the pixels in the inappropriate region of each target image, the blend image generation unit 1003 generates a blend image from the reference image and each target image (step S600).
- the blend image generation unit 1003 outputs the blend image to the image output unit 420.
- according to the fourth embodiment, a region where the pixel value changes sharply can be prevented from being easily determined to be a local region (inappropriate region) that does not follow the change represented by the assumed geometric deformation model. Therefore, the pixels in such sharply changing regions can also be used for the blend image generation process, which improves the image quality of the generated blend image. Further, when generating the blend image, the blend image generation unit 1003 does not use the pixel values of the pixels in the inappropriate region for calculating the averages, which also improves the image quality of the blend image.
- the inappropriate region determination unit 402 may include the reference high resolution image generation unit 608 described in the second embodiment and use the reference high resolution image generated by the reference high resolution image generation unit 608 to determine the allowable region and the pixels of the inappropriate region.
- the inappropriate area determination unit 402 may include the use image generation unit 807 described in the third embodiment, and the use image generation unit 807 may replace the pixel values of the pixels in the inappropriate region of each target image with the pixel values of the corresponding pixels in the reference image. As a result, no inappropriate region remains in the target image. The blend image generation unit 1003 may then simply calculate the average of the pixel values of the corresponding pixels in the reference image and the target images, determine it as the pixel value of the blend image, and thereby generate the blend image.
- Embodiment 5. FIG. 15 is a block diagram illustrating an example of an image processing system according to the fifth embodiment of the present invention. Constituent elements similar to those in the first embodiment and the fourth embodiment are denoted by the same reference numerals as those in FIGS. 1 and 12, and detailed description thereof is omitted.
- the image processing system of the fifth embodiment includes a computer (central processing unit; processor; data processing unit) 400d that operates under program control, an image input unit 410, and an image output unit 420.
- the computer 400d includes a misregistration amount estimation unit 401, an inappropriate area determination unit 402d, and a blend image generation unit 1003.
- the image input unit 410, the image output unit 420, the positional deviation amount estimation unit 401, and the blend image generation unit 1003 are the same as those in the fourth embodiment.
- the inappropriate region determination unit 402d includes a maximum change pixel calculation unit 1208, an allowable region calculation unit 1240, a pixel difference calculation unit 405, an inappropriate region extraction unit 406, and an inappropriate region storage unit 407.
- the pixel difference calculation unit 405, the inappropriate region extraction unit 406, and the inappropriate region storage unit 407 are the same as those in the first and fourth embodiments.
- for each target image, the inappropriate region determination unit 402d identifies the inappropriate region using the pixel values of the reference image stored in the reference image input unit 411 and the inter-image positional deviation amounts obtained by the positional deviation amount estimation unit 401, and stores information on that region.
- the maximum change pixel calculation means 1208 specifies the maximum change pixel for each pixel of the reference image stored in the reference image input means 411 and stores the maximum change pixel.
- the maximum change pixel calculation means 1208 may determine the maximum change pixel by specifying the maximum change vector for each pixel of the reference image.
- the maximum change vector may be specified in the same manner as the allowable area calculation unit 404 in the first embodiment.
- the permissible area calculation means 1240 determines the permissible area for each pixel of the reference image using the maximum change pixel specified for the pixel.
- specifically, the allowable area calculation unit 1240 calculates the difference vector (that is, the maximum change vector) between the pixel of the reference image and the maximum change pixel of that pixel. Then, the allowable area calculation unit 1240 may calculate parameters representing an ellipsoid whose center is the position in the color space indicated by the pixel value of the reference image pixel, whose central-axis radius is the magnitude of the maximum change vector, and whose central-axis direction is the direction of the maximum change vector. However, the allowable area calculation unit 1240 does not store the parameters representing the allowable region.
- the maximum change pixel calculation means 1208 may store the maximum change vector itself instead of storing the maximum change pixel. Then, the allowable area calculation unit 1240 may determine the allowable area using the maximum change vector.
- the maximum change pixel calculation means 1208 and the allowable area calculation means 1240 are realized by a CPU that operates according to a program, for example. Alternatively, it may be realized by a dedicated circuit.
- the maximum change pixel calculation means 1208 and the allowable area calculation means 1240 are elements into which the functions of the allowable area calculation means 404 of the first embodiment are divided. In the first embodiment, when the reference image is input, the allowable region is obtained for each pixel of the reference image and the parameters representing it are stored. In the fifth embodiment, by contrast, when the reference image is input, only the maximum change pixel for each pixel is specified and stored, and the parameters of the allowable region are not calculated. Then, when determining whether a pixel of the target image is a pixel in the inappropriate region, the allowable region calculation unit 1240 calculates the allowable region for each target image using the information on the maximum change pixels. By calculating the allowable region for each target image, the parameters representing the allowable region need not be stored, and the memory capacity can be reduced.
- the process of the fifth embodiment will be described with reference to a flowchart.
- FIG. 16 is a flowchart showing an example of processing progress in the fifth embodiment.
- the same steps as in FIG. 5 and FIG. 14 are denoted by the same reference numerals, and their description is omitted.
- the reference image input unit 411 stores the reference image (step S501).
- the maximum change pixel calculating means 1208 specifies and stores the maximum change pixel for each pixel of the reference image (step S502a).
- for example, the maximum change pixel calculation unit 1208 sequentially selects the pixels of the reference image. Then, for a selected pixel i, it calculates the difference vector between the pixel i and each of the surrounding pixels, and specifies, among the pixels around the pixel i, the pixel forming the difference vector with the largest magnitude (the maximum change vector) as the maximum change pixel.
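The neighbour search described above can be sketched as follows. The 8-neighbourhood and the vector-magnitude comparison follow the description; the Euclidean norm, function name, and array layout are assumptions for this illustration.

```python
import numpy as np

def maximum_change_vector(image, y, x):
    """Among the 8 neighbours of pixel (y, x), find the difference vector
    (neighbour colour minus centre colour) with the largest magnitude.

    image: (H, W, 3) array of colour vectors (e.g. YUV).
    Returns the (y, x) position of the maximum change pixel and the
    maximum change vector itself.
    """
    h, w, _ = image.shape
    centre = image[y, x].astype(np.float64)
    best, best_pos = None, None
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                continue                         # skip centre / outside image
            diff = image[ny, nx].astype(np.float64) - centre
            if best is None or np.linalg.norm(diff) > np.linalg.norm(best):
                best, best_pos = diff, (ny, nx)
    return best_pos, best
```

In the fifth embodiment only the position (or a pixel number) of the maximum change pixel would be stored at reference-image input time; the vector is recomputed per target image.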
- the maximum change pixel calculation unit 1208 does not calculate the allowable region. Further, the maximum change pixel calculation unit 1208 may store pixel identification information (for example, a pixel number) as information on the maximum change pixel.
- the processing from the input of the target image (step S503), through correcting the target image to eliminate the positional deviation and associating the pixels in the target image with the pixels in the reference image, to calculating the difference vectors for the associated pixel pairs (step S505), is the same as in the fourth embodiment.
- the allowable area calculation unit 1240 reads, for each pixel of the reference image input to the reference image input unit 411, which pixel is the maximum change pixel of that pixel from the maximum change pixel calculation unit 1208, and calculates the parameters representing the allowable region (step S505a).
- the inappropriate area extraction unit 406 uses, for each pixel in the target image input in step S503, the difference vector calculated for that pixel and the allowable area of the pixel in the reference image corresponding to that pixel. To determine whether the area is inappropriate. Then, the pixel determined to be an inappropriate area is stored in the inappropriate area storage unit 407 (step S506).
- if the target image subjected to the processing in steps S504 to S506 is not the last target image (No in step S507), the processing from step S503 is repeated. For example, if the next target image is input to the target image input unit 412, the target image input unit 412 stores the target image (step S503), and the processing of steps S504 to S506 is performed for that target image. When the target image that has undergone the processing of steps S504 to S506 is the last target image (for example, when no further target image is input), the blend image generation unit 1003 refers to the information on the pixels in the inappropriate region of each target image, generates a blend image, and outputs it to the image output means 420, as in the fourth embodiment.
- in the fifth embodiment, each time the processing moves to step S505a, the allowable area calculation unit 1240 calculates the parameters representing the allowable region of each pixel of the reference image, and the inappropriate area extraction unit 406 uses those parameters for the determination in step S506. Therefore, the parameters representing the allowable region need not be stored, and the memory capacity for storing the allowable region can be reduced.
- also in the fifth embodiment, a region in which the pixel value changes sharply can be prevented from being easily determined to be a local region (inappropriate region) that does not follow the change represented by the assumed geometric deformation model. The image quality of the generated blend image can thus be improved.
- since the parameters representing the allowable region are recalculated for each target image, they need not be stored; only the information on the maximum change pixels needs to be stored, so the memory capacity can be reduced.
- the inappropriate region determination unit 402d may include the reference high-resolution image generation unit 608 described in the second embodiment, and the maximum change pixel calculation unit 1208 may specify the maximum change pixel for each pixel of the reference high-resolution image.
- in this case, the allowable area calculation unit 1240 may calculate the parameters representing the allowable region for each pixel of the reference high resolution image in step S505a.
- alternatively, the inappropriate area determination unit 402d may include the use image generation unit 807 described in the third embodiment, and the use image generation unit 807 may replace the pixel values of the pixels in the inappropriate region of each target image with the pixel values of the corresponding pixels in the reference image. As a result, no inappropriate region remains in the target image. The blend image generation unit 1003 may then calculate the average of the pixel values of the corresponding pixels in the reference image and the target images, determine it as the pixel value of the blend image, and thereby generate the blend image.
- in the fifth embodiment, the configuration for generating a blend image has been shown; however, also in a configuration that generates a restored image, the maximum change pixel calculation means 1208 and the allowable area calculation means 1240 may be provided instead of the allowable area calculation means 404 and 404a, the maximum change pixel may be specified when the reference image is input, and the allowable region for each pixel may be calculated in the loop processing for each target image. Also in this case, the memory capacity can be reduced.
- in each of the above embodiments, a feature value, such as a value obtained by differentiating the pixel value in the x direction or the y direction, may be used as an element of the vector x ki.
- alternatively, the allowable region may be calculated for the target image. In addition, an allowable region (Ra) may be calculated for each pixel of the reference image and an allowable region (Rb) for each pixel of the target image, and the inappropriate area extraction unit 406 may determine whether a pixel of the target image is a pixel in the inappropriate region using the allowable regions Ra and Rb obtained for the reference image and the target image, respectively.
- the image processing system may be configured without the image restoration means 403 and 403b or the blend image generation means 1003 and the image output means 420, and thus without generating a restored image or a blend image. Even with such a configuration, it is possible to determine with high accuracy whether each pixel of the target image belongs to a local region that does not follow the change represented by the assumed geometric deformation model. In this case, another device may generate the restored image or the blend image.
- Embodiment 6. In each of the above embodiments, when determining whether a pixel of the target image is a pixel in the inappropriate region, the determination is made using the ellipsoid for the pixel in the reference image corresponding to that pixel of the target image.
- alternatively, a value may be calculated that represents the degree to which the positions in the color space of a plurality of pixels in the target image exist inside or outside the ellipsoids in the color space for the reference image pixels corresponding to those pixels, and whether a pixel of the target image is a pixel in the inappropriate region may be determined based on that value. In the sixth embodiment, such a determination, performed for each pixel in the target image, will be described.
- FIG. 17 is a block diagram showing an example of an image processing system according to the sixth embodiment of the present invention. Constituent elements similar to those in the first embodiment are denoted by the same reference numerals as those in FIG. 1, and detailed description thereof is omitted.
- the image processing system according to the sixth embodiment includes a computer 400e that operates under program control, an image input unit 410, and an image output unit 420.
- the computer 400e includes a positional deviation amount estimation unit 401, an inappropriate area determination unit 402e, and an image restoration unit 403.
- the inappropriate area determination unit 402e includes a pixel difference calculation unit 405, an inappropriate area extraction unit 500, and an inappropriate area storage unit 407, and determines whether each pixel in the target image corresponds to the inappropriate region.
- the pixel difference calculation unit 405 is the same as the pixel difference calculation unit 405 in the first embodiment, and calculates a corresponding pixel difference vector (corresponding pixel difference vector) between the reference image and the target image.
- the inappropriate area storage unit 407 is the same as the inappropriate area storage unit 407 in the first embodiment.
- the inappropriate area extraction unit 500 determines whether each pixel in the target image is a pixel in the inappropriate area.
- the inappropriate area extraction unit 500 calculates, using the corresponding pixel difference vectors, a value (hereinafter referred to as a determination index value) representing the degree to which the positions in the color space of a plurality of pixels in the target image exist inside or outside the ellipsoids in the color space for the reference image pixels corresponding to those pixels, and determines, based on the determination index value, whether a pixel in the target image is a pixel in the inappropriate region.
- the positional deviation amount estimation unit 401, the pixel difference calculation unit 405, and the inappropriate region extraction unit 500 are realized by a CPU that operates according to a program, for example. Alternatively, it may be realized by a dedicated circuit.
- ε is the axial radius of the ellipsoid other than along the rotation axis, that is, the minor-axis radius of the ellipsoid, and it is determined in advance as a constant.
- I is a unit matrix having the same number of rows and columns as the dimension of the color space. In this example, I is a 3 ⁇ 3 unit matrix.
- ⁇ x 1i shown in Expression (3) is a maximum change vector of the pixel i in the reference image. That is, this is a vector having the largest magnitude among the difference vectors obtained by subtracting the vector indicated by the pixel value of the pixel i from the vector indicated by the pixel values of the surrounding eight pixels adjacent to the pixel i.
- D shown in equation (3) is the central-axis radius of the ellipsoid, that is, the major-axis radius; d is the magnitude of the maximum change vector Δx 1i of the pixel i; and the upper limit d max of the central-axis radius is predetermined as a constant.
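Putting the quantities above together, the point-in-ellipsoid test can be sketched as follows. This is an illustration only: it assumes the central-axis radius is the magnitude d of the maximum change vector capped at d max (a reading of the text, since equation (3) is not reproduced here), and the default values of eps and d_max are invented for the example.

```python
import numpy as np

def inside_ellipsoid(x_ref, dx_max, x_target, eps=4.0, d_max=64.0):
    """Test whether the colour vector of a target pixel lies inside the
    ellipsoid attached to the corresponding reference pixel.

    x_ref: colour vector of the reference pixel (ellipsoid centre).
    dx_max: maximum change vector of that pixel (central-axis direction).
    x_target: colour vector of the associated target pixel.
    eps: minor-axis radius (constant).  d_max: cap on the central-axis
    radius.  Both default values are illustrative, not from the patent.
    """
    v = np.asarray(x_target, float) - np.asarray(x_ref, float)
    d = np.linalg.norm(dx_max)
    big = min(d, d_max) if d > 0 else eps       # central-axis radius
    if d > 0:
        u = np.asarray(dx_max, float) / d       # central-axis direction
        along = v @ u                           # component along the axis
        perp = np.linalg.norm(v - along * u)    # off-axis component
    else:
        along, perp = 0.0, np.linalg.norm(v)    # degenerate: sphere
    return (along / big) ** 2 + (perp / eps) ** 2 <= 1.0
```

A large change along the direction of the maximum change vector is tolerated (up to the central-axis radius), while the same change in any other colour direction falls outside the ellipsoid.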
- the first image input to the image input unit 410 is input to the reference image input unit 411 as a reference image, and the second and subsequent images are input to the target image input unit 412 as target images.
- the reference image and the target image are expressed in the YUV format.
- the image processing system performs the following processing for each target image.
- the target image focused on as a processing target is the k-th (k ⁇ 2) input image.
- the positional deviation amount estimation unit 401 uses the pixel values x i of the reference image, which is the first input image, and the pixel values x k of the target image to calculate the positional deviation amount of the target image with respect to the reference image with sub-pixel accuracy. Here, k is the order of the target image in the entire sequence of input images. It can be said that the positional deviation amount estimation unit 401 calculates at which position coordinate of the reference image each pixel of the target image exists. When estimating the positional deviation amount, the positional deviation amount estimation unit 401 performs the calculation using, for example, a projective transformation model, that is, a deformation model that assumes uniform deformation. When the displacement of most of the regions in the image can be expressed by uniform deformation, the displacement amount can be estimated with a small error for those regions. The positional deviation amount estimation unit 401 stores the estimated positional deviation amount.
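Since the positional deviation is estimated with a projective transformation model, each target-image coordinate can be mapped into the reference frame by a 3 × 3 homography with sub-pixel accuracy. The sketch below illustrates this mapping; the homography values and the function name are assumptions, not values from the patent.

```python
import numpy as np

def warp_to_reference(points, H):
    """Map pixel coordinates (x, y) of the target image into the reference
    image's coordinate system using a 3x3 projective-transformation
    (homography) matrix H; the result has sub-pixel accuracy."""
    pts = np.asarray(points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to (x, y)

# a pure translation by (0.5, -0.25) pixels expressed as a homography
H = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, -0.25],
              [0.0, 0.0, 1.0]])
coords = warp_to_reference([[10.0, 20.0]], H)
```

A full projective model additionally has non-zero entries in the bottom row, which is what lets it express perspective (non-uniform scale) changes while remaining a single global transformation.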
- the pixel difference calculation unit 405 specifies a pixel in the reference image that is closest to the pixel of the k-th target image of interest based on the calculated positional deviation amount.
- a pixel in the target image is j, and a vector represented by the pixel value of the pixel j is xkj .
- a pixel in the reference image determined to be closest to the pixel j is i, and a vector represented by the pixel value is x ki .
- the pixel difference calculation unit 405 associates the pixel i with the pixel j.
- In this way, the pixel difference calculation unit 405 associates each pixel in the target image with a pixel in the reference image and calculates a difference vector.
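The association step described above, rounding each warped target-pixel position to the nearest reference pixel and taking the color-space difference, can be sketched as follows (the array layout and names are assumptions):

```python
import numpy as np

def correspond_and_diff(ref, tgt, warped_xy):
    """For each pixel j of the target image, round its warped reference-frame
    position to find the nearest reference pixel i, then return the
    color-space difference vectors (target value minus reference value)."""
    h, w, _ = ref.shape
    diffs = []
    for (j_y, j_x), (rx, ry) in warped_xy:        # target pixel and its warped position
        i_x = int(np.clip(round(rx), 0, w - 1))   # nearest reference pixel (x)
        i_y = int(np.clip(round(ry), 0, h - 1))   # nearest reference pixel (y)
        diffs.append(tgt[j_y, j_x].astype(float) - ref[i_y, i_x].astype(float))
    return np.array(diffs)

ref = np.zeros((4, 4, 3)); ref[1, 2] = [100.0, 10.0, 10.0]
tgt = np.zeros((4, 4, 3)); tgt[1, 1] = [90.0, 10.0, 40.0]
# target pixel (1,1) warps to reference position (x=1.7, y=1.2) -> nearest is (x=2, y=1)
d = correspond_and_diff(ref, tgt, [((1, 1), (1.7, 1.2))])
```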
- the inappropriate area extraction unit 500 determines whether the pixel j in the target image is a pixel in the inappropriate area based on the determination index value.
- The determination index value is a value representing the degree to which the positions in the color space of a plurality of pixels in the target image exist inside or outside the ellipsoids in the color space of the corresponding pixels in the reference image. The smaller the determination index value, the greater the degree to which the positions in the color space of the plurality of pixels in the target image exist inside the respective ellipsoids.
- the plurality of pixels in the target image include the pixel j itself.
- the plurality of pixels are a plurality of pixels centered on the pixel j.
- Alternatively, only the pixels whose distance from the pixel j to be determined is within a predetermined distance threshold may be used.
- Specifically, the inappropriate region extraction unit 500 calculates the determination index value D1 by performing the calculation exemplified in the following Expression (4).
- j ′ represents a pixel selected from the target image of interest (kth input image).
- i′ represents a pixel in the reference image corresponding to the pixel j′ in the target image. That is, the inappropriate region extraction unit 500 selects individual pixels from the target image, calculates F(j, j′) · C(k, i′, j′) for each selected pixel, and takes the sum as the determination index value D1.
- C(k, i′, j′) can be said to be a value representing whether the position in the color space of the pixel j′ is inside or outside the ellipsoid in the color space of the pixel in the reference image corresponding to the pixel j′.
- the inappropriate region extraction unit 500 calculates and stores the matrix A in advance for each pixel in the reference image.
- To calculate the matrix A means to define an ellipsoid.
- For example, when the reference image is input, the inappropriate region extraction unit 500 may select the pixels of the reference image in order and obtain the inter-adjacent-pixel difference vectors, which are the difference vectors between each selected pixel and its surrounding pixels.
- F(j, j′) in Expression (5) is a weighting coefficient according to the positional relationship (distance) between the pixel j, which is the pixel to be judged as a pixel of the inappropriate region or not, and the pixel j′.
- For example, the weighting coefficient F(j, j′) may be determined so that it increases as the distance between the pixels j and j′ decreases.
- F (j, j ′) may be determined as shown in Expression (6) below.
- In Expression (6), R j,j′ is the distance between the pixels j and j′.
- σ in Expression (6) is a parameter for adjusting the magnitude of F(j, j′). The larger the value of σ is set, the larger the value of F(j, j′) remains even when the distance (R j,j′) between the pixels j and j′ is large.
- F (j, j ′) may be determined as shown in the following formula (7).
- After performing the calculation of Expression (5) and obtaining the determination index value D1, the inappropriate region extraction unit 500 compares D1 with a predetermined threshold. If D1 is greater than the threshold, the inappropriate region extraction unit 500 determines that the pixel j in the target image is a pixel of the inappropriate region. Conversely, if D1 is equal to or smaller than the threshold, the inappropriate region extraction unit 500 determines that the pixel j in the target image is not a pixel of the inappropriate region.
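Expressions (4) through (7) are not reproduced above. The sketch below assumes a Gaussian form for the weighting coefficient F(j, j′) and treats the per-pixel ellipsoid test values C(k, i′, j′) as given inputs, combining them into D1 and comparing against a threshold; the constants and names are illustrative.

```python
import math

SIGMA = 2.0        # assumed value of the parameter sigma in Expression (6)
THRESHOLD = 1.5    # assumed decision threshold for D1

def weight(r):
    """One plausible form of the weighting coefficient F(j, j'):
    larger when pixels j and j' are closer (Gaussian decay with sigma)."""
    return math.exp(-(r / SIGMA) ** 2)

def determination_index(neighbors):
    """D1 as a weighted sum of per-pixel ellipsoid test values C(k, i', j')
    over pixels j' near j; 'neighbors' pairs each distance R_{j,j'} with its
    C value (both assumed example inputs)."""
    return sum(weight(r) * c for r, c in neighbors)

# pixel j itself (distance 0) tests far outside its ellipsoid (C = 4.0),
# while two neighbors at distance 1 are inside (C = 0.2)
d1 = determination_index([(0.0, 4.0), (1.0, 0.2), (1.0, 0.2)])
is_inappropriate = d1 > THRESHOLD
```

Because the sum pools several nearby pixels, a single noisy pixel inside an otherwise well-fitting neighborhood is less likely to be flagged than under a per-pixel test.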
- the inappropriate area extraction unit 500 performs the above determination on each pixel in the target image, and stores the pixels determined to be pixels in the inappropriate area in the inappropriate area storage unit 407.
- the image processing system repeats the same process for the next target image.
- the image restoration unit 403 restores the images.
- Although the determination index value D1 is calculated by Expression (4) above, the determination index value may be calculated by a method other than Expression (4).
- For example, the inappropriate region extraction unit 500 may obtain D2 by the calculation of Expression (8) shown below and use D2 as the determination index value.
- For D2 as well, the inappropriate region extraction unit 500 compares D2 with a predetermined threshold; if D2 is greater than the threshold, it determines that the pixel j is a pixel of the inappropriate region, and if D2 is equal to or smaller than the threshold, it determines that the pixel j is not a pixel of the inappropriate region.
- In the above description, the case where the inappropriate region extraction unit 500 calculates and stores the matrix A (in other words, an ellipsoid) for each pixel in the reference image in advance has been shown.
- Alternatively, instead of calculating and storing the matrix A itself in advance, the maximum change pixel of each pixel in the reference image may be stored in advance, as in the fifth embodiment.
- the matrix A may be calculated with reference to the maximum change pixel.
- the pixels of the reference image are selected in order, a difference vector between adjacent pixels is calculated from the selected pixel and the maximum change pixel of the selected pixel, and the matrix A is calculated from the difference vector between adjacent pixels.
- Even in the configuration where the maximum change pixel of each pixel in the reference image is stored and the matrix A of each pixel in the reference image is calculated for each target image, the same effect as in the present embodiment can be obtained.
- The inappropriate area determination unit 402e may include a reference high-resolution image generation unit 608 (see FIG. 6), as in the second embodiment, and the reference high-resolution image generation unit 608 may generate a reference high-resolution image. Then, the pixel difference calculation unit 405 and the inappropriate area extraction unit 500 may perform the same processing as described above using the reference high-resolution image and the target image. In this case, the same effect as in the second embodiment can be obtained.
- The computer 400e may include the use image generation unit 807 and the image restoration unit 403b described in the third embodiment instead of the inappropriate area storage unit 407 and the image restoration unit 403. In this case, the same effect as the third embodiment can be obtained.
- the computer 400e may be configured to include the blend image generation unit 1003 shown in the fourth embodiment instead of the image restoration unit 403. In this case, the same effect as the fourth embodiment can be obtained.
- a target image high-resolution means for interpolating the target image to increase the resolution may be provided.
- In that case, the pixel difference calculation unit 405 identifies, from the pixels of the high-resolution target image corrected so as to eliminate the positional deviation, the pixel closest to each pixel position of the reference image, thereby associating each pixel in the high-resolution target image with a pixel in the reference image.
- The pixel difference calculation unit 405 may then calculate a corresponding pixel difference vector for each pair of associated pixels.
- An embodiment of the present invention will be described with reference to FIG. 1. This example corresponds to the first embodiment.
- a video capture board capable of inputting and outputting NTSC (National Television System Committee) signals is used as the image input means 410.
- a display device is used as the image output means 420.
- an image processing board equipped with an image processing processor is used as the computer 400.
- The video capture board converts the input video signal into a YUV signal and sends it to the image processing board. When the processing result of the image processing board is transferred back, the video capture board converts it into a video signal and displays the image on the display device.
- The image processing board includes the positional deviation amount estimation unit 401, the inappropriate region determination unit 402 (the allowable region calculation unit 404, the pixel difference calculation unit 405, the inappropriate region extraction unit 406, and the inappropriate region storage unit 407), and the image restoration unit 403.
- The reference image input unit 411 stores the input reference image.
- the first image among a plurality of input images is used as a reference image.
- the reference image and the target image are expressed in the YUV format.
- the allowable area calculation unit 404 sequentially selects the pixels of the reference image.
- the allowable area calculation unit 404 obtains a parameter representing the allowable area as the matrix A shown in the already described equation (3).
- α is a radius in the axial direction other than the rotation axis of the ellipsoid serving as the allowable region.
- α is the minor axis radius of the ellipsoid.
- α is determined in advance as a constant.
- I is a unit matrix having the same number of rows and columns as the dimension of the color space. In this example, since the color space is the three-dimensional space of Y, U, and V, I is a 3 × 3 unit matrix.
- Δx1i shown in Expression (3) is the maximum change vector of the pixel i selected from the reference image. That is, it is the vector with the largest magnitude among the difference vectors obtained by subtracting the vector indicated by the pixel value of the pixel i from the vectors indicated by the pixel values of the eight surrounding pixels adjacent to the pixel i.
- d shown in Expression (3) is the central-axis radius of the ellipsoid serving as the allowable region. That is, in the example shown in FIG. 2, d is the major axis radius of the ellipsoid. d is the magnitude of the maximum change vector Δx1i of the pixel i. However, the upper limit dmax of the central-axis radius is predetermined as a constant.
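The search for the maximum change vector among the eight adjacent pixels, described above, can be sketched directly (the array layout and function name are assumptions):

```python
import numpy as np

def max_change_vector(img, y, x):
    """Among the eight pixels adjacent to pixel (y, x), find the difference
    vector (neighbor value minus pixel value) with the largest magnitude,
    i.e. the maximum change vector used in Expression (3)."""
    h, w, _ = img.shape
    center = img[y, x].astype(float)
    best = np.zeros_like(center)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue                       # skip the pixel itself
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:    # stay inside the image
                diff = img[ny, nx].astype(float) - center
                if np.linalg.norm(diff) > np.linalg.norm(best):
                    best = diff
    return best

img = np.zeros((3, 3, 3))
img[0, 1] = [30.0, 0.0, 0.0]   # the strongest neighbor of the center pixel
img[2, 2] = [10.0, 0.0, 0.0]
v = max_change_vector(img, 1, 1)
```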
- the positional deviation amount estimation unit 401 uses the pixel value x i of the reference image that is the first input image and the pixel value x k of the target image.
- the amount of positional deviation of the target image with respect to the reference image is calculated with subpixel accuracy.
- Here, k is the order of the target image in the entire sequence of input images. It can be said that the positional deviation amount estimation unit 401 calculates at which position coordinate of the reference image each pixel of the target image exists.
- the misregistration amount estimating means 401 performs calculation using, for example, a projective transformation model that is a deformation model assuming uniform deformation. In the case where the displacement amount of most of the region in the image can be expressed by uniform deformation, it is possible to estimate the displacement amount with a small error for the region.
- the positional deviation amount estimation unit 401 stores the estimated positional deviation amount.
- the pixel difference calculation unit 405 specifies a pixel in the reference image that is closest to the pixel of the k-th target image of interest based on the calculated positional deviation amount.
- the pixel in the target image is j, and the vector represented by the pixel value of the pixel j is xkj .
- a pixel in the reference image determined to be closest to the pixel j is i, and a vector represented by the pixel value is x ki .
- the pixel difference calculation unit 405 associates the pixel i with the pixel j.
- In this way, the pixel difference calculation unit 405 associates each pixel in the target image with a pixel in the reference image and calculates a difference vector.
- The inappropriate region extraction unit 406 uses the parameter A of the pixel in the reference image associated with the pixel and the difference vector Δx(k, i, j) calculated by the pixel difference calculation unit 405 to determine whether the pixel in the target image corresponds to an inappropriate region.
- Specifically, the inappropriate region extraction unit 406 may determine whether or not the position in the color space indicated by the pixel value of the pixel j in the target image is outside the allowable region of the corresponding pixel i. To that end, the inappropriate region extraction unit 406 may calculate C(k, i, j) shown in the following Expression (9) and determine whether or not C(k, i, j) is larger than a threshold. This threshold may be determined in advance.
- If C(k, i, j) is larger than the threshold, the inappropriate region extraction unit 406 determines that the position in the color space indicated by the pixel value of the pixel j in the target image is outside the allowable region of the corresponding pixel i in the reference image, and that the pixel j is a pixel of the inappropriate region. On the other hand, if C(k, i, j) is equal to or smaller than the threshold, it determines that the position in the color space indicated by the pixel value of the pixel j in the target image is within the allowable region of the corresponding pixel i in the reference image, and that the pixel j is a pixel of the appropriate region.
- the inappropriate area extraction unit 406 stores information for specifying pixels in the inappropriate area in the inappropriate area storage unit 407.
- an N ⁇ N diagonal matrix is defined as follows, and the diagonal matrix is stored in the inappropriate area storage unit 407.
- When the inappropriate region extraction unit 406 determines that the j-th pixel of the target image of interest (the k-th input image) belongs to an inappropriate region, the value of the j-th diagonal component of the N × N diagonal matrix is set to 0; when it determines that the pixel belongs to an appropriate region, the value of the j-th diagonal component is set to 1.
- The N × N diagonal matrix thus determined corresponds to the matrix S k in Expression (1).
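The construction of the diagonal matrix described above can be sketched as follows (the flag inputs are an assumed example):

```python
import numpy as np

def build_sk(inappropriate_flags):
    """Build the N x N diagonal matrix S_k of Expression (1): the j-th
    diagonal component is 0 for a pixel judged to be in an inappropriate
    region and 1 otherwise."""
    diag = [0 if bad else 1 for bad in inappropriate_flags]
    return np.diag(diag)

# pixels 1 and 3 of a 5-pixel target image were judged inappropriate
S_k = build_sk([False, True, False, True, False])
```

Multiplying a residual vector by S_k simply zeroes the entries of inappropriate pixels, which is how those pixels drop out of the evaluation function.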
- FIG. 18 is an explanatory diagram illustrating an example of an inappropriate area.
- FIG. 18A shows a reference image
- FIG. 18B shows a target image.
- The houses 1801 and 1802 photographed in the reference image and the target image are stationary objects, and the regions representing the houses follow the transformation of the projective transformation model assumed by the positional deviation amount estimation unit 401. The moon 1803 and 1804 photographed in the reference image and the target image changes position with time, and therefore does not follow the projective transformation model.
- C(k, i, j) calculated for the pixels in the moon region 1804 and the pixels in the region corresponding to the region 1803 therefore takes a large value equal to or greater than a certain level, and those pixels are determined to belong to an inappropriate region.
- pixels in the regions 1805 and 1806 shown in FIG. 18C among the regions in the target image are determined to be inappropriate regions.
- Accordingly, the inappropriate region extraction unit 406 creates the matrix S k in which the diagonal components corresponding to the pixels of the regions 1805 and 1806 in the target image are set to 0 and the diagonal components corresponding to the other pixels in the target image are set to 1, and stores it in the inappropriate region storage unit 407.
- the inappropriate area extraction unit 406 may determine the following values as the j-th diagonal component of this matrix.
- C (k, i, j) is a value obtained by the calculation of Expression (9) for the j-th pixel in the k-th target image.
- σ is a predetermined constant, a parameter indicating how strongly the value of C(k, i, j) is reflected in the diagonal components of the matrix S k.
- The diagonal component obtained by the above calculation using C(k, i, j) and σ represents the reliability that the j-th pixel belongs to an appropriate region.
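The exact formula for the reliability-valued diagonal component is not reproduced above. A plausible form consistent with the description (a value that decreases as C(k, i, j) grows, with σ controlling how strongly C is reflected) is an exponential decay; the sketch below uses that assumption with an illustrative σ.

```python
import math

SIGMA = 2.0   # assumed constant controlling how strongly C affects S_k

def soft_reliability(c, sigma=SIGMA):
    """A plausible soft alternative to the 0/1 diagonal components of S_k:
    a reliability in (0, 1] that decays as C(k, i, j) grows, so pixels that
    fit their allowable ellipsoid well (small C) get a weight near 1."""
    return math.exp(-c / sigma)

w_good = soft_reliability(0.1)    # near 1: very likely an appropriate region
w_bad = soft_reliability(10.0)    # near 0: very likely an inappropriate region
```

A soft weight avoids the abrupt on/off behavior of the 0/1 matrix for pixels whose C value sits right at the threshold.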
- The image restoration unit 403 generates a high-resolution image using the positional deviation estimation amounts calculated by the positional deviation amount estimation unit 401 and the matrix S k determined for each target image.
- the image restoration unit 403 may generate a high-resolution image by obtaining a matrix T that minimizes the evaluation function E [T] in Expression (1).
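Expression (1) is not reproduced above. A common form for such an evaluation function E[T] is a weighted least-squares data term in which S_k zeroes out the contributions of inappropriate pixels, plus a small regularizer; the sketch below solves a toy instance of that assumed form via the normal equations. The warp matrices W_k and all values are illustrative, not taken from the patent.

```python
import numpy as np

def restore(targets, warps, s_mats, lam=0.1):
    """Sketch of minimizing an evaluation function of the assumed form
    E[T] = sum_k ||S_k (x_k - W_k T)||^2 + lam * ||T||^2
    over the high-resolution image T (stacked as a vector), by solving
    the normal equations."""
    n = warps[0].shape[1]
    lhs = lam * np.eye(n)
    rhs = np.zeros(n)
    for x_k, W_k, S_k in zip(targets, warps, s_mats):
        SW = S_k @ W_k                 # rows of inappropriate pixels are zeroed out
        lhs += SW.T @ SW
        rhs += SW.T @ (S_k @ x_k)
    return np.linalg.solve(lhs, rhs)

# two 3-pixel "images" observing the same 3-pixel scene directly (W = I);
# pixel 1 of the second image is flagged inappropriate and thus ignored
W = np.eye(3)
x1 = np.array([10.0, 20.0, 30.0])
x2 = np.array([10.0, 99.0, 30.0])      # 99 is a moving object, excluded by S_2
S1 = np.eye(3)
S2 = np.diag([1.0, 0.0, 1.0])
T = restore([x1, x2], [W, W], [S1, S2], lam=0.0)
```

Because S_2 zeroes the flagged pixel, the restored value at that position comes from the first image alone instead of being pulled toward the moving object's value.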
- the generated high resolution image is sent to the image output means 420 for display.
- When the reference image and the target image are YUV-format images, the 3 × 3 matrix A is calculated using a 3 × 3 unit matrix as the unit matrix I in Expression (3), and the calculation of Expression (9) is performed with a 3 × 3 matrix and three-dimensional vectors.
- More generally, the unit matrix I corresponding to the dimension of the color space is used in Expression (3). That is, when a pixel has r pixel values, the calculation of Expression (3) may be performed using an r × r unit matrix I.
- FIG. 19 is a block diagram showing the minimum configuration of the present invention.
- the image processing system of the present invention includes a positional deviation amount calculation unit 91 and a pixel calculation unit 95.
- The positional deviation amount calculation means 91 (for example, the positional deviation amount estimation unit 401) calculates the amount of positional deviation between the reference image and the target image, the target image being an image for which the presence or absence of a local region that does not follow the assumed change with respect to the reference image is determined.
- The pixel calculation means 95 (for example, the inappropriate region determination units 402, 402a, 402b, 402d, and 402e) identifies the pixel of the reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation, thereby associating the pixels of the target image with the pixels of the reference image, calculates the corresponding pixel difference vector, which is the difference vector between the vectors corresponding to each pair of associated pixels, and determines, based on the corresponding pixel difference vector and the ellipsoid corresponding to the pixel of the reference image in a predetermined space, whether or not the pixel of the target image is a pixel of the local region.
- In the above embodiment, a configuration is disclosed in which the pixel calculation means includes: determination region specifying means (for example, the allowable region calculation units 404 and 404a, or the maximum change pixel calculation unit 1208 and the allowable region calculation unit 1204) for obtaining, as an ellipsoid in a predetermined space (for example, a color space), a determination region (for example, an allowable region) for determining whether or not the pixel of the target image corresponding to a pixel of the reference image is a pixel of the local region; difference calculation means (for example, the pixel difference calculation units 405 and 405a) for associating the pixels of the target image with the pixels of the reference image by identifying the pixel of the reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation between the target image and the reference image, and for calculating the corresponding pixel difference vector, which is the difference vector between the vectors corresponding to each pair of associated pixels; and local region determination means (for example, the inappropriate region extraction unit 406) for determining whether or not each pixel of the target image is a pixel of the local region by determining, using the corresponding pixel difference vector, whether or not the position in the space corresponding to the pixel is outside the determination region of the pixel of the reference image corresponding to that pixel.
- Also disclosed is a configuration in which the determination region specifying means sequentially selects the pixels of the reference image, obtains the inter-adjacent-pixel difference vectors, which are the difference vectors between the vectors corresponding to each selected pixel and its surrounding pixels, identifies the maximum change vector, which is the inter-adjacent-pixel difference vector with the largest magnitude, and uses as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- Also disclosed is a configuration in which the determination region specifying means includes: maximum change pixel specifying means (for example, the maximum change pixel calculation unit 1208) for sequentially selecting the pixels of the reference image and obtaining, among the pixels surrounding each selected pixel, the maximum change pixel for which the magnitude of the inter-adjacent-pixel difference vector, which is the difference vector between the vectors corresponding to the pixels, becomes maximum; and determination region calculation means for, when determining for each target image whether or not the pixels of the target image are pixels of the local region that does not follow the assumed change, sequentially selecting the pixels of the reference image, calculating the inter-adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and using as the determination region an ellipsoid determined by the magnitude and direction of the inter-adjacent-pixel difference vector and the pixel value of the selected pixel.
- Also disclosed is a configuration in which the determination region specifying means sequentially selects the pixels of the target image, obtains the inter-adjacent-pixel difference vectors, which are the difference vectors between the vectors corresponding to each selected pixel and its surrounding pixels, identifies the maximum change vector, which is the inter-adjacent-pixel difference vector with the largest magnitude, and uses as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- Also disclosed is a configuration in which the pixel calculation means (for example, the inappropriate region determination unit 402e) includes: difference calculation means (for example, the pixel difference calculation unit 405) for associating the pixels of the target image with the pixels of the reference image by identifying the pixel of the reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation between the target image and the reference image, and for calculating the corresponding pixel difference vector, which is the difference vector between the vectors corresponding to each pair of associated pixels; and local region determination means (for example, the inappropriate region extraction unit 500) for calculating, for each pixel of the target image, a determination index value representing the degree to which the positions in a predetermined space corresponding to a plurality of pixels in the target image centered on that pixel exist inside or outside the ellipsoids in the space corresponding to the pixels of the reference image corresponding to the plurality of pixels, and for determining, based on the determination index value, whether or not the pixel of the target image is a pixel of the local region.
- Also disclosed is a configuration in which the local region determination means (for example, the inappropriate region extraction unit 500) sequentially selects the pixels of the reference image, obtains the inter-adjacent-pixel difference vectors, which are the difference vectors between the vectors corresponding to each selected pixel and its surrounding pixels, identifies the maximum change vector, which is the inter-adjacent-pixel difference vector with the largest magnitude, and defines an ellipsoid using the maximum change vector.
- Also disclosed is a configuration in which the local region determination means (for example, the inappropriate region extraction unit 500) sequentially selects the pixels of the reference image and obtains the maximum change pixel for which the magnitude of the inter-adjacent-pixel difference vector, which is the difference vector between the vectors corresponding to each selected pixel and its surrounding pixels, becomes maximum; then, when determining whether or not the pixels of the target image are pixels of the local region, it sequentially selects the pixels of the reference image, calculates the inter-adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and defines an ellipsoid from the inter-adjacent-pixel difference vector.
- Also disclosed is a configuration including image generation means (for example, the image restoration units 403 and 403b, or the blend image generation unit 1003) that generates one image from the reference image and the target image in accordance with information indicating where the pixels of the local region that does not follow the assumed change are located.
- Also disclosed is a configuration in which the image generation means (for example, the image restoration units 403 and 403b) generates, from the reference image and the target image, a high-resolution image having more pixels than the reference image and the target image. According to such a configuration, since the local region that does not follow the assumed change is determined with high accuracy, a reduction in the image-quality improvement effect when generating a high-quality image can be prevented.
- Also disclosed is a configuration including pixel value replacement means (for example, the use image generation unit 807) that replaces the pixel value of a pixel of the target image determined to be a pixel of the local region that does not follow the assumed change with the pixel value of the corresponding pixel of the reference image, in which the image generation means generates a high-resolution image from the reference image and the target image whose pixel values have been replaced by the pixel value replacement means. According to such a configuration, a high-quality image can be generated with a smaller amount of memory.
- Also disclosed is a configuration in which the image generation means calculates, from the reference image and the target image, the average value of the pixel values of the corresponding pixels of the reference image and the target image, excludes the pixels of the local region that does not follow the assumed change from the average-value calculation, and generates a blend image having the calculated average values as its pixel values. According to such a configuration, since the local region that does not follow the assumed change is determined with high accuracy, the image quality of the blend image can be improved.
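The averaging described above, with inappropriate-region pixels excluded per target image, can be sketched as follows (the mask convention, True meaning inappropriate, is an assumption):

```python
import numpy as np

def blend(ref, targets, masks):
    """Blend-image sketch: per pixel, average the reference image with those
    target images whose pixel is not flagged as an inappropriate region
    (mask True = inappropriate, excluded from the average)."""
    total = ref.astype(float).copy()
    count = np.ones(ref.shape, dtype=float)   # the reference always contributes
    for tgt, mask in zip(targets, masks):
        ok = ~mask
        total[ok] += tgt[ok]
        count[ok] += 1.0
    return total / count

ref = np.array([10.0, 10.0])
tgt = np.array([20.0, 90.0])     # 90 belongs to a moving object
mask = np.array([False, True])   # the second pixel is inappropriate
b = blend(ref, [tgt], [mask])
```

Excluding flagged pixels from the average prevents moving objects from leaving ghost-like artifacts in the blended result.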
- Also disclosed is a configuration including pixel value replacement means (for example, the use image generation unit 807) that replaces the pixel value of a pixel of the target image determined to be a pixel of the local region that does not follow the assumed change with the pixel value of the corresponding pixel of the reference image, in which the image generation means generates a blend image whose pixel values are the averages of the pixel values of the reference image and the target image whose pixel values have been replaced by the pixel value replacement means. According to such a configuration, a blend image with good image quality can be generated with a smaller amount of memory.
- The above embodiment also discloses a configuration that includes reference image high-resolution means (for example, the reference high-resolution image generation unit 608) that interpolates the reference image to increase its resolution, in which the difference calculation means associates the pixels of the target image with the pixels of the high-resolution reference image by identifying the pixel of the high-resolution reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation, and calculates the corresponding pixel difference vector between the vectors corresponding to each pair of associated pixels.
- The above embodiment also discloses a configuration that includes target image high-resolution means that interpolates the target image to increase its resolution, in which the difference calculation means identifies, from the pixels of the high-resolution target image corrected so as to eliminate the positional deviation, the pixel closest to each pixel position of the reference image, associates the pixels, and calculates the corresponding pixel difference vectors.
- Further disclosed is an image processing system comprising: positional deviation amount calculation means for calculating the amount of positional deviation between a reference image and a target image, the target image being an image for which the presence or absence of a local region that does not follow the assumed change with respect to the reference image is determined; and pixel calculation means for identifying the pixel of the reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation, thereby associating the pixels of the target image with the pixels of the reference image, calculating the corresponding pixel difference vector, which is the difference vector between the vectors corresponding to each pair of associated pixels, and determining, based on the corresponding pixel difference vector and the ellipsoid corresponding to the pixel of the reference image in a predetermined space, whether or not the pixel of the target image is a pixel of the local region.
- Also disclosed is an image processing system in which the pixel calculation means includes: determination region specifying means for obtaining, as an ellipsoid in a predetermined space, a determination region for determining whether or not the pixel of the target image corresponding to a pixel of the reference image is a pixel of the local region; difference calculation means for associating the pixels of the target image with the pixels of the reference image by identifying the pixel of the reference image closest to each pixel position of the target image when the target image is corrected so as to eliminate the positional deviation between the target image and the reference image, and for calculating the corresponding pixel difference vector, which is the difference vector between the vectors corresponding to each pair of associated pixels; and local region determination means for determining whether or not each pixel of the target image is a pixel of the local region by determining, using the corresponding pixel difference vector, whether or not the position in the space corresponding to the pixel is outside the determination region of the pixel of the reference image corresponding to that pixel.
- Also disclosed is an image processing system in which the determination region specifying means sequentially selects the pixels of the reference image, obtains the inter-adjacent-pixel difference vectors, which are the difference vectors between the vectors corresponding to each selected pixel and its surrounding pixels, identifies the maximum change vector, and uses as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- The determination region specifying unit includes: a maximum change pixel specifying unit that selects the pixels of the reference image in order and finds, among the pixels surrounding each selected pixel, the maximum change pixel, for which the adjacent-pixel difference vector with respect to the selected pixel has the largest magnitude; and a determination region calculation unit that selects the pixels of the reference image in order, calculates the adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and takes as the determination region an ellipsoid determined by the magnitude and direction of that difference vector and the pixel value of the selected pixel.
- The determination region specifying unit selects the pixels of the target image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, identifies the maximum change vector of largest magnitude, and takes as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- The pixel calculation unit includes: a difference calculation unit that associates each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation between the target image and the reference image, and calculates the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels; and a local region determination unit that, for each pixel of the target image, calculates, using the corresponding-pixel difference vectors, a reference index value expressing the degree to which the positions in a predetermined space corresponding to a plurality of pixels of the target image centered on that pixel lie inside or outside the ellipsoids in the space corresponding to the respective pixels of the reference image, and determines, based on the reference index value, whether the pixel of the target image is a pixel of the local region.
- The local region determination unit selects the pixels of the reference image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, identifies the maximum change vector of largest magnitude, and determines the ellipsoid from the maximum change vector.
- The local region determination unit selects the pixels of the reference image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, and identifies the maximum change pixel for which that difference vector has the largest magnitude; when judging the pixels of the target image, it again selects the pixels of the reference image in order, calculates the adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and determines the ellipsoid from that difference vector.
- An image processing system including an image generation unit that generates one image from the reference image and the target image in accordance with information indicating which pixels belong to the local region not following the assumed change.
- A pixel value replacement unit replaces the pixel value of each pixel of the target image determined to be a pixel of the local region not following the assumed change with the pixel value of the corresponding pixel of the reference image, and the image generation unit generates a high-resolution image from the reference image and the target image whose pixel values have been replaced by the pixel value replacement unit.
- The image generation unit calculates, from the reference image and the target image, the average of the pixel values of corresponding pixels of the two images, excludes pixels of the local region not following the assumed change from the calculation of the average, and generates a blend image whose pixel values are the calculated averages.
- A pixel value replacement unit replaces the pixel value of each pixel of the target image determined to be a pixel of the local region not following the assumed change with the pixel value of the corresponding pixel of the reference image, and the image generation unit generates, from the reference image and the target image whose pixel values have been replaced by the pixel value replacement unit, a blend image whose pixel values are the averages of the pixel values of corresponding pixels of the two images.
- A reference image resolution enhancement unit interpolates the reference image to increase its resolution, and the difference calculation unit identifies the pixel of the resolution-enhanced reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation.
- A target image resolution enhancement unit interpolates the target image to increase its resolution, and the difference calculation unit identifies, among the pixels of the image obtained by correcting the resolution-enhanced target image so as to cancel the positional deviation, the pixel closest to each pixel position of the reference image.
- A determination region for judging whether the pixel of the target image corresponding to a pixel of the reference image is a pixel of the local region is obtained as an ellipsoid in a predetermined space. By identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation between the target image and the reference image, each pixel of the target image is associated with a pixel of the reference image, and the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels is calculated. For each pixel of the target image, whether the position in the space corresponding to that pixel lies outside the determination region of the corresponding pixel of the reference image is judged using the corresponding-pixel difference vector, thereby determining whether the pixel of the target image is a pixel of the local region.
- The pixels of the reference image are selected in order, and among the pixels surrounding each selected pixel the adjacent-pixel difference vectors with respect to the selected pixel are obtained; an ellipsoid is determined by the magnitude and direction of the adjacent-pixel difference vector and the pixel value of the selected pixel. Each pixel of the target image is associated with a pixel of the reference image, and the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels is calculated; for each pixel of the target image, the judgment depends on whether the positions in a predetermined space corresponding to a plurality of pixels of the target image centered on that pixel lie inside or outside the ellipsoids in the space corresponding to the respective pixels of the reference image.
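The maximum change vector step, which fixes the size and orientation of the ellipsoid, can be sketched as follows. The helper name and the choice of an 8-neighborhood are illustrative assumptions; the disclosure only requires the difference vector of largest magnitude among surrounding pixels:

```python
import numpy as np

def max_change_vector(img, y, x):
    """Return the difference vector (neighbor - center) of largest
    magnitude among the 8 neighbors of pixel (y, x).  `img` is an
    H x W x C array whose last axis is the pixel-value vector
    (e.g. RGB).  The ellipsoid's axes are then set from this
    vector's magnitude and direction."""
    h, w = img.shape[:2]
    center = img[y, x].astype(np.float64)
    best = None
    for ny in range(max(0, y - 1), min(h, y + 2)):
        for nx in range(max(0, x - 1), min(w, x + 2)):
            if (ny, nx) == (y, x):
                continue  # skip the center pixel itself
            d = img[ny, nx].astype(np.float64) - center
            if best is None or np.linalg.norm(d) > np.linalg.norm(best):
                best = d
    return best
```

Because the maximum change pixel depends only on the reference image, it can be computed once per reference pixel and reused for every target image, as the corresponding claims note.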
- An image processing program for causing a computer to execute: a positional deviation amount calculation process of calculating the amount of positional deviation between a reference image and a target image, the target image being an image for which the presence or absence of a local region not following an assumed change with respect to the reference image is determined; and a pixel calculation process of associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation, calculating the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels, and determining, based on the corresponding-pixel difference vector and an ellipsoid in a predetermined space corresponding to the pixel of the reference image, whether the pixel of the target image is a pixel of the local region.
- The image processing program according to Note 16, which causes the computer to execute, in the pixel calculation process: a determination region specifying process of obtaining, as an ellipsoid in a predetermined space, a determination region for judging whether the pixel of the target image corresponding to a pixel of the reference image is a pixel of the local region; a difference calculation process of associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation between the two images, and calculating the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels; and a local region determination process of determining whether each pixel of the target image is a pixel of the local region by judging, using the corresponding-pixel difference vector, whether the position in the space corresponding to that pixel lies outside the determination region of the corresponding pixel of the reference image.
- The image processing program according to Note 16, which causes the computer to execute a local region determination process of calculating, using the corresponding-pixel difference vectors, a reference index value indicating the degree of presence inside or outside the ellipsoids, and of determining, based on the reference index value, whether the pixel of the target image is a pixel of the local region.
- Supplementary note 24: the image processing program according to any one of Supplementary notes 16 to 23, which causes the computer to execute an image generation process of generating one image from the reference image and the target image in accordance with information indicating which pixels belong to the local region not following the assumed change.
- The image processing program which causes the computer to calculate, from the reference image and the target image, the average of the pixel values of corresponding pixels of the two images, excluding pixels of the local region not following the assumed change from the calculation.
- A pixel value replacement process replaces the pixel value of each pixel of the target image determined to be a pixel of the local region not following the assumed change with the pixel value of the corresponding pixel of the reference image, and a blend image whose pixel values are the averages of the pixel values of corresponding pixels of the reference image and the target image is generated from the reference image and the target image whose pixel values have been replaced.
- The image processing program according to any one of Supplementary notes 16 to 28, in which each pixel of the target image is associated with a pixel of the resolution-enhanced reference image and the corresponding-pixel difference vector between the vectors corresponding to the associated pixels is calculated.
- The image processing program according to any one of Supplementary notes 16 to 28, which causes the computer to execute a target image resolution enhancement process of interpolating the target image to increase its resolution, and in which the difference calculation process identifies, among the pixels of the image obtained by correcting the resolution-enhanced target image so as to cancel the positional deviation, the pixel closest to each pixel position of the reference image, associates the pixels of the resolution-enhanced target image with the pixels of the reference image, and calculates the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels.
- The present invention is applicable to processes that generate a high-resolution image or a blend image from a plurality of images.
- 401 misregistration amount estimation means
- 403 image restoration means
- 404 allowable region calculation means
- 405 image difference calculation means
- 406 inappropriate region extraction means
- 407 inappropriate region storage means
- 411 reference image input means
- 412 target image input means
- 608 reference high-resolution image generation means
- 807 used-image generation means
- 1003 blend image generation means
- 1204 allowable region calculation means
- 1208 maximum change pixel calculation means
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Editing Of Facsimile Originals (AREA)
- Image Analysis (AREA)
Abstract
Description
Vtest < Vmax + ΔVth
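This acceptance test reads: the observed change magnitude Vtest must stay below the largest assumed change Vmax plus the margin ΔVth. A minimal sketch, with illustrative names and the vectors reduced to their magnitudes:

```python
import numpy as np

def follows_assumed_change(v_test, v_max, dv_th):
    """Accept the target pixel when |v_test| < |v_max| + dv_th, i.e.
    the observed corresponding-pixel change does not exceed the
    largest assumed change plus a threshold margin."""
    return float(np.linalg.norm(v_test)) < float(np.linalg.norm(v_max)) + dv_th
```

Pixels failing this test are candidates for the inappropriate region, i.e. a local region not following the assumed change.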
FIG. 1 is a block diagram showing an example of the image processing system according to the first embodiment of the present invention. The image processing system of this embodiment comprises a computer 400 (central processing unit; processor; data processing device) operating under program control, image input means 410, and image output means 420. The computer 400 includes misregistration amount estimation means 401, inappropriate region determination means 402, and image restoration means 403. The inappropriate region determination means 402 further includes allowable region calculation means 404, image difference calculation means 405, inappropriate region extraction means 406, and inappropriate region storage means 407.
Here, the image serving as the basis of the misregistration amount estimation is referred to as the reference image, and an image for which the displacement of its pixels relative to the reference image is calculated is referred to as a target image. Of the K images input to the image input means 410, one becomes the reference image and the remaining images are target images. Any of the K input images may be chosen as the reference image; in this embodiment, the first image (the image with k = 1) is taken as the reference image by way of example.
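The reference/target split described above amounts to the following trivial sketch (the function name is illustrative):

```python
def split_reference_and_targets(images):
    """Split K input images into one reference image and K-1 target
    images.  Any image could serve as reference; following this
    embodiment, the first image (k = 1) is used."""
    return images[0], images[1:]
```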
Bruce D. Lucas, Takeo Kanade, "An Iterative Registration Technique with an Application to Stereo Vision", Proceedings of Imaging Understanding Workshop, 1981, pp. 121-130
FIG. 6 is a block diagram showing an example of the image processing system according to the second embodiment of the present invention. Constituent elements similar to those of the first embodiment are denoted by the same reference numerals as in FIG. 1, and detailed description of them is omitted. The image processing system of the second embodiment comprises a computer 400a (central processing unit; processor; data processing device) operating under program control, image input means 410, and image output means 420. The computer 400a includes misregistration amount estimation means 401, inappropriate region determination means 402a, and image restoration means 403.
FIG. 9 is a block diagram showing an example of the image processing system according to the third embodiment of the present invention. Constituent elements similar to those of the first embodiment are denoted by the same reference numerals as in FIG. 1, and detailed description of them is omitted. The image processing system of the third embodiment comprises a computer 400b (central processing unit; processor; data processing device) operating under program control, image input means 410, and image output means 420. The computer 400b includes misregistration amount estimation means 401, inappropriate region determination means 402b, and image restoration means 403b.
The first to third embodiments described configurations that generate a high-resolution image having more pixels than the input images; the fourth embodiment describes a configuration that generates a blend image with the same resolution as each input image. A blend image is an image obtained by computing, for each pixel, the average of the pixel values of corresponding pixels across a plurality of images.
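The per-pixel averaging that defines a blend image, with an optional mask excluding inappropriate-region pixels from the average (as in the corresponding claim), can be sketched as follows; names and the boolean-mask interface are illustrative:

```python
import numpy as np

def blend_image(images, exclude_masks=None):
    """Per-pixel average of corresponding pixel values across a list
    of aligned, same-size images.  If exclude_masks is given (one
    boolean H x W array per image, True = pixel of an inappropriate
    region), flagged pixels are left out of the average."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    if exclude_masks is None:
        return stack.mean(axis=0)
    weights = (~np.stack(exclude_masks)).astype(np.float64)
    # Guard against a division by zero where every image is excluded.
    return (stack * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1.0)
```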
FIG. 15 is a block diagram showing an example of the image processing system according to the fifth embodiment of the present invention. Constituent elements similar to those of the first and fourth embodiments are denoted by the same reference numerals as in FIGS. 1 and 12, and detailed description of them is omitted. The image processing system of the fifth embodiment comprises a computer 400d (central processing unit; processor; data processing device) operating under program control, image input means 410, and image output means 420. The computer 400d includes misregistration amount estimation means 401, inappropriate region determination means 402d, and blend image generation means 1003. The image input means 410, image output means 420, misregistration amount estimation means 401, and blend image generation means 1003 are the same as in the fourth embodiment.
In each of the first to fifth embodiments, whether a pixel of the target image is a pixel of an inappropriate region is determined using the ellipsoid associated with the corresponding pixel of the reference image. Alternatively, when judging whether a given pixel of the target image is a pixel of an inappropriate region, a value may be computed that expresses the degree to which the positions in color space of a plurality of pixels of the target image lie inside or outside the ellipsoids in color space of the corresponding pixels of the reference image, and the judgment for that one pixel may be made on the basis of this value. The sixth embodiment, described below, performs the determination for each pixel of the target image in this way.
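The neighborhood-based index of the sixth embodiment can be sketched as a fraction of "inside" pixels. The scalar test used here (magnitude against maximum change magnitude plus a margin) is a simplification of the ellipsoid membership test, and the names are illustrative:

```python
import numpy as np

def reference_index(diff_vectors, max_change_vectors, dv_th):
    """Fraction of a neighborhood of target pixels whose
    corresponding-pixel difference vector falls inside the allowable
    region, approximated here by |d| < |v_max| + dv_th.  The center
    pixel is judged from this degree-of-inside/outside value."""
    inside = [np.linalg.norm(d) < np.linalg.norm(v) + dv_th
              for d, v in zip(diff_vectors, max_change_vectors)]
    return sum(inside) / len(inside)
```

A low index value for the neighborhood then marks the center pixel as belonging to a local region not following the assumed change.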
403 image restoration means
404 allowable region calculation means
405 image difference calculation means
406 inappropriate region extraction means
407 inappropriate region storage means
411 reference image input means
412 target image input means
608 reference high-resolution image generation means
807 used-image generation means
1003 blend image generation means
1204 allowable region calculation means
1208 maximum change pixel calculation means
Claims (17)
- An image processing system comprising:
positional deviation amount calculation means for calculating the amount of positional deviation between a reference image and a target image, the target image being an image for which the presence or absence of a local region not following an assumed change with respect to the reference image is determined; and
pixel calculation means for associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation, calculating a corresponding-pixel difference vector, which is the difference between the vectors corresponding to each pair of associated pixels, and determining, based on the corresponding-pixel difference vector and an ellipsoid in a predetermined space corresponding to the pixel of the reference image, whether the pixel of the target image is a pixel of the local region.
- The image processing system according to claim 1, wherein the pixel calculation means includes:
determination region specifying means for obtaining, as an ellipsoid in the predetermined space, a determination region for judging whether the pixel of the target image corresponding to a pixel of the reference image is a pixel of the local region;
difference calculation means for associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation between the target image and the reference image, and calculating the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels; and
local region determination means for determining, for each pixel of the target image, whether the pixel is a pixel of the local region by judging, using the corresponding-pixel difference vector, whether the position in the space corresponding to that pixel lies outside the determination region of the corresponding pixel of the reference image.
- The image processing system according to claim 2, wherein, when the reference image is input, the determination region specifying means selects the pixels of the reference image in order, obtains, between each selected pixel and its surrounding pixels, adjacent-pixel difference vectors, which are difference vectors between the vectors corresponding to the pixels, identifies the maximum change vector, i.e. the adjacent-pixel difference vector of largest magnitude, and takes as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- The image processing system according to claim 2, wherein the determination region specifying means includes:
maximum change pixel specifying means for, when the reference image is input, selecting the pixels of the reference image in order and finding, among the pixels surrounding each selected pixel, the maximum change pixel, for which the adjacent-pixel difference vector with respect to the selected pixel has the largest magnitude; and
determination region calculation means for, when the local region determination means judges for each target image whether its pixels belong to a local region not following the assumed change, selecting the pixels of the reference image in order, calculating the adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and taking as the determination region an ellipsoid determined by the magnitude and direction of that difference vector and the pixel value of the selected pixel.
- The image processing system according to claim 2, wherein, when the target image is input, the determination region specifying means selects the pixels of the target image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, identifies the maximum change vector of largest magnitude, and takes as the determination region an ellipsoid determined by the magnitude and direction of the maximum change vector and the pixel value of the selected pixel.
- The image processing system according to claim 1, wherein the pixel calculation means includes:
difference calculation means for associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation between the target image and the reference image, and calculating the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels; and
local region determination means for, for each pixel of the target image, calculating, using the corresponding-pixel difference vectors, a reference index value expressing the degree to which the positions in the predetermined space corresponding to a plurality of pixels of the target image centered on that pixel lie inside or outside the ellipsoids in the space corresponding to the respective pixels of the reference image, and determining, based on the reference index value, whether the pixel of the target image is a pixel of the local region.
- The image processing system according to claim 6, wherein, when the reference image is input, the local region determination means selects the pixels of the reference image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, identifies the maximum change vector of largest magnitude, and determines the ellipsoid from the maximum change vector.
- The image processing system according to claim 6, wherein, when the reference image is input, the local region determination means selects the pixels of the reference image in order, obtains the adjacent-pixel difference vectors between each selected pixel and its surrounding pixels, and identifies the maximum change pixel for which that difference vector has the largest magnitude; and, when judging whether a pixel of the target image is a pixel of the local region, selects the pixels of the reference image in order, calculates the adjacent-pixel difference vector from each selected pixel and its maximum change pixel, and determines the ellipsoid from that difference vector.
- The image processing system according to any one of claims 1 to 8, further comprising image generation means for generating one image from the reference image and the target image in accordance with information indicating which pixels belong to the local region not following the assumed change.
- The image processing system according to claim 9, wherein the image generation means generates, from the reference image and the target image, a high-resolution image having more pixels than the reference image and the target image.
- The image processing system according to claim 10, further comprising pixel value replacement means for replacing the pixel value of each pixel of the target image determined to be a pixel of the local region not following the assumed change with the pixel value of the corresponding pixel of the reference image, wherein the image generation means generates the high-resolution image from the reference image and the target image whose pixel values have been replaced by the pixel value replacement means.
- The image processing system according to claim 9, wherein the image generation means calculates, from the reference image and the target image, the average of the pixel values of corresponding pixels of the reference image and the target image, excludes pixels of the local region not following the assumed change from the calculation of the average, and generates a blend image whose pixel values are the calculated averages.
- The image processing system according to claim 9, further comprising pixel value replacement means for replacing the pixel value of each pixel of the target image determined to be a pixel of the local region not following the assumed change with the pixel value of the corresponding pixel of the reference image, wherein the image generation means generates, from the reference image and the target image whose pixel values have been replaced by the pixel value replacement means, a blend image whose pixel values are the averages of the pixel values of corresponding pixels of the reference image and the target image.
- The image processing system according to any one of claims 1 to 13, further comprising reference image resolution enhancement means for interpolating the reference image to increase its resolution, wherein the difference calculation means associates each pixel of the target image with a pixel of the resolution-enhanced reference image by identifying the pixel of the resolution-enhanced reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation, and calculates the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels.
- The image processing system according to any one of claims 1 to 13, further comprising target image resolution enhancement means for interpolating the target image to increase its resolution, wherein the difference calculation means associates each pixel of the resolution-enhanced target image with a pixel of the reference image by identifying, among the pixels of the image obtained by correcting the resolution-enhanced target image so as to cancel the positional deviation, the pixel closest to each pixel position of the reference image, and calculates the corresponding-pixel difference vector between the vectors corresponding to each pair of associated pixels.
- An image processing method comprising: calculating the amount of positional deviation between a reference image and a target image, the target image being an image for which the presence or absence of a local region not following an assumed change with respect to the reference image is determined; associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation; calculating a corresponding-pixel difference vector, which is the difference between the vectors corresponding to each pair of associated pixels; and determining, based on the corresponding-pixel difference vector and an ellipsoid in a predetermined space corresponding to the pixel of the reference image, whether the pixel of the target image is a pixel of the local region.
- An image processing program for causing a computer to execute: a positional deviation amount calculation process of calculating the amount of positional deviation between a reference image and a target image, the target image being an image for which the presence or absence of a local region not following an assumed change with respect to the reference image is determined; and a pixel calculation process of associating each pixel of the target image with a pixel of the reference image by identifying the pixel of the reference image closest to the pixel position of the target image when the target image is corrected so as to cancel the positional deviation, calculating a corresponding-pixel difference vector, which is the difference between the vectors corresponding to each pair of associated pixels, and determining, based on the corresponding-pixel difference vector and an ellipsoid in a predetermined space corresponding to the pixel of the reference image, whether the pixel of the target image is a pixel of the local region.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP10743589.3A EP2400461A4 (en) | 2009-02-19 | 2010-02-19 | Image processing system, image processing method, and image processing program |
CN2010800086981A CN102326183A (zh) | 2009-02-19 | 2010-02-19 | 图像处理系统、图像处理方法和图像处理程序 |
US13/201,629 US8903195B2 (en) | 2009-02-19 | 2010-02-19 | Specification of an area where a relationship of pixels between images becomes inappropriate |
JP2011500530A JP5500163B2 (ja) | 2009-02-19 | 2010-02-19 | 画像処理システム、画像処理方法および画像処理プログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009-036826 | 2009-02-19 | ||
JP2009036826 | 2009-02-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010095460A1 true WO2010095460A1 (ja) | 2010-08-26 |
Family
ID=42633747
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/001109 WO2010095460A1 (ja) | 2009-02-19 | 2010-02-19 | 画像処理システム、画像処理方法および画像処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US8903195B2 (ja) |
EP (1) | EP2400461A4 (ja) |
JP (1) | JP5500163B2 (ja) |
CN (1) | CN102326183A (ja) |
WO (1) | WO2010095460A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109840890A (zh) * | 2019-01-31 | 2019-06-04 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
WO2019221149A1 (ja) * | 2018-05-15 | 2019-11-21 | 日本電信電話株式会社 | テンプレート姿勢推定装置、方法、及びプログラム |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012069833A1 (en) * | 2010-11-24 | 2012-05-31 | Blackford Analysis Limited | Process and apparatus for data registration |
WO2014178241A1 (ja) * | 2013-05-02 | 2014-11-06 | コニカミノルタ株式会社 | 画像処理装置、画像処理方法および画像処理プログラム |
JP2015012304A (ja) * | 2013-06-26 | 2015-01-19 | ソニー株式会社 | 画像処理装置、画像処理方法、及び、プログラム |
CN107025644A (zh) * | 2017-02-10 | 2017-08-08 | 马瑞强 | 影像降噪的图像位移量补差方法 |
CN107742280A (zh) * | 2017-11-02 | 2018-02-27 | 浙江大华技术股份有限公司 | 一种图像锐化方法及装置 |
JP6975070B2 (ja) * | 2018-02-27 | 2021-12-01 | シャープ株式会社 | 画像処理装置、画像処理方法、及び画像処理プログラム |
CN110349415B (zh) * | 2019-06-26 | 2021-08-20 | 江西理工大学 | 一种基于多尺度变换的行车速度测量方法 |
CN111353470B (zh) * | 2020-03-13 | 2023-08-01 | 北京字节跳动网络技术有限公司 | 图像的处理方法、装置、可读介质和电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000188680A (ja) | 1998-12-21 | 2000-07-04 | Sharp Corp | 高解像度画像の生成方法及びシステム |
JP2005130443A (ja) | 2003-09-30 | 2005-05-19 | Seiko Epson Corp | 低解像度の複数の画像に基づく高解像度の画像の生成 |
JP2007226656A (ja) * | 2006-02-24 | 2007-09-06 | Toshiba Corp | 画像の高解像度化方法及び装置 |
JP2008139074A (ja) * | 2006-11-30 | 2008-06-19 | Rozefu Technol:Kk | 画像の欠陥検出方法 |
JP2009036826A (ja) | 2007-07-31 | 2009-02-19 | Kyocera Mita Corp | 画像形成装置 |
JP2009140393A (ja) * | 2007-12-10 | 2009-06-25 | Hitachi Ltd | 映像処理装置、映像表示装置、映像処理方法および映像表示方法 |
JP2009217658A (ja) * | 2008-03-12 | 2009-09-24 | Hitachi Ltd | 表示装置および画像処理回路 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2336054B (en) * | 1998-04-01 | 2002-10-16 | Discreet Logic Inc | Processing image data |
US6956576B1 (en) * | 2000-05-16 | 2005-10-18 | Sun Microsystems, Inc. | Graphics system using sample masks for motion blur, depth of field, and transparency |
JP2002223366A (ja) * | 2001-01-26 | 2002-08-09 | Canon Inc | 画像処理装置及びその方法、及び画像処理システム |
US20020176604A1 (en) * | 2001-04-16 | 2002-11-28 | Chandra Shekhar | Systems and methods for determining eye glances |
US6970595B1 (en) * | 2002-03-12 | 2005-11-29 | Sonic Solutions, Inc. | Method and system for chroma key masking |
JP4375781B2 (ja) * | 2002-11-29 | 2009-12-02 | 株式会社リコー | 画像処理装置および画像処理方法並びにプログラムおよび記録媒体 |
CN100351868C (zh) * | 2003-04-17 | 2007-11-28 | 精工爱普生株式会社 | 利用多帧图像的静止图像的生成 |
US20050157949A1 (en) * | 2003-09-30 | 2005-07-21 | Seiji Aiso | Generation of still image |
WO2005046221A1 (ja) * | 2003-11-11 | 2005-05-19 | Seiko Epson Corporation | 画像処理装置、画像処理方法、そのプログラムおよび記録媒体 |
DE602005002576T2 (de) * | 2004-03-02 | 2008-06-19 | Seiko Epson Corp. | Erzeugung einer Bilddatei mit zusätzlichen Informationen zur Weiterverarbeitung aus einer Zeitfolge von Quellbilddaten |
JP4513357B2 (ja) * | 2004-03-02 | 2010-07-28 | セイコーエプソン株式会社 | 画像の出力に用いる出力装置に適した画像データの生成 |
JP4367264B2 (ja) * | 2004-07-12 | 2009-11-18 | セイコーエプソン株式会社 | 画像処理装置、画像処理方法、および、画像処理プログラム |
US7715620B2 (en) * | 2006-01-27 | 2010-05-11 | Lockheed Martin Corporation | Color form dropout using dynamic geometric solid thresholding |
US7808508B2 (en) * | 2006-07-21 | 2010-10-05 | Degudent Gmbh | Dental color system and method to produce dental prosthesis colors |
CN100530222C (zh) * | 2007-10-18 | 2009-08-19 | 清华大学 | 图像匹配方法 |
US8428351B2 (en) * | 2008-12-24 | 2013-04-23 | Brother Kogyo Kabushiki Kaisha | Image processing device |
-
2010
- 2010-02-19 WO PCT/JP2010/001109 patent/WO2010095460A1/ja active Application Filing
- 2010-02-19 EP EP10743589.3A patent/EP2400461A4/en active Pending
- 2010-02-19 US US13/201,629 patent/US8903195B2/en active Active
- 2010-02-19 CN CN2010800086981A patent/CN102326183A/zh active Pending
- 2010-02-19 JP JP2011500530A patent/JP5500163B2/ja active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000188680A (ja) | 1998-12-21 | 2000-07-04 | Sharp Corp | 高解像度画像の生成方法及びシステム |
JP2005130443A (ja) | 2003-09-30 | 2005-05-19 | Seiko Epson Corp | 低解像度の複数の画像に基づく高解像度の画像の生成 |
JP2007226656A (ja) * | 2006-02-24 | 2007-09-06 | Toshiba Corp | 画像の高解像度化方法及び装置 |
JP2008139074A (ja) * | 2006-11-30 | 2008-06-19 | Rozefu Technol:Kk | 画像の欠陥検出方法 |
JP2009036826A (ja) | 2007-07-31 | 2009-02-19 | Kyocera Mita Corp | 画像形成装置 |
JP2009140393A (ja) * | 2007-12-10 | 2009-06-25 | Hitachi Ltd | 映像処理装置、映像表示装置、映像処理方法および映像表示方法 |
JP2009217658A (ja) * | 2008-03-12 | 2009-09-24 | Hitachi Ltd | 表示装置および画像処理回路 |
Non-Patent Citations (4)
Title |
---|
BRUCE D. LUCAS, TAKEO KANADE: "An Iterative Registration Technique with an Application to Stereo Vision", PROCEEDINGS OF IMAGING UNDERSTANDING WORKSHOP, 1981, pages 121 - 130 |
See also references of EP2400461A4 * |
SUNG CHEOL PARK, MIN KYU PARK, MOON GI KANG: "Super-resolution image reconstruction: a technical overview", SIGNAL PROCESSING MAGAZINE, IEEE, vol. 20, no. 3, May 2003 (2003-05-01), pages 21 - 36, XP011097476, DOI: doi:10.1109/MSP.2003.1203207 |
ZORAN A. IVANOVSKI, LJUPCHO PANOVSKI, LINA J. KARAM: "Robust super-resolution based on pixel-level selectivity", PROC. SPIE, vol. 6077, 2006, pages 607707 |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019221149A1 (ja) * | 2018-05-15 | 2019-11-21 | 日本電信電話株式会社 | テンプレート姿勢推定装置、方法、及びプログラム |
US11651515B2 (en) | 2018-05-15 | 2023-05-16 | Nippon Telegraph And Telephone Corporation | Template orientation estimation device, method, and program |
CN109840890A (zh) * | 2019-01-31 | 2019-06-04 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN109840890B (zh) * | 2019-01-31 | 2023-06-09 | 深圳市商汤科技有限公司 | 图像处理方法及装置、电子设备和存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP5500163B2 (ja) | 2014-05-21 |
EP2400461A4 (en) | 2017-08-02 |
JPWO2010095460A1 (ja) | 2012-08-23 |
CN102326183A (zh) | 2012-01-18 |
US8903195B2 (en) | 2014-12-02 |
EP2400461A1 (en) | 2011-12-28 |
US20110299795A1 (en) | 2011-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5500163B2 (ja) | 画像処理システム、画像処理方法および画像処理プログラム | |
CN110827200A (zh) | 一种图像超分重建方法、图像超分重建装置及移动终端 | |
JP5645051B2 (ja) | 画像処理装置 | |
JP4814840B2 (ja) | 画像処理装置又は画像処理プログラム | |
US20130084027A1 (en) | Image processing apparatus | |
KR20180122548A (ko) | 이미지를 처리하는 방법 및 장치 | |
JP2011227578A (ja) | 画像処理装置、撮像装置、プログラム及び画像処理方法 | |
CN111510691A (zh) | 颜色插值方法及装置、设备、存储介质 | |
CN109472752B (zh) | 基于航拍图像的多曝光融合系统 | |
CN111861888A (zh) | 图像处理方法、装置、电子设备及存储介质 | |
US20160005158A1 (en) | Image processing device and image processing method | |
JP5566199B2 (ja) | 画像処理装置およびその制御方法、並びにプログラム | |
US8472756B2 (en) | Method for producing high resolution image | |
CN111815511A (zh) | 一种全景图像拼接方法 | |
CN109325909B (zh) | 一种图像放大方法和图像放大装置 | |
CN113935934A (zh) | 图像处理方法、装置、电子设备和计算机可读存储介质 | |
JP2009100407A (ja) | 画像処理装置及びその方法 | |
JP5928465B2 (ja) | 劣化復元システム、劣化復元方法およびプログラム | |
JP5587322B2 (ja) | 画像処理装置、画像処理方法、及び画像処理プログラム | |
JPH0918685A (ja) | 画像合成方法 | |
JP2009065283A (ja) | 画像ぶれ補正装置 | |
CN106023127B (zh) | 一种基于多帧的鱼眼视频校正方法 | |
KR102480209B1 (ko) | 다중 카메라의 영상 합성 방법 및 다중 카메라의 영상 합성 장치 | |
JP2023132342A (ja) | 画像処理装置、画像処理方法及びプログラム | |
TW202338734A (zh) | 用於處理影像資料的方法及影像處理器單元 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201080008698.1 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10743589 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011500530 Country of ref document: JP Ref document number: 2010743589 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13201629 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |