WO2005122083A1 - Imaging Apparatus and Image Resolution Enhancement Method - Google Patents
- Publication number
- WO2005122083A1 (PCT/JP2005/010992; application JP2005010992W)
- Authority
- WO
- WIPO (PCT)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4061—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution by injecting details from different spectral ranges
Definitions
- the present invention relates to an imaging apparatus and an image resolution increasing method for efficiently acquiring a high resolution image from a plurality of low resolution images.
- an imaging method has been proposed that generates a high-resolution image by synthesizing images of a plurality of frames that are misaligned with one another.
- the displacement between the low-resolution images is detected with an accuracy of less than a pixel unit (in the specification of the present invention, this is sometimes referred to as a subpixel).
- the present invention has been made in view of the above-described problems, and has as its object to provide an imaging device and an image resolution increasing method that calculate the amount of displacement between images (hereinafter referred to as motion) by separating the images into bands, and thereby perform high-resolution processing efficiently.
Disclosure of the Invention
- (1) An imaging apparatus that electronically obtains an image of a subject, comprising: optical imaging means for forming an image of the subject; means for spatially discretizing the optically formed image and converting it into a sampled image signal; means for separating the sampled image signal into image signals of a plurality of components according to a spatial frequency; means for performing interpolation enlargement processing on the low-frequency component image separated by the spatial frequency; means for estimating the relative displacement of the subject between frames; means for selecting, from a plurality of frames, a frame on which high-resolution image estimation processing is to be performed; high-resolution image estimating means for estimating a high-resolution image from the high-frequency component images separated from the image signals of the plurality of frames; and means for synthesizing the interpolation-enlarged image and the image subjected to the high-resolution image estimation processing.
- the invention of (1) corresponds to the first embodiment shown in FIG. 1.
- the “optical imaging means for forming an image of a subject” corresponds to the optical system 101.
- the “means for spatially discretizing an optically formed image and converting it into a sampled image signal” corresponds to the imager 102.
- the “means for separating a sampled image signal into image signals of a plurality of components by a spatial frequency” corresponds to the band separation processing unit 105.
- the “means for performing the interpolation enlargement processing on the low-frequency component image separated by the spatial frequency” corresponds to the interpolation enlargement processing unit 109.
- the “means for estimating the relative displacement of the subject between the frames” corresponds to the motion estimation unit 107.
- the “means for selecting a frame for performing high-resolution image estimation processing from a plurality of frames” corresponds to the super-resolution target frame selection unit 106.
- the high-resolution image estimating unit 108 corresponds to “high-resolution image estimating means for estimating a high-resolution image from high-frequency component images respectively separated from image signals of a plurality of frames”.
- the “means for synthesizing the interpolation-enlarged image and the image subjected to the high-resolution image estimation processing” corresponds to the synthesis operation processing unit 110.
- the high-resolution image estimating means operates on the image signal that has passed through the means for separating the sampled image signal into image signals of a plurality of components according to the spatial frequency. For this reason, the computationally heavy high-resolution image estimation processing need not be performed on all image data; the amount of calculation is reduced and the processing can be sped up.
- the sampled image signal is input to a means for estimating a relative displacement of a subject between the frames.
- the invention of (2) corresponds to a modification of the first embodiment shown in FIG. 7. That is, as shown in FIG. 7, the image signal sampled by the imager 102 is input to the means for estimating the relative displacement of the subject between frames (motion estimation unit 107) before band separation by the band separation processing unit 105. Therefore, the relative displacement of the subject between frames can be estimated from all of the high-frequency and low-frequency components of the image signal sampled by the imager 102, and the accuracy of the displacement estimation can be improved.
- the means for estimating the relative displacement amount of the subject between frames estimates the relative displacement of the subject between frames using the image signal of at least one of the plurality of components into which the image signal has been separated.
- the invention of (3) corresponds to the first embodiment shown in FIG. 1.
- at least one of the image signals separated into a plurality of components by the band separation processing unit 105 is input to the motion estimation unit 107, which estimates the relative displacement amount of the subject between frames.
- an image signal of an appropriate component among the image signals separated into a plurality of components by the band separation processing unit 105 can be input to the motion estimation unit 107 to estimate the relative displacement of the subject between frames.
- (4) An imaging apparatus for electronically obtaining an image of a subject, comprising: optical imaging means for forming an image of the subject; high-resolution image estimating means for estimating a high-resolution image from the image signals of a plurality of frames; image information determining means for determining image information by referring to at least one image signal among the image signals of the plurality of components separated by the spatial frequency; and means for setting an area in the image using the image information determined by the image information determining means, wherein the high-resolution image estimating means estimates a high-resolution image using the information on the area in the image.
- the invention of (4) corresponds to the second embodiment shown in FIG. 14.
- the “optical imaging means for forming an image of a subject” corresponds to the optical system 101.
- the “means for spatially discretizing an optically formed image and converting it into a sampled image signal” corresponds to the imager 102.
- the “means for separating a sampled image signal into image signals of a plurality of components by a spatial frequency” corresponds to the band separation processing unit 105.
- the “means for estimating the relative displacement of the subject between the frames” corresponds to the motion estimation unit 107.
- the “image storage means for temporarily storing image signals” corresponds to the memory unit 113.
- the “means for selecting a frame for performing high-resolution image estimation processing from a plurality of frames” corresponds to the super-resolution target frame selection unit 106.
- the “high-resolution image estimating unit that estimates a high-resolution image from high-frequency components of image signals of a plurality of frames” corresponds to the high-resolution image estimating unit 108.
- the “image information discriminating means for discriminating image information by referring to at least one image signal among the image signals of the plurality of components separated by the spatial frequency” and the “means for setting a region in an image using the information of the image determined by the image information discriminating means” correspond to the processing region determination unit 114.
- compared with the invention of (1), the means for performing interpolation enlargement processing on the low-frequency component of the image separated by the spatial frequency, the means for determining from the image signal the region in which high-resolution processing is performed, and the means for synthesizing the interpolation-enlarged image and the image subjected to the high-resolution image estimation processing become unnecessary, so the scale of the processing can be reduced.
- the invention of (4) is characterized in that the image information discriminating means is means for extracting a high-frequency component of an image.
- the processing area determining unit 114, corresponding to the “image information determining means”, determines information using only the high-frequency component of a field image separated into high-frequency and low-frequency components. According to this configuration, motion estimation can be performed using only a part of the image that contains many high-frequency components, and the high-resolution image estimation calculation can be performed by treating that motion as the motion of the entire image.
- the processing area determining unit 114 corresponds to “image information determining means that refers to luminance information of at least one image signal among image signals of a plurality of components separated by the spatial frequency”. According to this configuration, it is possible to determine and cut out a region having a large number of high-frequency components from the luminance information, and send it to the motion estimating unit 107.
- the method for increasing the resolution of an image according to the first embodiment of the present invention is a method for increasing the resolution of a sampled image signal, comprising the steps of: separating the sampled image signal into image signals of a plurality of components according to a spatial frequency; performing interpolation enlargement processing on the low-frequency component image separated by the spatial frequency; estimating the relative displacement amount between frames by displacement amount estimating means; selecting, from a plurality of frames, a frame on which high-resolution image estimation processing is to be performed; estimating a high-resolution image from the high-frequency component images separated from the image signals of the plurality of frames; and synthesizing the interpolation-enlarged image and the image subjected to the high-resolution image estimation processing.
- the invention of (7) corresponds to a method for increasing the resolution of an image according to the configuration diagram of FIG. 1.
- the “step of separating a sampled image signal into image signals of a plurality of components according to a spatial frequency” corresponds to the processing by the band separation processing unit 105.
- the “step of performing the interpolation enlargement processing on the low-frequency component image separated by the spatial frequency” corresponds to the processing by the interpolation enlargement processing unit 109.
- the “step of estimating the relative displacement between frames by the displacement estimating means” corresponds to the processing by the motion estimating unit 107.
- the “step of selecting a frame for performing high-resolution image estimation processing from a plurality of frames” corresponds to the processing by the super-resolution target frame selection unit 106.
- the “step of high-resolution image estimation for estimating a high-resolution image from high-frequency component images separated from image signals of a plurality of frames” corresponds to the processing by the high-resolution image estimating unit 108.
- the “step of synthesizing the interpolated and enlarged image and the image subjected to the high-resolution image estimation processing” corresponds to the processing by the synthesis operation processing unit 110.
- the sampled image signal is input to displacement amount estimating means for estimating a relative displacement amount of a subject between the frames.
- the invention of (8) corresponds to a method for increasing the resolution of an image according to the modification of the first embodiment shown in FIG. 7. According to this configuration, the accuracy of the displacement estimation can be improved when performing the high-resolution image estimation processing by software.
- the step of estimating the relative displacement of the subject between frames is characterized by estimating the relative displacement of the subject between frames using the image signal of at least one of the plurality of components into which the image signal has been separated.
- the invention of (9) corresponds to the method of increasing the resolution of an image according to the configuration diagram of FIG. 1. According to this configuration, when performing high-resolution image estimation processing by software, the relative displacement between frames can be estimated using an image signal of an appropriate component among the image signals separated into a plurality of components.
- the method for increasing the resolution of an image according to the second embodiment of the present invention is a method for increasing the resolution of a sampled image signal, comprising the steps of: separating the sampled image signal into image signals of a plurality of components according to a spatial frequency; estimating the relative displacement between frames; temporarily storing the image signal in image storage means; selecting, from a plurality of frames, a frame on which high-resolution image estimation processing is to be performed; estimating a high-resolution image from the image signals of the plurality of frames; determining image information by referring, with determining means, to at least one image signal among the image signals of the plurality of components separated by the spatial frequency; and setting an area in the image using the determined information, wherein the high-resolution image estimation step estimates a high-resolution image using the information on the area in the image.
- the invention of (10) corresponds to a method for increasing the resolution of an image according to the configuration diagram of the second embodiment shown in FIG. 14.
- the “step of separating the sampled image signal into image signals of a plurality of components according to the spatial frequency” corresponds to the processing by the band separation processing unit 105.
- the “step of estimating the relative displacement between frames” corresponds to the processing by the motion estimating unit 107.
- the “step of temporarily storing the image signal in the image storage unit” corresponds to the processing by the memory unit 113.
- the “step of selecting a frame to be subjected to high-resolution image estimation processing from a plurality of frames” corresponds to the processing by the super-resolution target frame selection unit 106.
- the “step of high-resolution image estimation for estimating a high-resolution image from image signals of a plurality of frames” corresponds to the processing by the high-resolution image estimating unit 108.
- a step of referring to at least one image signal among the image signals of a plurality of components separated by the spatial frequency and discriminating image information by a discriminating means corresponds to the processing by the processing area determination unit 114.
- the processing speed when the resolution of an image is increased by software can be further increased.
- the invention of (10) is characterized in that, in the step of determining information of the image, a high-frequency component of the image is extracted. According to this configuration, when performing high-resolution image estimation processing by software, high-resolution image estimation calculation can be performed by motion estimation using only a part of the image that includes many high-frequency components. .
- the step of determining the information of the image is characterized by referring to the luminance information of at least one image signal among the image signals of the plurality of components separated by the spatial frequency. This corresponds to the processing by the processing area determination unit 114.
- FIG. 1 is a configuration diagram of the first embodiment.
- FIG. 2 is a configuration diagram of the band processing unit.
- FIG. 3 is an explanatory diagram showing an example of an image before band separation.
- FIG. 4 is an explanatory diagram of an example in which low-pass filter processing is performed on the image of FIG. 3.
- FIG. 5 is an explanatory diagram of an example in which the processing of the bias addition processing unit 1052 and the difference calculation processing unit 1053 is performed on the image of FIG. 3.
- FIG. 6 is a characteristic diagram showing a histogram of the gradation of the image of FIG. 5.
- FIG. 7 is a configuration diagram of a modified example of the first embodiment of the present invention.
- FIG. 8 is a flowchart of the motion estimation algorithm.
- FIG. 9 is a conceptual diagram showing the estimation of the optimal similarity in the motion estimation.
- FIG. 10 is a conceptual diagram of the high-resolution image calculation area determination.
- FIG. 11 is a flowchart of the high-resolution image estimation processing algorithm.
- FIG. 12 is a configuration diagram of the high-resolution image estimation calculation unit.
- FIG. 13 is a configuration diagram of the synthesis operation processing unit.
- FIG. 14 is a configuration diagram of the second embodiment.
BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a configuration diagram of the first embodiment.
- the optical system 101 forms an optical image on the imager 102, and the imager 102 converts the optically formed image into a sampled image signal by spatially discretizing the image.
- the image signal sampled by the imager 102 is separated into a high-frequency component image and a low-frequency component image by a band separation processing unit 105 according to a spatial frequency.
- the super-resolution processing is performed by a motion estimating unit 107 and a high-resolution image estimating unit 108 that estimates image data of a high-resolution pixel array.
- the high-frequency component image is transmitted to motion estimating section 107 for performing super-resolution processing.
- in the super-resolution processing, a plurality of images with misregistration at the sub-pixel level are taken, and these images are combined, after canceling factors such as the deterioration caused by the optical system, to form a single high-resolution image.
- the super-resolution target frame selection unit 106 selects a target frame for performing the super-resolution processing. From the low-frequency component images separated by the band separation processing unit 105, a frame corresponding to a target frame on which super-resolution processing is to be performed is selected and transmitted to the interpolation enlargement processing unit 109.
- the interpolation enlargement processing unit 109 performs interpolation processing using, for example, bicubic interpolation, and enlarges the low-frequency component image of the target frame.
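As a rough illustration of the bicubic enlargement described here, the following sketch upscales an image with the Keys cubic convolution kernel (a = -0.5); the function names and the integer scale factor are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Keys bicubic convolution kernel with a = -0.5."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_upscale(img, s):
    """Enlarge a 2-D image by an integer factor s using 4x4 cubic taps."""
    h, w = img.shape
    out = np.zeros((h * s, w * s))
    for i in range(h * s):
        for j in range(w * s):
            # map the output pixel centre back into input coordinates
            y = (i + 0.5) / s - 0.5
            x = (j + 0.5) / s - 0.5
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            v = 0.0
            for m in range(-1, 3):
                for n in range(-1, 3):
                    wy = cubic_kernel(y - (y0 + m))
                    wx = cubic_kernel(x - (x0 + n))
                    yy = min(max(y0 + m, 0), h - 1)  # clamp at borders
                    xx = min(max(x0 + n, 0), w - 1)
                    v += wy * wx * img[yy, xx]
            out[i, j] = v
    return out
```

Because the Keys kernel weights sum to 1 at every fractional offset, a constant image upscales to the same constant, which is a quick sanity check for an implementation.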
- the high-resolution image calculation area determination unit 112 determines the region in the image where the high-resolution image estimation calculation is performed, from the high-frequency component image output from the band separation processing unit 105 and the super-resolution target frame information.
- the high-resolution image estimation calculation unit 108 performs the high-resolution image estimation calculation from the image data of a plurality of frames, using the motion information for each frame provided from the motion estimation unit 107 and the calculation designation information for each region provided from the high-resolution image estimation calculation area determination unit 112. As a result, the high-resolution image estimation calculation is performed only in the regions having high-frequency components.
- the configuration of the synthesis operation processing unit 110 will be described later with reference to FIG. 13. In the configuration shown in FIG. 1, the optical system 101 forms an optical image on the imager 102, and the imager 102 converts the optically formed image into a spatially discretized image signal.
- the image signals are not limited to those obtained by the optical system 101 and the imager 102. It is also possible to adopt a configuration in which the resolution of an image is increased using a sampled image signal recorded on an appropriate recording medium or the like. In this case, the sampled image signal recorded on the recording medium or the like is input to the band separation processing unit 105 and the super-resolution target frame selection unit 106.
- the configuration of FIG. 1 can implement the invention of the method for increasing the resolution of an image.
- FIG. 2 is a configuration diagram showing an example of the configuration of the band separation processing unit 105 described in FIG.
- the image signal output from the imager 102 is converted into a low-frequency component image by the low-pass filter processing unit 1051, and the frame selected by the super-resolution target frame selection unit 106 from the low-frequency component images is output to the interpolation enlargement processing unit 109. For the high-frequency component image, a predetermined bias is added in the bias addition processing unit 1052 to the image obtained by the low-pass filter processing unit 1051, and a difference operation with the original image is performed in the difference calculation processing unit 1053.
- the bias addition processing unit 1052 performs non-negative processing for storing the high-frequency component image in a memory having a predetermined unsigned bit width.
- the bias addition processing unit 1052 receives the signal of the bias level and the signal of the low-pass filter 1051.
- the signal of the bias addition processing unit 1052 and the image signal output by the imager 102 in FIG. 1 are input to the difference calculation processing unit 1053.
- the difference calculation processing unit 1053 outputs the difference between the output signal of the imager 102 and the signal obtained by adding the bias signal to the output signal of the imager 102 passed through the low-pass filter processing unit 1051.
- the output signal of the difference calculation processing unit 1053 is input to the motion estimation unit 107 in FIG. 1 as a high-frequency component image.
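A minimal numpy sketch of this separation pipeline, assuming a 3x3 box filter as the low-pass stage and a bias of 128; the kernel choice and function name are our assumptions, since the patent does not specify the filter.

```python
import numpy as np

def band_separate(img, bias=128.0):
    """Split an image into a low-frequency component and a
    bias-shifted high-frequency component (assumed 3x3 box low-pass)."""
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # 3x3 box blur built from shifted slices of the padded image
    low = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # difference with the original image, shifted by the bias so the
    # high-frequency component stays non-negative
    high = img - low + bias
    return low, high
```

The original image is recovered exactly as `low + (high - bias)`, which is what allows the synthesis stage to recombine the two bands later.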
- FIGS. 3 to 5 are explanatory diagrams showing examples of images in which band separation processing has been performed on the output signal of the imager 102 by the band separation processing unit 105.
- FIG. 4 shows an image obtained by performing low-pass filtering on the original image signal (FIG. 3). The image of FIG. 4 is input to the interpolation enlargement processing unit 109 of FIG. 1.
- FIG. 5 shows a high-frequency component image obtained as a result of the processing performed by the bias addition processing 1052 and the difference calculation processing unit 1053 in FIG.
- a histogram of the gradation of the image of FIG. 5 is shown in the characteristic diagram of FIG.
- the horizontal axis represents an 8-bit image signal.
- the left vertical axis expresses the frequency of the difference values in %.
- the right vertical axis shows the accumulated pixel frequency (absolute value).
- a peak value of the difference frequency is formed at 128 around the center of the 8-bit image signal.
- 99.6% of the pixels are included in the 64 gradations between 96 and 160, and high-frequency component images can be represented by 6 bits.
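If, as the histogram indicates, nearly all high-frequency values fall within the 64 gradations from 96 to 160, each pixel can be stored in 6 bits after re-origining to that range; a sketch (the clipping bounds follow the figure, the function name is ours):

```python
import numpy as np

def pack_high_6bit(high, lo=96, hi=160):
    """Clip a bias-centred high-frequency image to [lo, hi) and
    re-origin it so each pixel fits in 6 bits (0..63)."""
    clipped = np.clip(high, lo, hi - 1)
    return (clipped - lo).astype(np.uint8)
```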
- FIG. 7 is a configuration diagram showing a modification of the first embodiment. Only the differences from Fig. 1 will be explained.
- the signal of the imager 102 is directly input to the motion estimation unit 107. That is, for motion estimation, the original image signal, obtained by spatially discretizing and sampling the image optically formed on the imager 102, may be used without band separation, as shown in FIG. 7.
- the accuracy of the displacement estimation can be improved.
- FIG. 8 is a flowchart showing details of the motion estimation algorithm.
- S1: Start motion estimation.
- S2: The base image is deformed with multiple motions.
- S3: One reference image for motion estimation against the base image is read.
- S4: The similarity values between the reference image and the image sequence obtained by deforming the base image are calculated.
- S5: A discrete similarity map is created from the relationship between the parameters of the deformation motions and the calculated similarity values.
- S6: The extremum of the similarity map is obtained by interpolating the discrete similarity map created in S5 and searching for its extremum. The deformation motion giving the extremum is the estimated motion. Parabolic fitting, spline interpolation, and the like are used to search for the extremum of the similarity map.
- S7: It is determined whether motion estimation has been performed for all target reference images.
- S8: If motion estimation has not been completed, the process returns to S3 and the next image is read. If motion estimation has been performed on all target reference images, the processing program ends.
- FIG. 9 is a conceptual diagram showing the estimation of the optimal similarity in the motion estimation performed by the motion estimation unit 107 of FIG. 1. FIG. 9 shows an example in which motion estimation is performed by parabolic fitting using the three points drawn as black circles.
- the vertical axis represents the similarity
- the horizontal axis represents the deformation motion parameters. The smaller the value on the vertical axis, the higher the similarity.
- the gray circle with the minimum value on the vertical axis is the extreme value of the similarity.
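The parabolic fitting of FIG. 9 has a closed form: given the similarity at three integer displacements, the vertex of the fitted parabola gives the sub-pixel motion. A sketch (variable names are ours):

```python
def parabolic_subpixel(s_m1, s_0, s_p1):
    """Sub-pixel offset of the extremum of the parabola through the
    similarity values at displacements -1, 0 and +1."""
    denom = s_m1 - 2.0 * s_0 + s_p1
    if denom == 0.0:
        return 0.0  # flat similarity map: no refinement possible
    return 0.5 * (s_m1 - s_p1) / denom
```

For a similarity curve that really is a parabola the recovered offset is exact, which is why three samples suffice near the extremum.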
- FIG. 10 is a conceptual diagram illustrating a processing example of the high-resolution image calculation area determination unit 112 in FIG.
- (a) shows the high-frequency component image output from the band separation processing unit 105
- (b) shows the image obtained from the super-resolution target frame selection unit 106.
- the high-resolution image calculation area determination unit 112 determines the high-resolution image estimation calculation area in the image from the area having the high-frequency components of (b), and generates from this area the pixel information shown at level “1” in (c). By this process, the high-resolution image estimation calculation is performed only in the regions having high-frequency components.
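A sketch of such a region decision, assuming the image is tiled into blocks and a block is marked for estimation when its mean deviation from the bias exceeds a threshold; the block size and threshold are illustrative, not values from the patent.

```python
import numpy as np

def hf_region_mask(high, bias=128.0, thresh=4.0, block=8):
    """Per-block mask: True where the bias-centred high-frequency
    image carries enough energy to warrant super-resolution."""
    dev = np.abs(high - bias)
    h, w = dev.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for bi in range(h // block):
        for bj in range(w // block):
            tile = dev[bi * block:(bi + 1) * block,
                       bj * block:(bj + 1) * block]
            mask[bi, bj] = tile.mean() > thresh
    return mask
```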
- FIG. 11 is a flowchart showing an algorithm of the high-resolution image estimation processing.
- Start the processing program.
- S11: Read n low-resolution images for use in the high-resolution image estimation (n ≥ 1).
- S12: An initial high-resolution image is created by taking an arbitrary one of the plurality of low-resolution images as the target frame and performing interpolation processing. This step can optionally be omitted.
- S13 The positional relationship between images is clarified by the motion between the target frame and the images of other frames, which is obtained in advance by some motion estimation method.
- S14: Obtain a point spread function (PSF: Point Spread Function) that takes into account the optical transfer function (OTF) and imaging characteristics such as the CCD aperture. For the PSF, a Gaussian function is used, for example.
- S15: Minimize the evaluation function f(z) based on the information obtained in S13 and S14.
- f(z) is expressed by the following equation:
  f(z) = ||y − Az||^2 + λg(z)
- y is a low-resolution image
- z is a high-resolution image
- A is an image conversion matrix representing an imaging system including motion between images, PSF, and the like.
- g(z) contains constraints such as the smoothness of the image and color correlation, and λ is a weighting factor. For example, the steepest descent method is used to minimize the evaluation function.
- S16 When f (z) obtained in S15 is minimized, the processing ends and a high-resolution image z is obtained.
- S17: If f(z) has not yet been minimized, the high-resolution image z is updated and the processing returns to S13.
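The S15–S17 loop can be sketched as a steepest-descent iteration. The step size, iteration limit, convergence test, and the omission of the prior term g(z) are simplifying assumptions:

```python
import numpy as np

def steepest_descent_sr(ys, As, z0, step=0.1, iters=200, tol=1e-10):
    """Minimize f(z) = sum_k ||y_k - A_k z||^2 by steepest descent, stopping
    when the decrease of f falls below tol (a sketch of the S15-S17 loop;
    the prior term g(z) is omitted for brevity)."""
    z = z0.astype(float).copy()
    prev = np.inf
    for _ in range(iters):
        # Gradient of the data term: -2 * A^T (y - A z), summed over frames.
        grad = sum(-2.0 * A.T @ (y - A @ z) for y, A in zip(ys, As))
        z -= step * grad                              # update z (S17)
        f = sum(np.sum((y - A @ z) ** 2) for y, A in zip(ys, As))
        if prev - f < tol:                            # minimized: stop (S16)
            break
        prev = f
    return z
```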
- FIG. 12 is a configuration diagram of the high-resolution image estimation calculation unit 108 shown in FIG.
- The high-resolution image estimation calculation unit 108 comprises an initial image creation unit 1201, a convolution integration unit 1202, a PSF data holding unit 1203, an image comparison unit 1204, a multiplication unit 1205, a combination addition unit 1206, an accumulation addition unit 1207, an update image generation unit 1208, an image storage unit 1209, an iterative operation determination unit 1210, and an iterative determination value holding unit 1211.
- the portion surrounded by a broken line in FIG. 12 is a minimization processing unit 1212 corresponding to the configuration for performing the process of minimizing the evaluation function f (z) described in S15 of FIG.
- The PSF data is point spread function data.
- High-frequency image information of the target frame is provided to the initial image creation unit 1201 from the high-resolution image estimation calculation region determination unit 112 of FIG. 1; the image information given here is interpolated and enlarged to become the initial image.
- This initial image is supplied to the convolution integration unit 1202 and convolved with the PSF data supplied from the PSF data holding unit 1203.
- the PSF data here is given in consideration of the motion of each frame.
- At the same time, the initial image data is sent to and stored in the image storage unit 1209.
- The image data convolved by the convolution integration unit 1202 is sent to the image comparison unit 1204, where it is compared with the captured image at the appropriate coordinate position, based on the motion of each frame obtained by the motion estimation unit and on the region given by the high-resolution image estimation calculation area determination unit 112.
- The residuals obtained by the image comparison unit 1204 are sent to the multiplication unit 1205 and multiplied by the value of each pixel of the PSF data supplied from the PSF data holding unit 1203.
- The result of this calculation is sent to the combination addition unit 1206 and arranged at the corresponding coordinate position.
- Because the image data from the multiplication unit 1205 overlaps, with the coordinate positions shifting little by little, the overlapping portions are added by the combination addition unit 1206.
- The data is then sent to the accumulation addition unit 1207.
- Data is sent sequentially to the accumulation addition unit 1207 until the processing for all frames is completed.
- The image data of each frame is added sequentially according to the estimated motion.
- The image data added by the accumulation addition unit 1207 is sent to the update image generation unit 1208.
- The image data stored in the image storage unit 1209 is also given to the update image generation unit 1208, and the two sets of image data are weighted and added to generate the updated image data.
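The data path through units 1202–1208 amounts to one iterative-back-projection update per pass. A 1-D sketch, modeling frame motion as integer shifts; the shift model, the weighting factor `beta`, and the use of the reversed PSF for back-projection are illustrative assumptions:

```python
import numpy as np

def backproject_step(z, ys, shifts, psf, beta=0.2):
    """One back-projection update (1-D sketch of units 1202-1208): convolve
    the estimate with the PSF per frame, compare with the captured frame,
    project the residual back through the PSF, accumulate over frames, then
    weighted-add to the stored estimate."""
    accum = np.zeros_like(z)
    for y, s in zip(ys, shifts):
        sim = np.convolve(np.roll(z, -s), psf, mode='same')  # simulate capture (1202)
        resid = y - sim                                      # image comparison (1204)
        back = np.convolve(resid, psf[::-1], mode='same')    # multiply by PSF (1205/1206)
        accum += np.roll(back, s)                            # accumulate per frame (1207)
    return z + beta * accum                                  # update image (1208)
```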
- The generated updated image data is provided to the iterative operation determination unit 1210, which determines whether to repeat the operation based on the iterative determination value provided by the iterative determination value holding unit 1211.
- If the operation is to be repeated, the data is sent to the convolution integration unit 1202 and the above series of processing is repeated.
- Otherwise, the generated image data is output. Through this series of processing, the image output from the iterative operation determination unit 1210 has a higher resolution than the captured image.
- The motion of each frame is calculated by the motion estimation unit 107 in FIG. 1 and provided to the high-resolution image estimation calculation unit 108.
- FIG. 13 is a configuration diagram showing the configuration of the synthesis operation processing unit 110 of FIG.
- the estimated high-resolution image information is supplied from the high-resolution image estimation calculation unit 108, and the interpolation-enlarged image information is supplied from the interpolation enlargement processing unit 109 to the synthesis operation processing unit 110.
- the bias level added at the time of band separation in FIG. 2 is subtracted from the high-resolution image given to the synthesis operation processing unit 110.
- The high-resolution image from which the bias level has been subtracted is added to the low-frequency image at the corresponding coordinate positions, so that an image in which only the high-frequency portions, such as edges, have been given high resolution is synthesized.
- Such a composite image is output from the synthesis operation processing unit 110.
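The synthesis step can be sketched directly, assuming both inputs are already at the same resolution and `bias` is the level added at band separation:

```python
import numpy as np

def synthesize(high_res_high_band, low_freq, bias):
    """Combine the estimated high-frequency high-resolution image with the
    interpolation-enlarged low-frequency image: subtract the bias level added
    at band separation, then add at corresponding coordinates."""
    return (high_res_high_band - bias) + low_freq
```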
- In this way, the estimation calculation is confined to regions containing high-frequency components, and computation is reduced in regions where the amount of high-frequency components is small.
- FIG. 14 is a configuration diagram showing a second embodiment of the present invention.
- an optical system 101 forms an optical image on an imager 102, and the sampled image data is given to a band separation processing unit 105 and a memory unit 113.
- the band separation processing unit 105 separates the image into a high-frequency component image and a low-frequency component image, and supplies only the information of the high-frequency component image to the processing area determination unit 114.
- The processing region determination unit 114 detects and cuts out a region containing a large amount of high-frequency components in the image, and supplies it to the motion estimation unit 107.
- the basic algorithm of motion estimation section 107 is the same as that in the first embodiment.
- Motion estimation is performed using only a part of the image containing a large amount of high-frequency components, and the result is used as the motion of the entire image for the high-resolution image estimation.
- The processing region determination unit 114 specifies one or more high-frequency regions from the high-frequency components of the image, cuts out the information of those regions, and sends it to the motion estimation unit 107. At this time, the processing region determination unit 114 may instead calculate luminance information of the high-frequency component image, determine and cut out a region containing high-frequency components from the luminance information, and send it to the motion estimation unit 107.
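One simple way the processing region determination unit 114 might pick such a region is to tile the high-frequency image and keep the block with the largest energy; the block size and the squared-value energy measure are illustrative assumptions:

```python
import numpy as np

def select_hf_region(high_band, block):
    """Return the top-left corner of the block x block region with the
    largest high-frequency energy (an illustrative selection rule; the
    patent allows one or more regions and does not fix the measure)."""
    h, w = high_band.shape
    best, best_pos = -1.0, (0, 0)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            e = np.sum(high_band[y:y + block, x:x + block] ** 2)
            if e > best:
                best, best_pos = e, (y, x)
    return best_pos
```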
- The motion estimation data obtained from the region containing many high-frequency components is supplied to the high-resolution image estimation calculation unit 108; at the same time, the image data temporarily stored in the memory unit 113 is given to the high-resolution image estimation calculation unit 108, and the high-resolution image estimation calculation is performed. As a result, a high-resolution estimated image is generated.
- the details of the motion estimation and the high-resolution image estimation calculation are performed in the same manner as in the first embodiment.
- In this embodiment, the synthesis operation processing unit 110 becomes unnecessary, so the scale of the processing required to obtain a high-resolution image can be reduced.
- As described above, the present invention provides an imaging apparatus and an image resolution enhancement method that efficiently perform the high-resolution image estimation calculation and the motion estimation calculation necessary for it.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/628,910 US20070177027A1 (en) | 2004-06-10 | 2005-06-09 | Imaging system and process for rendering the resolution of images high |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-172093 | 2004-06-10 | ||
JP2004172093A JP4429816B2 (ja) | 2004-06-10 | 2004-06-10 | 撮像装置および画像の高解像度化方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005122083A1 true WO2005122083A1 (ja) | 2005-12-22 |
Family
ID=35503289
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/010992 WO2005122083A1 (ja) | 2004-06-10 | 2005-06-09 | 撮像装置および画像の高解像度化方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20070177027A1 (ja) |
JP (1) | JP4429816B2 (ja) |
WO (1) | WO2005122083A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8385665B2 (en) | 2006-03-20 | 2013-02-26 | Panasonic Corporation | Image processing apparatus, image processing method, program and semiconductor integrated circuit |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7483059B2 (en) * | 2004-04-30 | 2009-01-27 | Hewlett-Packard Development Company, L.P. | Systems and methods for sampling an image sensor |
JP4837615B2 (ja) * | 2006-04-11 | 2011-12-14 | Panasonic Corporation | Image processing method and image processing apparatus |
US7889264B2 (en) * | 2006-05-12 | 2011-02-15 | Ricoh Co., Ltd. | End-to-end design of superresolution electro-optic imaging systems |
JP5012805B2 (ja) * | 2006-09-14 | 2012-08-29 | Nikon Corporation | Image processing apparatus, electronic camera, and image processing program |
JP2008077501A (ja) * | 2006-09-22 | 2008-04-03 | Olympus Corp | Image processing apparatus and image processing control program |
JPWO2008102898A1 (ja) * | 2007-02-19 | 2010-05-27 | Tokyo Institute of Technology | Image quality improvement processing apparatus, image quality improvement processing method, and image quality improvement processing program |
JP4839448B2 (ja) * | 2007-03-19 | 2011-12-21 | Tokyo Institute of Technology | Image quality improvement processing method and program supporting multiple regions |
JP4814840B2 (ja) * | 2007-05-23 | 2011-11-16 | Olympus Corporation | Image processing apparatus or image processing program |
JP4834636B2 (ja) * | 2007-09-27 | 2011-12-14 | Hitachi, Ltd. | Video reproduction apparatus and video reproduction method |
JP5111088B2 (ja) * | 2007-12-14 | 2012-12-26 | Sanyo Electric Co., Ltd. | Imaging apparatus and image reproduction apparatus |
US8331714B2 (en) * | 2009-02-23 | 2012-12-11 | Sharp Laboratories Of America, Inc. | Methods and systems for image processing |
JP4991887B2 (ja) * | 2010-01-13 | 2012-08-01 | Sharp Corporation | Captured image processing system, control method therefor, program, and recording medium |
JP5790944B2 (ja) | 2010-02-26 | 2015-10-07 | NEC Corporation | Image processing method, image processing apparatus, and program |
JP5696560B2 (ja) * | 2011-03-28 | 2015-04-08 | JVC Kenwood Corporation | Signal processing apparatus, signal processing method, and signal processing program |
JP5644626B2 (ja) * | 2011-03-28 | 2014-12-24 | JVC Kenwood Corporation | Signal processing apparatus, signal processing method, and signal processing program |
JP5909865B2 (ja) * | 2011-04-18 | 2016-04-27 | Nikon Corporation | Image processing program, image processing method, image processing apparatus, and imaging apparatus |
US10346956B2 (en) * | 2015-07-22 | 2019-07-09 | Panasonic Intellectual Property Management Co., Ltd. | Image processing device |
JP6598365B2 (ja) * | 2015-10-20 | 2019-10-30 | NHK (Japan Broadcasting Corporation) | Band synthesis apparatus, band division apparatus, resolution conversion apparatus, super-resolution apparatus, and program |
CN116761019A (zh) * | 2023-08-24 | 2023-09-15 | Hanbo Semiconductor (Shanghai) Co., Ltd. | Video processing method, system, computer device, and computer-readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05260264A (ja) * | 1992-03-12 | 1993-10-08 | Sharp Corp | Image processing apparatus |
JPH0785246A (ja) * | 1993-09-10 | 1995-03-31 | Olympus Optical Co Ltd | Image synthesis apparatus |
JPH1069537A (ja) * | 1996-08-28 | 1998-03-10 | Nec Corp | Image synthesis method and image synthesis apparatus |
JP2000244814A (ja) * | 1999-02-24 | 2000-09-08 | Hitachi Ltd | Image synthesis apparatus and recording medium on which an image synthesis method is recorded |
JP2000244851A (ja) * | 1999-02-18 | 2000-09-08 | Canon Inc | Image processing apparatus, method, and computer-readable storage medium |
JP2000354244A (ja) * | 1999-06-11 | 2000-12-19 | Canon Inc | Image processing apparatus, method, and computer-readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6128416A (en) * | 1993-09-10 | 2000-10-03 | Olympus Optical Co., Ltd. | Image composing technique for optimally composing a single image from a plurality of digital images |
CA2289053C (en) * | 1998-11-10 | 2008-07-29 | Canon Kabushiki Kaisha | Image processing method and apparatus |
- 2004-06-10 JP JP2004172093A patent/JP4429816B2/ja not_active Expired - Fee Related
- 2005-06-09 US US11/628,910 patent/US20070177027A1/en not_active Abandoned
- 2005-06-09 WO PCT/JP2005/010992 patent/WO2005122083A1/ja active Application Filing
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8385665B2 (en) | 2006-03-20 | 2013-02-26 | Panasonic Corporation | Image processing apparatus, image processing method, program and semiconductor integrated circuit |
US8682089B2 (en) | 2006-03-20 | 2014-03-25 | Panasonic Corporation | Image processing apparatus, image processing method, program and semiconductor integrated circuit |
Also Published As
Publication number | Publication date |
---|---|
JP4429816B2 (ja) | 2010-03-10 |
US20070177027A1 (en) | 2007-08-02 |
JP2005352720A (ja) | 2005-12-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005122083A1 (ja) | Imaging apparatus and image resolution enhancement method | |
US7729563B2 (en) | Method and device for video image processing, calculating the similarity between video frames, and acquiring a synthesized frame by synthesizing a plurality of contiguous sampled frames | |
US7978234B2 (en) | Image acquisition apparatus, resolution enhancing method, and recording medium | |
US8896712B2 (en) | Determining and correcting for imaging device motion during an exposure | |
US8213744B2 (en) | Image processing apparatus and storage medium storing image processing program | |
EP2560375B1 (en) | Image processing device, image capture device, program, and image processing method | |
JP4151793B2 (ja) | Imaging apparatus and image resolution enhancement method | |
US20080170126A1 (en) | Method and system for image stabilization | |
JP4371457B2 (ja) | Image processing apparatus, method, and computer-readable storage medium | |
US20150334283A1 (en) | Tone Mapping For Low-Light Video Frame Enhancement | |
US20100091131A1 (en) | Imaging system and storage medium storing an imaging program | |
US20140078346A1 (en) | Imaging device and image generation method | |
WO2005122554A1 (ja) | Imaging apparatus | |
WO2007077730A1 (ja) | Imaging system and image processing program | |
JP2008077501A (ja) | Image processing apparatus and image processing control program | |
US8121429B2 (en) | Image processing apparatus, image-capturing apparatus, image processing method, and program | |
JP4857933B2 (ja) | Noise reduction method, program, apparatus, and imaging apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11628910 Country of ref document: US Ref document number: 2007177027 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 11628910 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase |